Fw: RE: MdlAddress->MappedSystemVa == 0 and MdlAddress->StartVa == 0

Thanks for your explanation, Peter and Mark.
I understand where the difference is now.

Thanks again!!

We are looking for ways to improve Windows performance, such as using
storage access more effectively. We previously implemented a "write cache"
in a SCSI miniport driver. On Windows XP the IDE storage stack has changed
to an independent port driver. We suspect that each DMA request could carry
a maximum payload of 128K, but in practice the maximum transfer we observe
is only 0x80 sectors. So I would like to write a filter driver to modify
this behavior; it may be related to the read prefetch.

----- Original Message -----
From: "Peter Wieland"
To: "NT Developers Interest List"
Sent: Friday, August 30, 2002 10:03 PM
Subject: [ntdev] RE: MdlAddress->MappedSystemVa == 0 and MdlAddress->StartVa
== 0

One of the memory manager's page-scrubbing threads likes to build MDLs
that don't have any virtual address and send them to the storage stack
once in a while. There are probably a couple more places they come
from too.

You should pretty much ignore StartVa and the MDL virtual address at the
storage driver level. Chances are you aren't running in the context of the
caller anymore, so it's useless. The storage stack carries it for legacy
reasons, but the class and port drivers don't access it directly (either
the data is transferred using DMA or the port driver calls
MmGetSystemAddressForMdlSafe on the miniport's behalf).

If you want a pointer to the data buffer, you should always call
MmGetSystemAddressForMdlSafe. It checks the MDL flags to see whether the
MDL has already been mapped into kernel VA space and, if not, calls the
routine to map it.
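A minimal sketch of that pattern (the helper name `MapTransferBuffer` is my own, not from the thread; it just wraps the documented call with the checks the post describes):

```c
#include <wdm.h>

/* Map an MDL's buffer into system VA space, tolerating MDLs that
 * carry no caller virtual address (e.g. the page-scrubber MDLs
 * mentioned above). MmGetSystemAddressForMdlSafe looks at the MDL
 * flags and maps the pages only if they are not already mapped. */
PVOID MapTransferBuffer(PMDL Mdl)
{
    if (Mdl == NULL) {
        return NULL;
    }

    /* NormalPagePriority lets the call return NULL under system-PTE
     * pressure instead of blocking, so the result must be checked. */
    return MmGetSystemAddressForMdlSafe(Mdl, NormalPagePriority);
}
```

A caller that gets NULL back should typically fail the request with STATUS_INSUFFICIENT_RESOURCES rather than dereference the pointer.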

on a SCSI note:

Srb->DataBuffer is sometimes an offset into the MDL's data buffer; the
class drivers do this rather than allocating subordinate MDLs. If you
want to map the data buffer for the SRB, you need to do something like:

PUCHAR buffer = (PUCHAR)MmGetSystemAddressForMdlSafe(mdl, NormalPagePriority);

buffer += (ULONG_PTR)Srb->DataBuffer - (ULONG_PTR)MmGetMdlVirtualAddress(mdl);

It can be a pain in the butt, but it saves us a pool allocation when we
need to break up requests.