DeviceIoControl buffer in kernel

Hi,

The buffer sent from user space using DeviceIoControl eventually reaches the kernel as a non-paged pool buffer.

However, does anybody know whether this buffer is also physically contiguous? If not, is there a way to force it to be?

Thanks.

The received buffer may or may not be physically contiguous. There is no way to force it to be physically contiguous.

–Mark Cariddi
OSR, Open Systems Resources, Inc.


No, it is not physically contiguous, and assuming you are talking about buffered I/O, it probably isn't too smart to make the buffer larger than 4 KB, since that is about the size where direct I/O starts becoming faster. AFAIK there is no way to force user-space allocations to be physically contiguous. Requiring contiguous physical memory is a bad idea even in the kernel, since physical memory fragments easily and you may not be able to get such a buffer when you need it.
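
For anyone following along, here is a minimal sketch of where that choice actually gets made: the transfer type is baked into the control code when the IOCTL is defined. The device type, function codes, and names below are hypothetical, not anything from the original poster's driver:

/* CTL_CODE and the METHOD_* constants come from wdm.h/ntddk.h in kernel
   mode, or winioctl.h in user mode. */

/* Small control data travels METHOD_BUFFERED (copied through nonpaged pool);
   a hypothetical bulk transfer travels METHOD_OUT_DIRECT, so the caller's own
   pages are locked down and described by an MDL instead of being copied. */
#define IOCTL_MYDEV_GET_CONFIG \
    CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_BUFFERED,   FILE_ANY_ACCESS)
#define IOCTL_MYDEV_READ_BULK \
    CTL_CODE(FILE_DEVICE_UNKNOWN, 0x801, METHOD_OUT_DIRECT, FILE_ANY_ACCESS)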

Don Burn (MVP, Windows DKD)
Windows Filesystem and Driver Consulting
Website: http://www.windrvr.com
Blog: http://msmvps.com/blogs/WinDrvr

Well, that’s true depending on the transfer type you specify as part of the IOCTL definition. It MIGHT be, or it might NOT be. If the IOCTL uses METHOD_BUFFERED, the buffer will be in non-paged pool. If the IOCTL uses METHOD_IN_DIRECT, then the original requestor’s data buffer will be used.
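
To make the difference concrete, here is a hedged sketch of what a WDM dispatch routine sees in each case, using the hypothetical control codes from the earlier sketch; error handling is trimmed and none of this comes from the original poster's code:

#include <ntddk.h>

NTSTATUS MyDeviceControl(PDEVICE_OBJECT DeviceObject, PIRP Irp)
{
    PIO_STACK_LOCATION sp = IoGetCurrentIrpStackLocation(Irp);
    NTSTATUS status = STATUS_INVALID_DEVICE_REQUEST;
    ULONG_PTR info = 0;

    UNREFERENCED_PARAMETER(DeviceObject);

    switch (sp->Parameters.DeviceIoControl.IoControlCode) {

    case IOCTL_MYDEV_GET_CONFIG:    /* METHOD_BUFFERED */
    {
        /* One nonpaged-pool allocation, shared for input and output:
           virtually contiguous, but with no promise of physical contiguity. */
        PVOID sysBuf = Irp->AssociatedIrp.SystemBuffer;
        ULONG outLen = sp->Parameters.DeviceIoControl.OutputBufferLength;

        RtlZeroMemory(sysBuf, outLen);      /* placeholder for real work */
        info = outLen;
        status = STATUS_SUCCESS;
        break;
    }

    case IOCTL_MYDEV_READ_BULK:     /* METHOD_OUT_DIRECT */
    {
        /* The requestor's own data buffer, locked down and described by
           an MDL; no intermediate copy is made. */
        PMDL mdl = Irp->MdlAddress;     /* NULL if no buffer was supplied */
        PVOID va;

        if (mdl == NULL) {
            status = STATUS_INVALID_PARAMETER;
            break;
        }
        va = MmGetSystemAddressForMdlSafe(mdl, NormalPagePriority);
        if (va == NULL) {
            status = STATUS_INSUFFICIENT_RESOURCES;
            break;
        }
        /* ... touch va directly, or hand the MDL to the DMA API ... */
        info = MmGetMdlByteCount(mdl);
        status = STATUS_SUCCESS;
        break;
    }
    }

    Irp->IoStatus.Status = status;
    Irp->IoStatus.Information = info;
    IoCompleteRequest(Irp, IO_NO_INCREMENT);
    return status;
}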

IF the IOCTL uses METHOD_BUFFERED, the buffer may or may not be physically contiguous, as Mark said.

But the REAL question is: Why would you care? It’s hard to think of a legitimate reason you’d want to know if the buffer used in a METHOD_BUFFERED IOCTL is physically contiguous. Perhaps you’ll tell us what it is you’re trying to do, and why you care about physical contiguity?

Hmmmm… I’m afraid I have to differ with Mr. Burn here, at least slightly. The 1 page trade-off certainly WAS correct in the past. THESE days, however, with much faster CPU speeds, it’s hard to really know WHAT the trade-off is between METHOD_BUFFERED and METHOD_IN_DIRECT. And the trade-off is, in fact, a rather complex one: METHOD_BUFFERED impacts CPU time in the context of the thread, whereas METHOD_xxx_DIRECT impacts overall system performance in terms of TLB invalidations. So, like so many things… when it comes to deciding what the maximum appropriate size is for an IOCTL buffer before moving from buffered to direct, the only real answer is “it depends.”

Peter
OSR

Why would METHOD_xxx_DIRECT impact system performance in terms of TLB invalidations? Only when you don't do DMA and instead map the buffer into system space? When the buffer is mapped, no invalidation is necessary; it is only needed when the buffer is unmapped, and that is done (in the latest kernels, at least) lazily, in batches.

Yes… which would be the only time you’d be comparing Direct with Buffered, right?

Yup. But it still has to be done, right?

Peter
OSR

The MmAllocateContiguousMemory routine allocates a range of physically contiguous, nonpaged memory and maps it to the system address space.

PVOID
MmAllocateContiguousMemory(
__in SIZE_T NumberOfBytes,
__in PHYSICAL_ADDRESS HighestAcceptableAddress
);
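
Just so readers can see what that prototype implies in practice, here is a minimal usage sketch; note that later replies in this thread explain why calling this routine directly is usually the wrong approach for DMA buffers, so treat it purely as illustration:

PHYSICAL_ADDRESS highest;
PVOID buffer;

highest.QuadPart = -1;      /* all bits set: no upper limit on the physical address */

/* Typically done once at driver initialization, while physical memory
   is still relatively unfragmented. */
buffer = MmAllocateContiguousMemory(64 * 1024, highest);
if (buffer != NULL) {
    /* ... buffer is nonpaged and physically contiguous ... */
    MmFreeContiguousMemory(buffer);
}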

OK… but sorry… why is that relevant to this thread? I must be missing your point.

Peter
OSR

>However, does anybody know if this buffer is also physically contiguous ?

No.

The only physically contiguous buffers are the ones allocated by ->AllocateCommonBuffer or HalAllocateCommonBuffer. And, if there is an IOMMU, even these buffers are not physically contiguous; they are contiguous in bus address space.

Windows is built around the strong assumption that "DMA is the only need for physically contiguous buffers".

There are the MmAllocateContiguousMemory and MmAllocateNonCachedMemory calls, but they are only intended for DMA adapter object writers to implement ->AllocateCommonBuffer. Using them in your own code to implement a DMA common buffer is the wrong idea: you don't know the correct requirements, and thus the correct parameter values for the Mm call; only the adapter object (usually owned by pci.sys) knows them. So, if you use the Mm functions directly, your code will probably break depending on whether an IOMMU is present.
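
Here is a hedged sketch of that adapter-object route, as it might appear in AddDevice or StartDevice code for a generic PCI bus-master device; every DEVICE_DESCRIPTION value and the PhysicalDeviceObject variable are placeholders for whatever your hardware and driver actually provide:

DEVICE_DESCRIPTION desc;
PDMA_ADAPTER adapter;
ULONG mapRegisters;
PHYSICAL_ADDRESS logical;       /* the address the DEVICE will see */
PVOID commonBuffer;

RtlZeroMemory(&desc, sizeof(desc));
desc.Version = DEVICE_DESCRIPTION_VERSION;
desc.Master = TRUE;                     /* placeholder: bus-master DMA */
desc.ScatterGather = TRUE;
desc.Dma64BitAddresses = TRUE;
desc.InterfaceType = PCIBus;
desc.MaximumLength = 64 * 1024;

adapter = IoGetDmaAdapter(PhysicalDeviceObject, &desc, &mapRegisters);
if (adapter != NULL) {
    commonBuffer = adapter->DmaOperations->AllocateCommonBuffer(
                       adapter, 64 * 1024, &logical, TRUE /* cached */);
    /* commonBuffer is the kernel virtual address; 'logical' is whatever the
       adapter object decided the device should use, which may or may not
       equal the raw physical address when an IOMMU is involved. */
}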

Hardware IOMMUs are rare (AGP GART is perhaps the only widely used implementation on ordinary PCs), but there have been rumours that, as part of Microsoft's major "Windows must be a good guest OS" effort, IOMMU support has probably been added to Windows for the hardware-less VM case, to simplify the creation of emulated hardware and its DMA engine in the guest, since the guest OS is assumed to support an IOMMU.

So, use the DMA APIs: IoMapTransfer, ->GetScatterGatherList, and so on. They will map any MDL to DMA space and give you the SGL; you just need to convert it to your hardware's format and submit it.
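
And a hedged sketch of that scatter/gather path, reusing an adapter obtained as in the previous fragment; the callback name, the transfer direction, and the choice to pass the IRP as the context are illustrative, not anything mandated by the API:

/* DRIVER_LIST_CONTROL callback: the system calls this once the SGL is built. */
VOID MyListControl(PDEVICE_OBJECT DeviceObject, PIRP Irp,
                   PSCATTER_GATHER_LIST Sgl, PVOID Context)
{
    ULONG i;

    UNREFERENCED_PARAMETER(DeviceObject);
    UNREFERENCED_PARAMETER(Irp);
    UNREFERENCED_PARAMETER(Context);

    for (i = 0; i < Sgl->NumberOfElements; i++) {
        /* Sgl->Elements[i].Address is the device-visible address and
           Sgl->Elements[i].Length the byte count; convert each pair into
           your hardware's descriptor format and start the transfer. */
    }
    /* After the hardware finishes, call PutScatterGatherList and
       complete the request. */
}

/* Builds and maps the SGL for an arbitrary MDL; FALSE = device-to-memory. */
NTSTATUS StartDmaTransfer(PDMA_ADAPTER Adapter, PDEVICE_OBJECT DeviceObject,
                          PIRP Irp, PMDL Mdl)
{
    return Adapter->DmaOperations->GetScatterGatherList(
               Adapter,
               DeviceObject,
               Mdl,
               MmGetMdlVirtualAddress(Mdl),
               MmGetMdlByteCount(Mdl),
               MyListControl,
               Irp,             /* Context handed back to the callback */
               FALSE);
}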


Maxim S. Shatskih
Windows DDK MVP
xxxxx@storagecraft.com
http://www.storagecraft.com

This is a very old thread, but I found some information that might be helpful. The Microsoft documentation says that when buffered I/O is used, the I/O manager allocates contiguous memory from non-paged pool, but that using buffered I/O for large transfers is not recommended, since such a large allocation may fail:

https://learn.microsoft.com/en-us/windows-hardware/drivers/kernel/using-buffered-i-o :

Drivers that transfer large amounts of data at a time, in particular, drivers that do multipage transfers, should not attempt to use buffered I/O. As the system runs, nonpaged pool can become fragmented so that the I/O manager cannot allocate large, contiguous system-space buffers to send in IRPs for such a driver.


I have debugged an existing buffered I/O IOCTL on Win11 and confirmed that the memory allocated for the DataBuffer is indeed physically contiguous.


Wojciech Chojnowski
System Software Architect

> I have debugged an existing buffered I/O IOCTL on Win11 and confirmed that the memory allocated for the DataBuffer is indeed physically contiguous.

Hold on. I don’t believe that. Either you were misled, or you are unclear about the difference between physically contiguous and virtually contiguous. A single allocation is ALWAYS virtually contiguous, by definition, but physical space gets fragmented very quickly. Unless the system was within a few minutes of booting, no one even tries to make those buffers physically contiguous. As you say, the typical buffered ioctl buffer is only a page or two, so perhaps you might accidentally see a contiguous pair, but NO ONE makes any kind of promise about that.

Also, this site strongly discourages “necroposting” – resurrecting long dead threads.

Let me enlighten the necroposter, before I lock the thread:

The memory that is used for METHOD_BUFFERED IOCTLs is not now, and has never been, guaranteed to be physically contiguous. It might be as a matter of coincidence. It will always be virtually contiguous.

In addition, let me add that whether the memory is physically contiguous or not is not helpful, useful, or even relevant. You can’t get its physical address and DMA to it. You’ll need to build an MDL and push it through the Windows DMA API no matter what.
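
To make that last point concrete, here is a small hedged sketch of wrapping a METHOD_BUFFERED system buffer in an MDL so it can go through the DMA API; the function name and parameters are made up for illustration:

NTSTATUS PrepareSystemBufferForDma(PVOID SystemBuffer, ULONG Length)
{
    PMDL mdl;

    /* SystemBuffer is Irp->AssociatedIrp.SystemBuffer: nonpaged pool,
       virtually contiguous, physical layout unknown and irrelevant. */
    mdl = IoAllocateMdl(SystemBuffer, Length, FALSE, FALSE, NULL);
    if (mdl == NULL) {
        return STATUS_INSUFFICIENT_RESOURCES;
    }

    /* The memory is nonpaged, so there is nothing to probe and lock;
       this just records the page frame numbers backing the buffer. */
    MmBuildMdlForNonPagedPool(mdl);

    /* ... hand 'mdl' to GetScatterGatherList / MapTransfer as usual ... */

    IoFreeMdl(mdl);
    return STATUS_SUCCESS;
}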

So… definitively… no. Not guaranteed to be physically contiguous. Just as Mr. Roberts correctly asserts.

Peter

This necroposted thread is locked.