Maxim S. Shatskih wrote:
>Why is that bad practice?
> - Synchronization of accesses to this memory by the driver and by the
> app. Named events cause issues on Remote Desktop/TS, and passing the
> event handle in IOCTL is not simpler than using the “lots of IRPs”
> approach.
As I said, that pretty much depends on the problem you’re tackling.
I don’t need to explicitly synchronize my data with the driver, as the
synchronization is *intrinsic* to the data. No events needed, thank you.
This is true for quite a few scenarios I can picture that involve
isochronous streaming.
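To illustrate what “intrinsic” synchronization can mean here (a minimal user-mode sketch, not the poster’s actual code): a single-producer/single-consumer ring in the shared buffer, where the producer only ever advances `head` and the consumer only ever advances `tail`. The indices themselves carry the synchronization, so no named event is needed. The names and layout are hypothetical:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical SPSC ring living in the shared buffer: the producer side
 * (e.g. the driver) only advances `head`, the consumer side (the app)
 * only advances `tail`. The monotonically advancing indices make the
 * data self-synchronizing -- no event object required. */
#define RING_SLOTS 256  /* power of two, so wrap-around is a cheap mask */

struct ring {
    _Atomic size_t head;           /* next slot the producer will fill */
    _Atomic size_t tail;           /* next slot the consumer will read */
    int            data[RING_SLOTS];
};

/* Producer side: returns false when the ring is full. */
static bool rb_write(struct ring *r, int value)
{
    size_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
    size_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (head - tail == RING_SLOTS)
        return false;                          /* full */
    r->data[head & (RING_SLOTS - 1)] = value;
    /* Publish the slot only after the data is in place. */
    atomic_store_explicit(&r->head, head + 1, memory_order_release);
    return true;
}

/* Consumer side: returns false when no data is available. */
static bool rb_read(struct ring *r, int *out)
{
    size_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    size_t head = atomic_load_explicit(&r->head, memory_order_acquire);
    if (head == tail)
        return false;                          /* empty */
    *out = r->data[tail & (RING_SLOTS - 1)];
    atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
    return true;
}
```

The consumer can simply poll (or be woken by whatever mechanism the app already has); no handle ever has to cross the user/kernel boundary for synchronization.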
> - User memory access from the driver. Usually, this will require
> building an MDL on the buffer and holding it locked for all the time the
> driver is loaded. Again - not easier than “lots of IRPs”, and provides
> no advantages.
I actually need more than one MDL, because I have multiple URBs that each
describe a certain section of the buffer. But IMHO, creating those MDLs in
the driver is no more difficult than managing the complexity involved with
sending numerous DeviceIoControls to the driver and handling the data thus
returned.
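Carving per-URB MDLs out of one already-locked buffer is a few lines of WDM. A hedged sketch (names are mine, error cleanup omitted), assuming the big buffer arrived in an IOCTL and was probed and locked into a master MDL:

```c
/* Sketch (WDM, hypothetical names): build one partial MDL per URB,
 * each describing a window of the user buffer already locked down
 * behind MasterMdl. */
NTSTATUS BuildSectionMdls(PMDL MasterMdl, ULONG SectionSize,
                          PMDL SectionMdls[], ULONG Count)
{
    PUCHAR base = MmGetMdlVirtualAddress(MasterMdl);

    for (ULONG i = 0; i < Count; i++) {
        PVOID va = base + (SIZE_T)i * SectionSize;

        SectionMdls[i] = IoAllocateMdl(va, SectionSize, FALSE, FALSE, NULL);
        if (SectionMdls[i] == NULL)
            return STATUS_INSUFFICIENT_RESOURCES;

        /* Describe the i-th window of the locked master buffer;
         * no extra probe/lock is needed for a partial MDL. */
        IoBuildPartialMdl(MasterMdl, SectionMdls[i], va, SectionSize);
    }
    return STATUS_SUCCESS;
}
```

Each `SectionMdls[i]` can then be attached to its URB; the pages stay locked exactly once, via the master MDL.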
And of course, the buffer does not have to be locked for the entire life of
the driver. Instead, you hand the driver a pointer to the large buffer in a
proprietary IOCTL, and the driver then blithely holds on to the
corresponding IRP until the application cancels the process by issuing
another IOCTL (or until the app quits). Quite simple, really.
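The pend-the-IOCTL pattern boils down to this (a hedged WDM sketch, hypothetical names, and with the spinlock protecting the global and the cancel-routine race handling omitted for brevity):

```c
/* Sketch (WDM, hypothetical names): park the "register buffer" IRP so
 * the buffer stays locked until the app unregisters or exits. */
static PIRP g_PendingIrp;                 /* the parked registration IRP */

static VOID OnCancel(PDEVICE_OBJECT Dev, PIRP Irp)
{
    UNREFERENCED_PARAMETER(Dev);
    IoReleaseCancelSpinLock(Irp->CancelIrql); /* system acquired it for us */
    g_PendingIrp = NULL;
    Irp->IoStatus.Status = STATUS_CANCELLED;
    IoCompleteRequest(Irp, IO_NO_INCREMENT);  /* unlocks the buffer pages */
}

/* IOCTL "register buffer" handler: remember the IRP instead of
 * completing it. */
NTSTATUS ParkRegistrationIrp(PIRP Irp)
{
    IoMarkIrpPending(Irp);
    IoSetCancelRoutine(Irp, OnCancel);
    g_PendingIrp = Irp;
    return STATUS_PENDING;
}

/* IOCTL "unregister buffer" handler, also called from IRP_MJ_CLEANUP
 * so an exiting app releases the buffer automatically. */
VOID ReleaseRegistrationIrp(VOID)
{
    PIRP irp = g_PendingIrp;
    if (irp != NULL)
        IoCancelIrp(irp);   /* drives OnCancel, which completes the IRP */
}
```

Completing the parked IRP is what releases the locked pages, so app exit (which cancels outstanding I/O) cleans everything up for free.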
> driver is loaded. Again - not easier than “lots of IRPs”, and provides
> no advantages.
There is one big big big advantage to using the shared buffer approach:
latency. I have almost *zero* latency between the data arriving in the
driver and the application having access to it. Try to get that with
multiple IRPs floating around, which will always mean that you have (at
least one) context switch before your app sees the data. Not so with a
shared buffer.
Also, I save myself the overhead of initializing a zillion IRPs (by means of
DeviceIoControl), sending them to the driver, and having the system wait on
each of them to complete.
Where performance and latency are important, I’ll take the shared-buffer
approach over the “many IRPs” variety any time. And why, exactly, would that
be easier, considering you have to implement cancellation logic and all?
> In fact, the shared memory approach, if implemented properly, will
> have more limitations and will require more coding, while having no
> advantages over the “lots of pending IRPs” model.
I doubt that. What limitations? What extra coding? And I just told you the
advantages.
Burk.
Burkhard Daniel
Software Technologies Group, Inc.
xxxxx@stg.com * http://www.stg.com
fon: +49-179-5319489 fax: +49-179-335319489