Shared memory would be good. You could then use a shared event + mutex, or
a semaphore, to synchronize access to the shared memory segment. You
should, of course, batch your transfers. That is, if you need to send many
messages to user-mode, your driver should combine them in the shared
memory segment, so that the reader and writer of that buffer only have to
touch the synchronization primitives once to transfer a large number of
messages.
Shared memory will probably get you close to the theoretical maximum,
especially if you are smart with buffer ownership / management. Your goal
should be to reduce the number of buffer copies and lock acquisitions /
releases. It’s also easy to set up, reliable, and documented. You
will need to be careful to deal with the fact that the section view is
mapped into a particular process, though. If your driver is running in the
context of a different process (and it will be, in many cases!), you can’t
touch the section view.
Which is why you should still consider using pended IRPs. Your application
can keep a pool of I/O control requests pended to your driver. The best
number of IRPs could be found through experimentation, but a good
off-the-cuff number is 8 buffers, each containing (say) 64K. Use direct
I/O, not buffered, for your requests. This will get you very close to the
shared memory performance.
Seriously, don’t underestimate the IRP path. And with small buffer sizes,
even buffered I/O can be quite fast. As always, measure measure measure!
Use a profiler before you assume that solution X “must be better” than
solution Y.
– arlie
From: xxxxx@lists.osr.com
[mailto:xxxxx@lists.osr.com] On Behalf Of Petr Kurtin
Sent: Wednesday, December 14, 2005 6:59 PM
To: Windows System Software Devs Interest List
Subject: Re:[ntdev] How to create a named pipe in kernel?
thanks
I’d need to send a lot of small structures to user-mode very often (to
notify the GUI about the current state of the network: which connection is
sending, how much, etc.) - so I thought pending IRPs wouldn’t be the best
fit. As an alternative, I could use shared memory between kernel and user
mode (although there might be some synchronization problems).
-pk
“Doron Holan” wrote in message news:xxxxx@ntdev …
Communication over a pipe uses irps as well, you just don’t see them.
Actually, I would assume that it would be worse, since now both your
writes to the pipe and the app’s reads from the pipe will result in
IRPs being created and sent. Using named pipes in the kernel is not
documented; sticking with the inverted call model is documented (and in
this case uses one fewer IRP per transaction).
d
-----Original Message-----
From: xxxxx@lists.osr.com
[mailto:xxxxx@lists.osr.com] On Behalf Of Petr Kurtin
Sent: Wednesday, December 14, 2005 2:05 PM
To: Windows System Software Devs Interest List
Subject: Re:[ntdev] How to create a named pipe in kernel?
well, what other methods would you suggest? I use pending IRPs for the main
communication between kernel and user mode – but some information will need
to be sent very frequently (e.g. which new network connections have been
created, how much data has been transferred - small structures, not worth
wasting an IRP on each one)… what do you think?
thanks, Petr