Does WDF have any concept like minifilter communication ports?

What should one do for user-mode communication in the WDF model?

  1. The OSR inverted call article.
  2. There seems to be a solution introduced in toaster sample named “sideband filter”
  3. The sideband filter source documentation says there is an alternate method introduced in kbfilter.
  4. Some built-in communication mechanism I am not aware of?

My need is the kernel querying user mode and receiving the answer synchronously.

Inverted call is the way to do that. All the other mechanisms you mentioned are just wrappers around inverted call.

It’s not complicated. The user-mode app just sends down a set of ioctls that get queued in the driver. Later, when the driver needs to communicate, it just completes one of the waiting requests.
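The mechanism Tim describes — ioctls parked in the driver, completed later when there is news — might look roughly like the following in KMDF. This is a hedged sketch, not the OSR sample's actual code: `IOCTL_INVERT_NOTIFY`, `NOTIFICATION`, and `g_NotificationQueue` are illustrative names, and the queue is assumed to have been created at device init as a manual-dispatch queue (`WdfIoQueueDispatchManual`).

```c
#include <ntddk.h>
#include <wdf.h>

/* Illustrative IOCTL code and payload; not from any sample. */
#define IOCTL_INVERT_NOTIFY \
    CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_BUFFERED, FILE_ANY_ACCESS)

typedef struct _NOTIFICATION { ULONG Event; } NOTIFICATION;

/* Manual-dispatch queue, created at device init with
 * WDF_IO_QUEUE_CONFIG_INIT(&cfg, WdfIoQueueDispatchManual). */
WDFQUEUE g_NotificationQueue;

VOID EvtIoDeviceControl(WDFQUEUE Queue, WDFREQUEST Request,
                        size_t OutputBufferLength, size_t InputBufferLength,
                        ULONG IoControlCode)
{
    UNREFERENCED_PARAMETER(Queue);
    UNREFERENCED_PARAMETER(OutputBufferLength);
    UNREFERENCED_PARAMETER(InputBufferLength);

    if (IoControlCode == IOCTL_INVERT_NOTIFY) {
        /* Park the request in the manual queue; it stays pending there. */
        NTSTATUS status = WdfRequestForwardToIoQueue(Request, g_NotificationQueue);
        if (!NT_SUCCESS(status)) {
            WdfRequestComplete(Request, status);
        }
        return;
    }
    WdfRequestComplete(Request, STATUS_INVALID_DEVICE_REQUEST);
}

/* Called from elsewhere in the driver when it has something to say. */
VOID NotifyUserMode(ULONG Event)
{
    WDFREQUEST request;
    PVOID buffer;
    NTSTATUS status = WdfIoQueueRetrieveNextRequest(g_NotificationQueue, &request);

    if (!NT_SUCCESS(status)) {
        /* Nothing parked: the app fell behind. Drop or remember -- your call. */
        return;
    }
    status = WdfRequestRetrieveOutputBuffer(request, sizeof(NOTIFICATION),
                                            &buffer, NULL);
    if (NT_SUCCESS(status)) {
        ((NOTIFICATION *)buffer)->Event = Event;
        WdfRequestCompleteWithInformation(request, STATUS_SUCCESS,
                                          sizeof(NOTIFICATION));
    } else {
        WdfRequestComplete(request, status);
    }
}
```

Forwarding to a manual queue is the usual KMDF way to keep a request pending without worrying about cancellation yourself; the framework handles cancel and cleanup for requests sitting in a WDFQUEUE.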

Thanks for the explanation. A side question: since these ioctls are probably developer-defined and have no other use, they probably won’t be processed by any layered driver or filter between user mode and the destination driver. Why queue multiple pended IRPs instead of just having one pended? Is there such a big difference in performance? Or am I completely wrong, and it’s not even a performance-related design?

It depends on your design. If things don’t happen quickly, then one at a time is fine. But what happens if the driver has something to say, and the application hasn’t submitted a new empty request? You either have to discard the notification, or remember it for later. “Remembering it for later” is way more complicated than just letting the app have several requests waiting.

Thanks Tim.

I found this on OSR.
Posting it for those who hit this question.
https://www.osr.com/nt-insider/2013-issue1/inverted-call-model-kmdf/

I am implementing a small user/kernel communication library based on the OSR “inverted call model kmdf” sample, combined with the Microsoft toaster sideband filter to overcome the multiple-driver-instance issue.
The main question I got is:
I have to make sure I always have a few pended IO requests. Actually, I have to make sure I never run out of those pended IO requests in the kernel.
How am I supposed to keep count of these requests and refill the IoQueue with more pending IOs?

I found the WdfIoQueueGetState function, which gives me the count of requests present in the queue. I could complete one of those requests with a special command that tells user mode to send more IO requests when the queue count is running low.
But it would be nicer if I could get the pended request count in user mode in the OSR “inverted call model kmdf” sample, or just keep count of them in user mode (assuming IO errors don’t make my counting wrong).
What do you suggest?

I suggest you have your user-mode app send up N Requests, and as soon as one is completed (and before you process it) send another up to the driver. Then you’ll always have N Requests in progress at the driver.

Remember, the Requests you pend can have a structure in the output buffer… so you could always return a count of currently pending Requests if you wanted to. But, again, if you always just send one as soon as you’ve gotten a completion, you shouldn’t need that.
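The resend pattern described above — N requests outstanding, each re-issued the moment it completes and before it is processed — might be sketched in user mode like this. The IOCTL code, `NOTIFICATION` layout, and `PENDING_COUNT` are illustrative assumptions that would have to match the driver:

```c
#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

/* Must match the driver's definitions; these names are illustrative. */
#define IOCTL_INVERT_NOTIFY \
    CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_BUFFERED, FILE_ANY_ACCESS)
#define PENDING_COUNT 4

typedef struct { ULONG Event; } NOTIFICATION;

typedef struct {
    OVERLAPPED   Ov;    /* one event handle per outstanding request */
    NOTIFICATION Out;
} SLOT;

static void SendSlot(HANDLE dev, SLOT *s)
{
    DWORD bytes;
    /* On an overlapped handle, FALSE + ERROR_IO_PENDING means "queued". */
    if (!DeviceIoControl(dev, IOCTL_INVERT_NOTIFY, NULL, 0,
                         &s->Out, sizeof(s->Out), &bytes, &s->Ov) &&
        GetLastError() != ERROR_IO_PENDING) {
        fprintf(stderr, "send failed: %lu\n", GetLastError());
    }
}

void NotificationLoop(HANDLE dev)   /* opened with FILE_FLAG_OVERLAPPED */
{
    SLOT   slots[PENDING_COUNT];
    HANDLE events[PENDING_COUNT];
    DWORD  i, bytes;

    for (i = 0; i < PENDING_COUNT; i++) {
        ZeroMemory(&slots[i], sizeof(slots[i]));
        slots[i].Ov.hEvent = events[i] = CreateEvent(NULL, FALSE, FALSE, NULL);
        SendSlot(dev, &slots[i]);       /* throw all the boxes over the wall */
    }

    for (;;) {
        i = WaitForMultipleObjects(PENDING_COUNT, events, FALSE, INFINITE)
            - WAIT_OBJECT_0;
        if (!GetOverlappedResult(dev, &slots[i].Ov, &bytes, FALSE)) {
            break;                      /* device going away, or a real error */
        }
        NOTIFICATION n = slots[i].Out;  /* copy the payload out... */
        SendSlot(dev, &slots[i]);       /* ...and resend BEFORE processing it */
        printf("event %lu\n", n.Event); /* now process at leisure */
    }
}
```

Copying the payload and resending before processing is the point of the pattern: the slot goes back to the driver immediately, so the driver never waits on the application's processing time.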

Peter

How am I supposed to keep count of these requests and refill the IoQueue with more pending IOs?

I’m still not sure you understand the concept here. The driver doesn’t do any counting. It’s up to the application. The application has to know how long it takes for it to turn the request around, and how often new interrupts will arrive. If the app can turn a request around quickly enough, then it might only need two pending requests. If interrupts can come in bursts, then you might need more.

There’s no point in counting. Your app will decide on the number of requests it needs to avoid going dry. At any given point in time, any given request is either waiting in the driver, or temporarily in the application being processed, and about to be resent to the driver.

Here’s a colorful metaphor. Picture the driver and the application as two people on either side of a wall. At startup time, the application throws five empty boxes over the wall. The driver stacks them up neatly and waits. Eventually, the driver has something to say. It fills up a box, and throws it back over the wall. The application empties the box, and throws it back to the driver. If the driver has something else to say before that first box comes back, it fills up a second box and throws it over. The key is to make sure the application returns a box before the driver runs out. Again, if the process of emptying a box and throwing it back is quick, maybe you only need 2.

And no matter how good things are, you ALWAYS need to handle the possibility that an interrupt arrives when there are no pending requests. You have to decide how to handle that. Do you just drop the interrupt? Do you remember an error code and notify the application next time it sends a request? Do you shut things down? Only YOU can decide what works for your design.

Remember that you need to be able to return all the queued up requests when it’s time to shut down.
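A minimal sketch of that shutdown drain in KMDF, assuming the pended requests live in a manual-dispatch WDFQUEUE (the queue name is illustrative). Two equivalent approaches are shown; you would use one or the other:

```c
#include <ntddk.h>
#include <wdf.h>

VOID DrainNotificationQueue(WDFQUEUE NotificationQueue)
{
    WDFREQUEST request;

    /* Option A: have KMDF complete everything still queued as canceled.
     * The queue also stops accepting new requests afterward. */
    WdfIoQueuePurgeSynchronously(NotificationQueue);

    /* Option B: pop and complete each parked request by hand. */
    while (NT_SUCCESS(WdfIoQueueRetrieveNextRequest(NotificationQueue,
                                                    &request))) {
        WdfRequestComplete(request, STATUS_CANCELLED);
    }
}
```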

I’m still not sure you understand the concept here.
I NEARLY understand the whole concept, I think.
The only remaining question is:
Imagine the colorful world: is there a possibility that some of the boxes (especially on the way back from kernel to user) get stolen by some bird, in a way that the user doesn’t even get wind of that box being lost?
What I actually want to know is: is there any error condition on the way back from kernel to user where the request is lost, and, since the user doesn’t receive a response, won’t provide another IO?
In that case, what Peter said would change from “you could always return” to “you MUST always return”. Right?
Remember, the Requests you pend can have a structure in the output buffer… so you could always return a count of currently pending Requests if you wanted to. But, again, if you always just send one as soon as you’ve gotten a completion, you shouldn’t need that.

BTW, based on the sideband filter sample, I created a control device separate from the filter driver’s device for user communication. I think this way there won’t be any completing or filtering-out of my notification IOs. Would it be possible for the communication IOs to be filtered out if I sent the notification IOs (I mean the IOs to be pended, not those used in the sample to simulate a kernel event) to the driver object, as is done in the OSR inverted sample?

There is no case where (given properly written code and the continued existence of global concepts like gravity) you can call ReadFile, get back status pending, and not get notification of the I/O Request completing.

It HAS to be this way, right? You can’t call ReadFile synchronously and have it sometimes just never return. It’s exactly the same… it’s just a matter of where the wait for the I/O to complete takes place. For synchronous notification, your thread waits in the I/O Manager before returning from the call to ReadFile. For asynchronous notification, the I/O Manager returns without waiting, and you wait (or otherwise get the notification) when you decide you want to.

But in no case can I/Os just “go missing.”

Peter


BTW, based on the sideband filter sample, I created a control device separate from the filter driver’s device for user communication. I think this way there won’t be any completing or filtering-out of my notification IOs. Would it be possible for the communication IOs to be filtered out if I sent the notification IOs (I mean the IOs to be pended, not those used in the sample to simulate a kernel event) to the driver object, as is done in the OSR inverted sample?

Who do you think is going to be doing filtering? If you are an upper filter, then you get first shot at the IRPs, before they flow into the real driver. It’s possible someone could install a filter above you, but the rules for filters are that you pass down anything you don’t understand. If you are a lower filter, then you need to think of alternatives, because the primary FDO does not have the same “pass down” rule.

The point is, you shouldn’t be guessing here. You KNOW what requests you can get and what requests you can’t, based on where you are in the device stack. That’s all knowable. There are no rogue players just looking to interfere with I/O.

Thanks for the explanations and your time.