(Sorry if this ends up being a double post. I previously posted a response, and then tried to edit it, but that made my post disappear with a warning that it would be “queued for moderation” (?!). I can’t figure out what happened to my original post so I’m reposting just in case.)
@Tim_Roberts said:
I will get spanked for saying this, but EvtIoDeviceControl only runs in a system thread context if you have enabled SynchronizationScope on your queue or if you set ExecutionLevel. If you set SynchronizationScope to WdfSynchronizationScopeNone, then your handler will run in the caller context and you can fire-and-forget. It would be unfriendly for filter drivers to alter the context so much.
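For reference, a minimal sketch of what that queue setup might look like, with the synchronization scope explicitly set to none so the framework does not marshal the callback onto a system thread (the handler name is illustrative, and error handling is trimmed):

```c
// Sketch: default queue with no framework synchronization, so that
// EvtIoDeviceControl is invoked in the caller's context, per the
// behaviour described above. Names are illustrative.
WDF_IO_QUEUE_CONFIG queueConfig;
WDF_OBJECT_ATTRIBUTES queueAttributes;
WDFQUEUE queue;
NTSTATUS status;

WDF_IO_QUEUE_CONFIG_INIT_DEFAULT_QUEUE(&queueConfig, WdfIoQueueDispatchParallel);
queueConfig.EvtIoDeviceControl = FilterEvtIoDeviceControl; // hypothetical handler

WDF_OBJECT_ATTRIBUTES_INIT(&queueAttributes);
queueAttributes.SynchronizationScope = WdfSynchronizationScopeNone;

status = WdfIoQueueCreate(device, &queueConfig, &queueAttributes, &queue);
```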
Thanks, I expected as much. Seems like this would be relying on undocumented behaviour, but I guess one could easily invoke Hyrum’s law given the number of KMDF filter drivers that might already be relying (perhaps unknowingly) on being able to transparently forward any IOCTL, including METHOD_NEITHER, from within EvtIoDeviceControl.
Filtering the KS ioctls is complicated. The rules are not documented, and because of that they actually DO change from time to time. Making it even more complicated, there is a filter driver above you called “ksthunk.sys” that actually does map some of the user addresses into kernel mode, so your IRPs might already be converted.
For your own processing, the input buffer is always in UserBuffer. You can use WdfRequestRetrieveUnsafeUserInputBuffer to fetch that. If that is a kernel address (meaning the pointer is < 0), then someone has done the mapping work for you. Otherwise, you need to call WdfRequestProbeAndLockUserBufferForRead. I save the resulting pointer in a request context to fetch later.
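A sketch of that input-buffer dance, assuming it runs in EvtIoInCallerContext (where the unsafe-user-buffer APIs are valid) and assuming a hypothetical request context with an InputBuffer field:

```c
// Sketch: fetch the UserBuffer-based input buffer, and probe/lock it
// only if an upper filter (e.g. ksthunk.sys) has not already mapped it.
PVOID inBuf;
size_t inLen;
NTSTATUS status;

status = WdfRequestRetrieveUnsafeUserInputBuffer(Request, sizeof(KSPROPERTY),
                                                 &inBuf, &inLen);
if (NT_SUCCESS(status)) {
    if ((LONG_PTR)inBuf < 0) {
        // Kernel address: someone already did the mapping; use it as-is.
        reqContext->InputBuffer = inBuf;       // hypothetical context field
    } else {
        WDFMEMORY inMemory;
        status = WdfRequestProbeAndLockUserBufferForRead(Request, inBuf,
                                                         inLen, &inMemory);
        if (NT_SUCCESS(status)) {
            // Save the system-space pointer for later stages to fetch.
            reqContext->InputBuffer = WdfMemoryGetBuffer(inMemory, NULL);
        }
    }
}
```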
If the output buffer has already been converted, it will be in the SystemBuffer field (WdfRequestWdfGetIrp(Request)->AssociatedIrp.SystemBuffer). If that is non-zero, you can use it. Otherwise, you need to call WdfRequestRetrieveUnsafeUserOutputBuffer and call WdfRequestProbeAndLockUserBufferForWrite on it.
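The output-buffer counterpart might look like this; same caveats as the input side, and again the request-context field is a hypothetical name:

```c
// Sketch: use SystemBuffer if the IRP was already converted, otherwise
// probe and lock the user-mode output buffer for write access.
PIRP irp = WdfRequestWdfGetIrp(Request);
PVOID outBuf = irp->AssociatedIrp.SystemBuffer;
NTSTATUS status;

if (outBuf != NULL) {
    // Already converted for us; SystemBuffer is usable directly.
    reqContext->OutputBuffer = outBuf;         // hypothetical context field
} else {
    size_t outLen;
    WDFMEMORY outMemory;

    status = WdfRequestRetrieveUnsafeUserOutputBuffer(Request, 0,
                                                      &outBuf, &outLen);
    if (NT_SUCCESS(status)) {
        status = WdfRequestProbeAndLockUserBufferForWrite(Request, outBuf,
                                                          outLen, &outMemory);
        if (NT_SUCCESS(status)) {
            reqContext->OutputBuffer = WdfMemoryGetBuffer(outMemory, NULL);
        }
    }
}
```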
The input buffer contains the “property descriptor”, usually a KSPROPERTY structure or a KSP_PIN structure. The output buffer contains the “property value type”.
Thanks, I was going to tackle that part next. I appreciate the warning about getting both kinds of buffers - I definitely would not have expected that!
@Doron_Holan said:
Either WDF_REQUEST_SEND_OPTION_SEND_AND_FORGET or WdfRequestFormatRequestUsingCurrentType from EvtIoInCallerContext will send the request down the stack in the same context (the user-mode process).
Oh, nice! I didn’t know that was a possibility. That would nicely take care of the problem. I can’t find any documentation that says I can forward a request to the local I/O target from within EvtIoInCallerContext, but I can’t find any documentation that says I can’t, either… I guess I was worried that I might confuse the framework by neither completing the request nor enqueuing it before returning from the callback.
By the way, it looks like most KMDF filter drivers would benefit from using this approach, because most filter drivers (presumably) want to transparently and generically forward any IOCTL that they do not understand. Since the filter driver doesn’t know what kind of IOCTL it might be called upon to forward, it has to assume the worst case, i.e. METHOD_NEITHER, and therefore forward any unknown IOCTLs in EvtIoInCallerContext, not EvtIoDeviceControl. If that conclusion is correct, then there are a number of sample KMDF filter drivers out there that need updating, and possibly a number of real drivers too…
You mention buffer swapping; is that a feature you want the filter to implement, or a way to compensate for context problems you are anticipating?
The latter. I thought I would be forced to do it, but from this discussion it looks like that won’t actually be necessary.
If you can describe the higher level functionality and problem you are trying to solve in the filter, you will get better, more concrete guidance.
The business logic I want to implement in my filter driver is extremely trivial. I only want to listen for a very specific kind of IOCTL (IOCTL_KS_PROPERTY), and if I get one, register a completion routine that makes a small change to the contents of the output buffer before it’s returned to the application. That’s it. All other requests should be forwarded transparently, fire-and-forget style.
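That inspect-and-patch path might look roughly like the sketch below. It assumes the output buffer pointer was captured (and probed/locked if needed) earlier in caller context and stashed in a hypothetical request context; FILTER_REQUEST_CONTEXT and AdjustPropertyValue are made-up names standing in for the real logic:

```c
// Sketch: forward IOCTL_KS_PROPERTY with a completion routine, then
// tweak the output buffer on the way back up before completing.
VOID
FilterKsPropertyCompletion(WDFREQUEST Request, WDFIOTARGET Target,
                           PWDF_REQUEST_COMPLETION_PARAMS Params,
                           WDFCONTEXT Context)
{
    UNREFERENCED_PARAMETER(Target);
    PFILTER_REQUEST_CONTEXT reqContext = (PFILTER_REQUEST_CONTEXT)Context;

    if (NT_SUCCESS(Params->IoStatus.Status) &&
        reqContext->OutputBuffer != NULL) {
        // Hypothetical helper: make the small change to the property value.
        AdjustPropertyValue(reqContext->OutputBuffer,
                            Params->IoStatus.Information);
    }
    WdfRequestComplete(Request, Params->IoStatus.Status);
}

// In the dispatch path for IOCTL_KS_PROPERTY:
WdfRequestFormatRequestUsingCurrentType(Request);
WdfRequestSetCompletionRoutine(Request, FilterKsPropertyCompletion, reqContext);
if (!WdfRequestSend(Request, WdfDeviceGetIoTarget(device),
                    WDF_NO_SEND_OPTIONS)) {
    WdfRequestComplete(Request, WdfRequestGetStatus(Request));
}
```

Note that registering a completion routine means the request can no longer be sent fire-and-forget, which is why only this one IOCTL takes the slow path.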
For instance, EvtWdfDeviceWdmIrpPreprocess is also called in the sender’s context and you can accomplish the same inspect + fire & forget from this callback without the WDFREQUEST abstraction. But without understanding what you are trying to do, it is hard to recommend one over the other.
Thanks. I doubt that will be necessary - looks like I can achieve my goals without dealing with WDM directly. I’ll keep that option in mind though in case I’m mistaken.
@Doron_Holan said:
Stopping the queue (manually or for a power managed queue, during a device power state transition) will also change the context of EvtIoDeviceControl, even for a ScopeNone WDFQUEUE.
My understanding is that filter drivers only use non-power-managed queues. So, assuming that I never stop the queue, EvtIoDeviceControl should run in the original thread context, right? Though obviously your proposed approach of using EvtIoInCallerContext still seems cleaner, since it would make that a strong guarantee instead of relying on undocumented behaviour.