We currently talk to some external networked hardware by giving the user a custom UI that accesses its “file system” through an internal API. What we’d like to achieve is to expose the hardware as a mounted drive, so that it behaves like a regular storage device accessible from Explorer and the command line, essentially making its origins transparent. Namespace extensions would get us the Explorer functionality, but they don’t cover the command-prompt requirement or the overall transparency we’re after, so I believe we need to go down the device driver route.
I’ve been looking at minifilters, device namespaces, and device drivers, and I’m not 100% sure which route to pursue next, so I thought you guys might be able to help.
This is how I perceive it working:
- In some way register the device with the mount manager so that a drive letter maps to our device (see the first sketch after this list for the drive-letter part).
- Intercept any calls to the device.
- Use kernel/user communication to hand the request to the existing (user-mode) API, which transfers the requested file data from the hardware into shared user/kernel memory and notifies the kernel driver on completion; the driver then passes the data back to the caller (see the kernel-side sketch further down).
- Do not let the request travel any further down the storage stack, as it makes no sense to anyone else.
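To make the drive-letter part concrete, here’s roughly what I was picturing, purely as a sketch. Doing it properly through the mount manager would mean our volume device answering the IOCTL_MOUNTDEV_* queries; the quick-and-dirty version below just maps a letter onto the device object with a DOS device link, and the `\Device\HwFsVolume` name is something I’ve made up for illustration:

```c
/* Rough user-mode sketch: map a drive letter straight onto our (hypothetical)
 * volume device object. A real product would want the mount manager involved
 * so the letter is assigned and persisted properly. */
#include <windows.h>
#include <stdio.h>

int wmain(void)
{
    /* DDD_RAW_TARGET_PATH: treat the target as a raw NT device path
     * rather than expanding it like a normal DOS path. */
    if (!DefineDosDeviceW(DDD_RAW_TARGET_PATH,
                          L"H:",
                          L"\\Device\\HwFsVolume"))
    {
        wprintf(L"DefineDosDevice failed: %lu\n", GetLastError());
        return 1;
    }

    wprintf(L"H: now maps to \\Device\\HwFsVolume\n");
    return 0;
}
```

That hopefully shows the intent, even if the final mechanism ends up being the mount manager rather than a raw DOS device link.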
So, basically, we are intercepting device operations, routing them back out to a user-mode process where the request is acted upon and the results are stored in shared memory. The kernel-mode side is then notified, and it does its thing to transfer the data back to the caller as required.
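For the kernel/user hand-off I was picturing the filter manager’s communication-port API, along these lines. This is only a sketch: the port name, the HWFS_REQUEST/HWFS_REPLY layout, and the HwFs* names are all invented for illustration, and I’ve cribbed the buffer/size conventions from my memory of the WDK scanner sample, so they’d need checking:

```c
/* Kernel (mini-filter) side of the hand-off, using a filter manager
 * communication port. HWFS_* names are invented for illustration. */
#include <fltKernel.h>

#define HWFS_PORT_NAME L"\\HwFsPort"

typedef struct _HWFS_REQUEST {
    ULONG     Operation;        /* read / write / enumerate, etc. */
    ULONGLONG Offset;
    ULONG     Length;
    WCHAR     Path[260];
} HWFS_REQUEST, *PHWFS_REQUEST;

typedef struct _HWFS_REPLY {
    NTSTATUS  Status;           /* outcome reported by the user-mode service */
    ULONG     BytesTransferred;
} HWFS_REPLY, *PHWFS_REPLY;

PFLT_FILTER gFilter;            /* set by FltRegisterFilter (not shown) */
PFLT_PORT   gServerPort;
PFLT_PORT   gClientPort;        /* the connected user-mode service */

NTSTATUS FLTAPI HwFsConnect(PFLT_PORT ClientPort, PVOID ServerCookie,
                            PVOID Context, ULONG ContextSize,
                            PVOID *PortCookie)
{
    UNREFERENCED_PARAMETER(ServerCookie);
    UNREFERENCED_PARAMETER(Context);
    UNREFERENCED_PARAMETER(ContextSize);

    *PortCookie = NULL;
    gClientPort = ClientPort;   /* remember who to send requests to */
    return STATUS_SUCCESS;
}

VOID FLTAPI HwFsDisconnect(PVOID PortCookie)
{
    UNREFERENCED_PARAMETER(PortCookie);
    FltCloseClientPort(gFilter, &gClientPort);
}

NTSTATUS HwFsCreatePort(VOID)
{
    PSECURITY_DESCRIPTOR sd;
    OBJECT_ATTRIBUTES oa;
    UNICODE_STRING name;
    NTSTATUS status;

    status = FltBuildDefaultSecurityDescriptor(&sd, FLT_PORT_ALL_ACCESS);
    if (!NT_SUCCESS(status)) {
        return status;
    }

    RtlInitUnicodeString(&name, HWFS_PORT_NAME);
    InitializeObjectAttributes(&oa, &name,
                               OBJ_KERNEL_HANDLE | OBJ_CASE_INSENSITIVE,
                               NULL, sd);

    status = FltCreateCommunicationPort(gFilter, &gServerPort, &oa, NULL,
                                        HwFsConnect, HwFsDisconnect,
                                        NULL, 1 /* one connection */);
    FltFreeSecurityDescriptor(sd);
    return status;
}

/* Called from a pre-operation callback: ship the request up to the
 * user-mode service and block until it replies. */
NTSTATUS HwFsForwardToUserMode(PHWFS_REQUEST Request, PHWFS_REPLY Reply)
{
    ULONG replyLength = sizeof(*Reply);

    return FltSendMessage(gFilter, &gClientPort,
                          Request, sizeof(*Request),
                          Reply, &replyLength,
                          NULL /* no timeout - wait for the service */);
}
```

The idea is that the pre-operation callback builds an HWFS_REQUEST, calls HwFsForwardToUserMode, fills the caller’s buffer from the reply, and returns FLT_PREOP_COMPLETE so nothing below us ever sees the I/O.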
Looking at minifilter drivers, I believe we can intercept file access to our mounted device, but can we redirect the operation via user-mode code? Do we have to write a full device driver to get this sort of functionality? It sounds similar to a file-system redirector, but perhaps it’s also in minifilter territory, or perhaps it’s just not something you can do without some major device-driver work.
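To show what I mean by redirecting via user-mode code, the service side would sit in a loop pulling those requests off the port, satisfying them through our existing hardware API, and replying. Again just a sketch under the same invented structures; the call into our real API is only indicated by a comment:

```c
/* User-mode service sketch: receive requests from the mini-filter and reply.
 * Structures must match the kernel sketch above; HWFS_* names are invented. */
#include <windows.h>
#include <fltUser.h>
#include <stdio.h>

#pragma comment(lib, "FltLib.lib")

#define HWFS_PORT_NAME L"\\HwFsPort"

typedef struct _HWFS_REQUEST {
    ULONG     Operation;
    ULONGLONG Offset;
    ULONG     Length;
    WCHAR     Path[260];
} HWFS_REQUEST;

typedef struct _HWFS_REPLY {
    NTSTATUS  Status;
    ULONG     BytesTransferred;
} HWFS_REPLY;

typedef struct _HWFS_GET_MESSAGE {
    FILTER_MESSAGE_HEADER Header;   /* filled in by the filter manager */
    HWFS_REQUEST          Request;
} HWFS_GET_MESSAGE;

typedef struct _HWFS_REPLY_MESSAGE {
    FILTER_REPLY_HEADER   Header;   /* must echo MessageId back */
    HWFS_REPLY            Reply;
} HWFS_REPLY_MESSAGE;

int wmain(void)
{
    HANDLE port;
    HRESULT hr = FilterConnectCommunicationPort(HWFS_PORT_NAME, 0, NULL, 0,
                                                NULL, &port);
    if (FAILED(hr)) {
        wprintf(L"FilterConnectCommunicationPort failed: 0x%08x\n",
                (unsigned)hr);
        return 1;
    }

    for (;;) {
        HWFS_GET_MESSAGE msg;

        /* Block until the driver sends us a request via FltSendMessage. */
        hr = FilterGetMessage(port, &msg.Header, sizeof(msg), NULL);
        if (FAILED(hr)) {
            break;
        }

        HWFS_REPLY_MESSAGE reply = { 0 };
        reply.Header.MessageId = msg.Header.MessageId;

        /* This is where we'd call our existing user-mode hardware API to
         * satisfy the request and copy the data into the shared buffer;
         * here we just pretend it succeeded. */
        reply.Reply.Status = 0;   /* STATUS_SUCCESS */
        reply.Reply.BytesTransferred = msg.Request.Length;

        FilterReplyMessage(port, &reply.Header, sizeof(reply));
    }

    CloseHandle(port);
    return 0;
}
```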
Thanks for any help/advice/pointers you can give me on this.
Cheers,
Simon.