>1. Is it possible to write a Virtual Storport Miniport driver in KMDF?
>It’s not entirely clear from the KMDF documentation.
No. You might be able to use a few of the KMDF utility objects, but there is limited value in doing so: Storport request queuing and KMDF queuing are different universes.
- How can I get my miniport driver to be notified when a physical disk is connected or
disconnected? (Is it PnpRegisterPlugPlayNotification for GUID_DEVINTERFACE_DISK? or something else?)
There have been a number of devices that used virtual bus drivers, implemented in KMDF or not, to attach to some unusual physical/logical device, which then exposes some virtual storage adapter devices, each of which gets an instance of a storport miniport attached. These storport miniport instances may enumerate only a single disk, as the multi-instancing has already been handled by the lower virtual bus. The add/remove of physical devices is managed by the virtual bus driver. The virtual storport driver will then often have some private call-based interface down to the virtual bus driver. In designs like this, the storport miniport acts more as an interface translator, converting SRBs from the attached disk PDO into whatever requests the virtual bus driver understands.
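As a rough sketch of that translator role, here is what the miniport's StartIo routine might look like. The VBUS_INTERFACE structure and its SubmitIo routine are hypothetical stand-ins for whatever private contract the virtual bus driver actually exposes; only the storport entry points and SRB fields are real:

```c
#include <storport.h>

// Hypothetical private interface handed up by the lower virtual bus driver.
typedef struct _VBUS_INTERFACE {
    NTSTATUS (*SubmitIo)(PVOID BusContext, UCHAR ScsiOp,
                         ULONGLONG Lba, PVOID Buffer, ULONG Length);
    PVOID BusContext;
} VBUS_INTERFACE, *PVBUS_INTERFACE;

typedef struct _HW_DEVICE_EXTENSION {
    VBUS_INTERFACE Bus;    // acquired from the bus driver during init
} HW_DEVICE_EXTENSION, *PHW_DEVICE_EXTENSION;

BOOLEAN
HwStorStartIo(PVOID DeviceExtension, PSCSI_REQUEST_BLOCK Srb)
{
    PHW_DEVICE_EXTENSION ext = DeviceExtension;

    if (Srb->Function == SRB_FUNCTION_EXECUTE_SCSI &&
        (Srb->Cdb[0] == SCSIOP_READ || Srb->Cdb[0] == SCSIOP_WRITE)) {
        // Decode the 10-byte CDB's big-endian LBA and forward the
        // transfer to the virtual bus over the private interface.
        ULONGLONG lba = ((ULONGLONG)Srb->Cdb[2] << 24) |
                        ((ULONG)Srb->Cdb[3] << 16) |
                        ((ULONG)Srb->Cdb[4] << 8)  |
                        Srb->Cdb[5];
        NTSTATUS status = ext->Bus.SubmitIo(ext->Bus.BusContext,
                                            Srb->Cdb[0], lba,
                                            Srb->DataBuffer,
                                            Srb->DataTransferLength);
        Srb->SrbStatus = NT_SUCCESS(status) ? SRB_STATUS_SUCCESS
                                            : SRB_STATUS_ERROR;
    } else {
        Srb->SrbStatus = SRB_STATUS_INVALID_REQUEST;
    }

    StorPortNotification(RequestComplete, DeviceExtension, Srb);
    return TRUE;
}
```

A real miniport would of course handle far more SRB functions and complete the bus request asynchronously; the point is only that the miniport does translation, not device management.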
A storage virtual bus driver is also sometimes root enumerated (like the iSCSI driver) and talks to other physical devices. The PnP device interface notifications (IoRegisterPlugPlayNotification — there is no PnpRegisterPlugPlayNotification) can be used to detect device changes. One problem with root-enumerated storage drivers is that it’s difficult to support things like crash dump. You also have to use care with power relationships, so that on system shutdown your physical devices don’t get powered down before your storage stack flushes cached data. Devices in a normal PnP hierarchy almost automagically get an appropriate power management hierarchy. I’m reluctant to even bring up root-enumerated storage devices, because in past projects we ultimately had to move everything to a correct PnP device hierarchy to get everything working correctly.
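To answer the earlier question directly, the registration looks roughly like this; the callback name, context, and what you do on arrival/removal are placeholders, but the API, flags, and notification structure are the real ones:

```c
#include <ntddk.h>
#include <initguid.h>
#include <ntddstor.h>   // GUID_DEVINTERFACE_DISK

static PVOID g_NotificationEntry;

NTSTATUS
DiskInterfaceChangeCallback(PVOID NotificationStructure, PVOID Context)
{
    PDEVICE_INTERFACE_CHANGE_NOTIFICATION n = NotificationStructure;
    UNREFERENCED_PARAMETER(Context);

    if (IsEqualGUID(&n->Event, &GUID_DEVICE_INTERFACE_ARRIVAL)) {
        // A disk interface appeared; n->SymbolicLinkName can be opened
        // (e.g. via IoGetDeviceObjectPointer) to reach the disk stack.
    } else if (IsEqualGUID(&n->Event, &GUID_DEVICE_INTERFACE_REMOVAL)) {
        // The disk interface went away; tear down any per-disk state.
    }
    return STATUS_SUCCESS;
}

// Typically called from DriverEntry or AddDevice.
NTSTATUS
RegisterForDiskNotifications(PDRIVER_OBJECT DriverObject)
{
    return IoRegisterPlugPlayNotification(
        EventCategoryDeviceInterfaceChange,
        PNPNOTIFY_DEVICE_INTERFACE_INCLUDE_EXISTING_INTERFACES,
        (PVOID)&GUID_DEVINTERFACE_DISK,
        DriverObject,
        DiskInterfaceChangeCallback,
        NULL,                    // callback context
        &g_NotificationEntry);   // keep for IoUnregisterPlugPlayNotification
}
```

The INCLUDE_EXISTING_INTERFACES flag makes the callback fire once for each disk interface already present, so you don’t have to enumerate existing disks separately.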
- Is it possible to just expose a RAID volume rather than a virtual disk? How do I do that?
I believe one of the sample drivers exposes a ramdisk volume device.
A big problem with many of these unusual storage driver architectures is they can’t pass Microsoft driver certification, which is essential if you want to sell a commercial product. One reason the storage stack is deeply tied to storport is that the crash dump/hibernate support runs the storport miniport in a fake environment. Technically, you could just create a KMDF bus driver on your hardware that exposed disk PDOs (which know how to process SRBs) and let the disk FDOs attach, but there is no way to pass the certification tests with this kind of architecture.
Another problem is that much of the management infrastructure wants the storage stack to look a certain way, like Storage Spaces layered on storport/NVMe physical devices. If you make your own storage stack architecture, this management infrastructure has no idea how to control it, and your product will integrate poorly with management layers like System Center Virtual Machine Manager.
I’m not optimistic the deep certification dependence on storport is likely to change in the near future. Measurements also suggest storport has some significant performance bottlenecks. It would be better not to expose a compound/RAID disk from the top edge of storport, because requests for a disk are forced through a queue, which performs poorly on multi-socket servers.
Jan