I have successfully built and installed the HID Injector sample from Microsoft. I see in device manager the “Virtual HID Framework (VHF) HID device” under the Human Interface Devices category.
I know that the HID Injector simulates standard Windows devices (keyboard, mouse, touch panel, etc.) because its HID report descriptors describe those devices. So a HID command that simulates a keystroke will be forwarded to the “Virtual HID Framework (VHF) HID device”, which in turn “injects” the command into the operating system for processing, the same as if I had pressed a key on a physical keyboard.
The flow of data is:
Application connects to the HID Injector and sends a HID command,
I created a new HID report descriptor that includes a vendor-defined Usage Page and Usages. Because the usages are vendor-defined, this device will not be injecting its commands into the operating system’s input stack.
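For context, a minimal vendor-defined report descriptor looks roughly like the sketch below. The specific usage page (0xFF00), usage numbers, and 64-byte report size are illustrative choices, not taken from my actual descriptor:

```python
# Sketch of a minimal vendor-defined HID report descriptor (illustrative
# values only). Usage pages in the range 0xFF00-0xFFFF are vendor-defined,
# so the Windows input stack does not interpret reports from such a device
# as keyboard/mouse/touch input.
VENDOR_DESCRIPTOR = bytes([
    0x06, 0x00, 0xFF,  # USAGE_PAGE (Vendor Defined 0xFF00)
    0x09, 0x01,        # USAGE (Vendor Usage 1)
    0xA1, 0x01,        # COLLECTION (Application)
    0x15, 0x00,        #   LOGICAL_MINIMUM (0)
    0x26, 0xFF, 0x00,  #   LOGICAL_MAXIMUM (255)
    0x75, 0x08,        #   REPORT_SIZE (8 bits per field)
    0x95, 0x40,        #   REPORT_COUNT (64 fields -> 64-byte reports)
    0x09, 0x01,        #   USAGE (Vendor Usage 1)
    0x81, 0x02,        #   INPUT (Data,Var,Abs)
    0x09, 0x01,        #   USAGE (Vendor Usage 1)
    0x91, 0x02,        #   OUTPUT (Data,Var,Abs)
    0xC0,              # END_COLLECTION
])
```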
Is it possible to “inject” a HID report to another application instead of the operating system? I want to accomplish the following data flow:
Application 1 connects to HID Injector,
Application 2 connects to “Virtual HID Framework (VHF) HID device”,
Are you the same person as @geeklostinwoods? Because he had the exact same insane question. There are a hundred ways to do interprocess communication. Writing a fake HID device is not one of them.
Of course it’s possible; you practically wrote a design document above. Your injector just needs to send IOCTLs directly to the device. The driver then stores them in a local data store and submits them as reports when the HID application requests them.
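The store-and-complete logic described above can be modeled in user mode. This is only an illustration of the queuing behavior, not driver code; the real driver would receive reports via IOCTLs and complete the HID application’s pending read requests:

```python
from collections import deque

class FakeHidDriver:
    """User-mode model of the proposed driver logic (illustration only):
    the injector pushes reports in, and the HID application's pending
    read requests are completed from the local store in FIFO order."""

    def __init__(self):
        self._reports = deque()        # local data store for injected reports
        self._pending_reads = deque()  # read requests waiting for a report

    def ioctl_inject(self, report: bytes):
        """Injector side: store a report, completing a pending read if any."""
        if self._pending_reads:
            self._pending_reads.popleft()(report)  # complete the oldest read
        else:
            self._reports.append(report)

    def read_report(self, complete):
        """HID-application side: 'complete' is called with the next report,
        either immediately or once the injector supplies one."""
        if self._reports:
            complete(self._reports.popleft())
        else:
            self._pending_reads.append(complete)

received = []
drv = FakeHidDriver()
drv.read_report(received.append)  # read arrives before any report: pends
drv.ioctl_inject(b"\x01\x02")     # completes the pending read
drv.ioctl_inject(b"\x03\x04")     # no reader waiting: queued
drv.read_report(received.append)  # completed immediately from the store
# received == [b"\x01\x02", b"\x03\x04"]
```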
I failed to mention that Application 1 and Application 2 both have the ability to send and receive HID commands, so they both need to connect to the HID Injector driver. Application 1 already uses the interface provided by the injector. I wanted Application 2 to connect to the PDO, which in this case is the “Virtual HID Framework (VHF) HID device”.
Presumably you don’t control anything about either application and simply want to ‘glue’ them together via the input stack?
Also, presumably, neither Application 1 nor Application 2 relies on Windows’ standard input processing for keyboard or mouse input, but instead opens devices directly?
Yes, the question is how they receive input. Do they simply process normal input via GDI and the message loop, or do they do less common things like opening devices directly? If you don’t know, then almost certainly they process input normally, which means you can control Application 1 from Application 2 entirely in UM with no driver at all, simply by calling SendInput. If you don’t control the source for either of them, then use a low-level keyboard hook or the Detours library to ‘capture’ the input you want to transmit from one to the other.
All of this can be done in UM, and all of it has enforced security boundaries within Windows.