> How would that work? MIDI messages take effect in real time. If two
> applications are both sending MIDI messages simultaneously, latency for
> both is going to suffer.
Maybe, maybe not on the latency hit. I already wrote a custom (ASIO, non-native) audio driver for the same (composite USB) device that uses some aggressive techniques to achieve the same latency with multiple clients as with a single client, at least as long as the CPU can handle the load.
> Are you using the midiIn and midiOut APIs to talk to the device?
Yes, I think most users interact with the device that way.
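For context, the short-message side of that API packs each 3-byte MIDI message into a single DWORD, which is what midiOutShortMsg takes and what a midiIn callback delivers. A minimal sketch of that packing, with the Windows DWORD type stubbed out so it stands alone (the helper name is mine, not from winmm):

```c
#include <stdint.h>

/* On Windows this would be DWORD from <windows.h>; typedef'd here so
   the sketch compiles anywhere. */
typedef uint32_t DWORD;

/* Pack a 3-byte MIDI short message the way midiOutShortMsg expects:
   status in the low byte, data1 next, data2 above that. */
static DWORD midi_pack_short(uint8_t status, uint8_t data1, uint8_t data2)
{
    return (DWORD)status | ((DWORD)data1 << 8) | ((DWORD)data2 << 16);
}

/* Usage on Windows would be roughly:
 *   midiOutShortMsg(hMidiOut, midi_pack_short(0x90, 60, 100));
 * i.e. Note On, channel 1, middle C, velocity 100.
 */
```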
You’re right about the possibility of MIDI state getting out of sync between client applications. Our end users are typically DJs doing live performances while connected to a whole string of exotic hardware devices and running multiple pieces of software to automate their performances. Broadcasting to all listeners is exactly what we plan to do, potential havoc notwithstanding.
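Concretely, the broadcast is just a fan-out: each message arriving from the hardware is copied into every open client's input queue, and no client can stall the others. A minimal, platform-neutral sketch of the input side (the names and the fixed queue size are my own, for illustration only):

```c
#include <stdint.h>

#define MAX_CLIENTS 8
#define QUEUE_LEN   256  /* messages per client; arbitrary for the sketch */

/* One queued 3-byte MIDI short message. */
typedef struct { uint8_t bytes[3]; } MidiMsg;

typedef struct {
    int     open;             /* nonzero while a client has the port open */
    MidiMsg queue[QUEUE_LEN];
    int     head, tail;       /* simple ring-buffer indices */
} Client;

static Client clients[MAX_CLIENTS];

/* Broadcast one incoming message to every open client. Returns how many
   clients received it; a full queue drops the message for that client
   only, rather than blocking everyone. */
static int broadcast(const MidiMsg *m)
{
    int delivered = 0;
    for (int i = 0; i < MAX_CLIENTS; i++) {
        Client *c = &clients[i];
        if (!c->open)
            continue;
        int next = (c->tail + 1) % QUEUE_LEN;
        if (next == c->head)
            continue;         /* this client's queue is full: drop */
        c->queue[c->tail] = *m;
        c->tail = next;
        delivered++;
    }
    return delivered;
}

/* Pop the next queued message for one client; returns 0 if empty. */
static int client_read(int i, MidiMsg *out)
{
    Client *c = &clients[i];
    if (c->head == c->tail)
        return 0;
    *out = c->queue[c->head];
    c->head = (c->head + 1) % QUEUE_LEN;
    return 1;
}
```

In a real driver the queues would be filled at DPC level and drained by each client's pending read request; the point here is only that every listener sees every message, which is exactly where the state-sync havoc can come from.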
> Have you traced through the second open request in a
> debugger to see where the failure actually happens?
That’s a great suggestion, thanks.
> If MMSystem is doing the blocking in user
> mode, then the driver is irrelevant. In that case, you’d have to use
> something like DirectKS to talk to the driver more directly.
I doubt that’s the case, because Microsoft’s API docs describe approaches for handling multi-client MIDI usage in an (obsolete) NT4 user mode driver DLL.
> To answer your question, you could write a “port class” driver, which is
> the line you’re following above, or you could write an AVStream driver,
> which is a more “modern” architecture. USBaudio.sys is AVStream.
> However, I’m not yet convinced this will achieve your goal, assuming
> your goal is sensible.
OK. Simplicity is good for my purposes, since I only need to support a single virtual MIDI port passing messages both ways, with no software synth features. The only kicker is the multi-client requirement, and I’d like to be sure that’s achievable before going down any particular path.