Help with design decision (using sysvad virtual driver as base)

@andrefellows what is PNOISE_DATA_STRUCTURE?

That is his device context structure, custom to his driver. It has all the data he needs to keep track of to do his work.

@Tim_Roberts How can I create it? Is the structure the same as _PortClassDeviceContext? I have seen above that you call GetDeviceContext, but where is the GetDeviceContext function?

Do you have any driver experience at all? These are very fundamental questions. EVERY driver has a context structure that holds all of the data for each device instance. In the case of a port class driver, things are more complicated because the port (from Microsoft) and the miniport (provided by you) act as one device and share one context. Port class creates the context, but it lets you tack on extra space for your own use, in PcAddAdapterDevice.
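As an illustration only (the AddDevice routine, the StartDevice callback, and the MAX_MINIPORTS constant below are placeholder names in the usual sysvad style, and NOISE_DATA_STRUCTURE is the structure from his code), reserving that extra space looks roughly like this:

    // Sketch: ask port class for a device extension big enough to hold its own
    // data (PORT_CLASS_DEVICE_EXTENSION_SIZE bytes) plus our private context.
    NTSTATUS AddDevice(
        PDRIVER_OBJECT  DriverObject,
        PDEVICE_OBJECT  PhysicalDeviceObject
        )
    {
        ULONG extensionSize =
            PORT_CLASS_DEVICE_EXTENSION_SIZE + sizeof(NOISE_DATA_STRUCTURE);

        // StartDevice is the PCPFNSTARTDEVICE callback; MAX_MINIPORTS is the
        // usual limit on subdevices in the sysvad-style samples.
        return PcAddAdapterDevice(DriverObject,
                                  PhysicalDeviceObject,
                                  StartDevice,
                                  MAX_MINIPORTS,
                                  extensionSize);
    }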

The device context is stored in the DEVICE_OBJECT in the DeviceExtension field. In the case of a port class driver, your context starts after the port class section, and we know that part is PORT_CLASS_DEVICE_EXTENSION_SIZE bytes long. So, this line from above finds his part of the extension:

ExtensionData = (PNOISE_DATA_STRUCTURE)((PCHAR)_DeviceObject->DeviceExtension + PORT_CLASS_DEVICE_EXTENSION_SIZE);

In my port class drivers, I create a function called GetDeviceContext to do exactly that so I don’t have to type that repeatedly.
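A minimal version of such a helper looks something like this (using the PNOISE_DATA_STRUCTURE type from his code; you would use your own context type):

    // Return a pointer to our private context, which lives immediately after
    // the PORT_CLASS_DEVICE_EXTENSION_SIZE bytes that port class uses itself.
    PNOISE_DATA_STRUCTURE GetDeviceContext(PDEVICE_OBJECT DeviceObject)
    {
        return (PNOISE_DATA_STRUCTURE)
            ((PCHAR)DeviceObject->DeviceExtension + PORT_CLASS_DEVICE_EXTENSION_SIZE);
    }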

@Tim_Roberts I have no previous driver experience. Sorry for my bad English; this is not something I am learning in school, so I am feeling my way through it. As a beginner, I am trying to follow the available examples to better understand this problem. Sorry for bothering you. Where can I see more examples? Is there any other way to send audio data to sysvad and have it written by WriteBytes without using an IOCTL? I was advised to use this.

…to better understand this problem.

What problem? You haven’t told us anything about what goal you’re trying to achieve.

… send audio data to sysvad and write it to writebytes without using ioctl …

The link you included shows how to send data to a speaker endpoint. WriteBytes is used to manage data for the microphone endpoint. Totally separate paths. You need to think about what you have. Sysvad is a fake speaker that writes the speaker data to file, and a fake microphone that generates a sine wave. That’s what it does. To do anything else, you have to write the code to do it, and that means inventing some kind of “back door” to get data in and out.

@Tim_Roberts I have read many of your answers about getting audio from user mode into an application (e.g., Skype). I am actually copying the exact same code from the related questions, but I don't understand the concepts or constructs, or how to implement them in code, as I asked above. Can you please give me some sample projects? Thank you for the answer.

You can see from the rather good chart above that there are a lot of pieces to this, and they all have to work together. It's complicated, and it has to run in real time. If you don't have experience writing audio applications AND experience writing drivers, then you will never make this work. Sorry to be blunt. Even the big companies hire people to do this kind of thing.

There are no samples. Because so many people want to do this, I’ve suggested for the last 15 years that the Microsoft audio team create a much simplified version of SysVad that has external hooks to circular buffers, but so far they’ve been busy doing real work.

You need to put circular buffers in SysVad. You need to add ioctls that allow you to pull data in and out. You need to write an application to do the "in and out" by calling those ioctls. You need to write a test application to take the place of Skype by reading and writing using the WASAPI APIs. You need to decide how to handle volume and mute controls. You need to figure out how much of SysVad you can delete. None of those pieces are easy.
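To make that concrete, here is a sketch of what the shared "back door" header might look like. None of these names exist in SysVad or in any Microsoft header (beyond CTL_CODE itself); they only illustrate the shape of the interface.

    // Hypothetical shared header for the back-door interface between the
    // modified SysVad driver and the user-mode pump application.
    #include <winioctl.h>   // CTL_CODE (devioctl.h on the kernel side)

    #define FILE_DEVICE_VAUDIO  0x8000   // custom device types start at 0x8000

    // Application pushes microphone data in; the fake capture endpoint's
    // WriteBytes path drains it from a circular buffer.
    #define IOCTL_VAUDIO_WRITE_CAPTURE \
        CTL_CODE(FILE_DEVICE_VAUDIO, 0x800, METHOD_BUFFERED, FILE_WRITE_ACCESS)

    // Application pulls speaker data out; the fake render endpoint fills a
    // circular buffer with whatever Skype played.
    #define IOCTL_VAUDIO_READ_RENDER \
        CTL_CODE(FILE_DEVICE_VAUDIO, 0x801, METHOD_BUFFERED, FILE_READ_ACCESS)

    // One of these per direction, stored in the driver's device context.
    typedef struct _VAUDIO_RING
    {
        UCHAR  Data[64 * 1024];   // size is arbitrary for the sketch
        ULONG  ReadOffset;        // consumer position
        ULONG  WriteOffset;       // producer position
    } VAUDIO_RING, *PVAUDIO_RING;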

@Tim_Roberts Thanks for the advice, sir.

@Tim_Roberts Can I use Virtual Audio Cable to replace the IOCTLs for this problem? Thank you for the answer.

Virtual Audio Cable certainly lets you route sound from Skype to another user-mode application. That doesn't "replace ioctl"; it eliminates the entire need for a custom driver. It becomes the boxes in blue up above.

Hi Tim, it's me again :)

Just a curiosity: do you think it's possible to implement this same solution using a filter driver attached to a real audio driver?

My guess is no, because all the audio data sent by applications goes into the MS Audio Engine through the audio APIs, and by attaching a filter driver to a real audio driver we can see some IRPs but we cannot access the audio buffers.

If it is possible, what would be the pros and cons of this alternative?

Thanks!

… this same solution …

You don't say what solution you're talking about. If you're talking about routing data to and from a monitoring application, then the answer is "no". All modern hardware audio drivers use the WaveRT model, where the hardware's circular buffer and registers are mapped directly into the Audio Engine process. The driver is not involved in streaming in any way, so there's nothing to intercept.

I'm talking about the solution discussed in the previous messages from this thread: capturing audio buffers from other apps and doing some processing on them. Anyway, I think you already answered my question, thanks!

@Tim_Roberts Hi sir, I'm using a named pipe instead of an IOCTL and it seems to be working fine. Are there any disadvantages to using a named pipe instead of an IOCTL? Thank you.