Help with design decision (using sysvad virtual driver as base)

oldschool69 Member Posts: 5

Hi all,

I'm another developer with no previous experience in driver development asking
for some help :)

I landed in a new project where the main requirement is to build a noise-removal effect
that will be available system-wide, for all applications that have access to sound input/output
devices like Skype, Slack, MS Teams, etc.

As Windows audio driver development is a broad field, I'm really lost, so I started reading documentation and ended up at
the sysvad sample driver.

Looking more carefully at the sample code together with the documentation, I could not connect the dots
about how to process the buffers from the virtual devices and send the processed buffers to a real audio adapter.

After reading some threads in this forum, I found valuable information that gave me a direction to start some
high-level design.

I'd like to share this design and, if possible, get some guidance
from you.

Is this a feasible approach for the solution I'm looking for?

Is there any better or easier way to approach this problem?

Any information from you to shed some light on this would be great!

Thanks


Comments

  • Tim_Roberts Member - All Emails Posts: 13,498

    Yes, that's a feasible approach. Several companies are already doing this. The hard work, of course, is creating the noise reduction algorithm. If you don't already have that, then you're really too far behind to be competitive.

    Tim Roberts, [email protected]
    Providenza & Boekelheide, Inc.

  • oldschool69 Member Posts: 5
    edited July 29

    @Tim_Roberts said:
    Yes, that's a feasible approach. Several companies are already doing this. The hard work, of course, is creating the noise reduction algorithm. If you don't already have that, then you're really too far behind to be competitive.

    Hi Tim, thanks for your help!

    About the processing app running in user mode, I'm thinking of using the Core Audio APIs to communicate with the real audio devices:

    https://docs.microsoft.com/en-us/windows/win32/CoreAudio/core-audio-apis-in-windows-vista
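
    For the render side I imagine something roughly like this minimal sketch (just the standard WASAPI bring-up in shared mode; error handling omitted, and the actual processing loop left out):

    #include <windows.h>
    #include <mmdeviceapi.h>
    #include <audioclient.h>

    void OpenDefaultRenderDevice()
    {
        CoInitializeEx(nullptr, COINIT_MULTITHREADED);

        // Find the default render endpoint (the real speakers).
        IMMDeviceEnumerator* pEnum = nullptr;
        CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                         __uuidof(IMMDeviceEnumerator), (void**)&pEnum);

        IMMDevice* pDevice = nullptr;
        pEnum->GetDefaultAudioEndpoint(eRender, eConsole, &pDevice);

        // Activate an audio client on it in shared mode.
        IAudioClient* pClient = nullptr;
        pDevice->Activate(__uuidof(IAudioClient), CLSCTX_ALL, nullptr,
                          (void**)&pClient);

        WAVEFORMATEX* pFormat = nullptr;
        pClient->GetMixFormat(&pFormat);
        pClient->Initialize(AUDCLNT_SHAREMODE_SHARED, 0,
                            10 * 1000 * 1000, // 1 second, in 100-ns units
                            0, pFormat, nullptr);

        // ...IAudioRenderClient (obtained via GetService) would then be
        // fed with the processed buffers pulled from the driver.
    }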

    and, to communicate with the virtual driver's circular buffers as you mentioned in other posts, use
    IOCTLs and the CreateFile mechanism, something like this:

    https://www.ired.team/miscellaneous-reversing-forensics/windows-kernel-internals/sending-commands-from-userland-to-your-kernel-driver-using-ioctl
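
    i.e., on the app side, something like this sketch (the symbolic link name and the IOCTL code here are hypothetical placeholders; they have to match whatever the driver actually exposes):

    #include <windows.h>
    #include <winioctl.h>

    // Hypothetical IOCTL; must match the driver's definition.
    #define IOCTL_NRIOCTL_GET_CAPTURE_DATA \
        CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_BUFFERED, FILE_ANY_ACCESS)

    void ReadChunkFromDriver()
    {
        // Hypothetical symbolic link name published by the driver.
        HANDLE hDevice = CreateFileW(L"\\\\.\\NREngine",
                                     GENERIC_READ | GENERIC_WRITE, 0,
                                     nullptr, OPEN_EXISTING, 0, nullptr);

        BYTE outputBuffer[4096];
        DWORD bytesReturned = 0;
        DeviceIoControl(hDevice, IOCTL_NRIOCTL_GET_CAPTURE_DATA,
                        nullptr, 0, outputBuffer, sizeof(outputBuffer),
                        &bytesReturned, nullptr);

        CloseHandle(hDevice);
    }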

    Is it possible to use a higher-level API like the Core Audio APIs to access the circular buffers as well?

    Thanks

  • Tim_Roberts Member - All Emails Posts: 13,498

    You can use whatever API you want to talk to the real audio devices. The Core Audio APIs are pretty easy to use.

    Is it possible to use a higher-level API like the Core Audio APIs to access the circular buffers as well?

    Nope. The Audio Engine has no idea there is a back door. Your driver has to simulate hardware circular buffers to satisfy the WaveRT interface, but you'll need your own tracking for the back door.
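
    The tracking itself doesn't have to be fancy. A sketch, with illustrative names (none of this exists in sysvad):

    typedef struct _NR_RING_BUFFER {
        PUCHAR     Data;         // nonpaged copy buffer
        ULONG      Size;         // size in bytes
        ULONG      ReadOffset;   // next byte to be consumed
        ULONG      WriteOffset;  // next byte to be filled
        KSPIN_LOCK Lock;         // dispatch code and stream callbacks both touch this
    } NR_RING_BUFFER, *PNR_RING_BUFFER;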

    Tim Roberts, [email protected]
    Providenza & Boekelheide, Inc.

  • oldschool69 Member Posts: 5

    Got it, thanks for your help Tim! Let's get our hands dirty :)

  • oldschool69 Member Posts: 5

    Hi Tim, after some work I made a little progress on the
    communication between the user-mode app and the virtual driver.

    It was possible to get the sub-device handle by calling CreateFile.

    I also created a dispatcher function in the virtual driver
    to handle this class of IRPs:

    DriverObject->MajorFunction[IRP_MJ_DEVICE_CONTROL] = NREngineHandler;
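
    The handler skeleton so far looks roughly like this (a sketch; I'm assuming unrecognized requests must be handed back to PortCls via PcDispatchIrp, since PortCls registered the original dispatch entries):

    NTSTATUS NREngineHandler(PDEVICE_OBJECT DeviceObject, PIRP Irp)
    {
        PIO_STACK_LOCATION stack = IoGetCurrentIrpStackLocation(Irp);
        ULONG code = stack->Parameters.DeviceIoControl.IoControlCode;

        if (code == IOCTL_NRIOCTL_METHOD_IN_BUFFERED)
        {
            // METHOD_BUFFERED: input and output share SystemBuffer.
            // ...fill Irp->AssociatedIrp.SystemBuffer here...
            Irp->IoStatus.Status = STATUS_SUCCESS;
            Irp->IoStatus.Information = 0; // bytes written back
            IoCompleteRequest(Irp, IO_NO_INCREMENT);
            return STATUS_SUCCESS;
        }

        // Not ours: let PortCls handle it.
        return PcDispatchIrp(DeviceObject, Irp);
    }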



    My first naive approach was to send a custom IOCTL from the user-mode app to the virtual
    driver, copy a chunk of data from the cyclic buffer represented by m_pDmaBuffer
    to a buffer allocated by the user-mode app, and then save it to a WAV file:

    bRc = DeviceIoControl(hDevice, (DWORD)IOCTL_NRIOCTL_METHOD_IN_BUFFERED,
                          NULL, 0, &outputBuffer, outputBufferSize,
                          &bytesReturned, NULL);



    Of course, it did not work :)

    I read in other threads on this forum that some additional cyclic buffers need to be created to hold copies
    of the original ones, and then IOCTLs are sent from the user-mode app to copy from these auxiliary buffers
    instead of the original ones.

    Also, the copied buffers, along with notifications, need to be enqueued using a mechanism like the inverted call model,
    as described here:

    https://www.osr.com/nt-insider/2013-issue1/inverted-call-model-kmdf/

    to notify the application that buffers were filled by the audio engine and are "ready" to be read or
    written by the user-mode application.
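
    From what I understood, the driver side of the inverted call would look roughly like this sketch (NR_DEVICE_CONTEXT and its fields are my own hypothetical names, and a real version needs a cancel-safe queue, as the article explains):

    // App sends a "notify me" IOCTL; the driver parks it.
    NTSTATUS PendNotificationIrp(PNR_DEVICE_CONTEXT Ctx, PIRP Irp)
    {
        IoMarkIrpPending(Irp);
        ExInterlockedInsertTailList(&Ctx->PendingIrpList,
                                    &Irp->Tail.Overlay.ListEntry,
                                    &Ctx->PendingIrpLock);
        return STATUS_PENDING;
    }

    // Called after ReadBytes/WriteBytes has moved data into a ring.
    VOID CompleteOneNotification(PNR_DEVICE_CONTEXT Ctx)
    {
        PLIST_ENTRY entry = ExInterlockedRemoveHeadList(&Ctx->PendingIrpList,
                                                        &Ctx->PendingIrpLock);
        if (entry == NULL) return; // app hasn't queued a request yet

        PIRP irp = CONTAINING_RECORD(entry, IRP, Tail.Overlay.ListEntry);
        irp->IoStatus.Status = STATUS_SUCCESS;
        irp->IoStatus.Information = 0; // app fetches data with a separate IOCTL
        IoCompleteRequest(irp, IO_NO_INCREMENT);
    }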

    I was thinking of creating a copy of the cyclic buffers and then reading/writing them from the

    CMiniportWaveRTStream::ReadBytes and CMiniportWaveRTStream::WriteBytes functions



    but I'm not sure if it's the right way.

    If you could provide some more details about this communication mechanism,
    it would help me a lot.

    Thanks

  • Tim_Roberts Member - All Emails Posts: 13,498

    Of course, it did not work

    Why not? What did it do?

    In the WriteBytes call, you have to push data into the WaveRT buffer that the audio engine can pull out later. That data has to come from somewhere. Since that buffer really "belongs" to the Audio Engine, you'll probably need your own. Similarly, in the ReadBytes call, you are told that the Audio Engine has shoved data into the WaveRT buffer that needs to be consumed. Again, you'll need someplace to put that data before the Audio Engine writes over it later.
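
    To make that concrete, a sketch of the render side. The member names approximate sysvad's; RingWrite, RenderRing, and m_pAdapterCommon are hypothetical hookups you'd have to add:

    // Render path: the engine just produced ByteDisplacement bytes in
    // the WaveRT buffer; siphon them off before they're overwritten.
    VOID CMiniportWaveRTStream::ReadBytes(ULONG ByteDisplacement)
    {
        ULONG offset = (ULONG)(m_ullLinearPosition % m_ulDmaBufferSize);

        while (ByteDisplacement > 0)
        {
            // Copy up to the end of the buffer, then wrap around.
            ULONG chunk = min(ByteDisplacement, m_ulDmaBufferSize - offset);
            m_pAdapterCommon->RingWrite(RenderRing,
                                        m_pDmaBuffer + offset, chunk);
            offset = (offset + chunk) % m_ulDmaBufferSize;
            ByteDisplacement -= chunk;
        }
    }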

    Inverted call is fine, but you need to remember that this all happens in real time. You can't hold things up waiting for a response, in either direction. The Audio Engine assumes there is hardware at the other end of that buffer, hardware that is producing and consuming at a constant rate. Your app will need to keep the circular buffers at a relatively constant fill level. You'll be supplying data just in time, and you'll need to pull the data out almost as soon as it gets there.

    Tim Roberts, [email protected]
    Providenza & Boekelheide, Inc.

  • oldschool69 Member Posts: 5

    Why not? What did it do?

    Not sure how I can access the m_pDmaBuffer buffer from my dispatcher, so I tried this approach:

    stream = static_cast<CMiniportWaveRTStream*>(_DeviceObject->DeviceExtension);
    if (stream != NULL) {
        DPF(D_TERSE, ("***waveRT stream address %p", stream));
        buffer = stream->m_pDmaBuffer;
        if (buffer != NULL) {
            DPF(D_TERSE, ("***buffer address %p", buffer));
            RtlCopyBytes(outputBuffer, buffer, outputBufferLength);
            _Irp->IoStatus.Information = outputBufferLength;
        }
    }

    But the pointer address I'm getting in the dispatcher is not the same as the one I'm getting in the ReadBytes function.

    Anyway, I copied it into the buffer of my user-mode app and, from there, saved it to a .wav file.

    As I have no experience working with audio, I tried to play the saved file using VLC and Windows Media Player, but they say
    it's an invalid audio file. I suspect that I need to perform some encoding before saving it to a file.
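
    Digging a bit more, it looks like a raw PCM dump won't play because players expect a RIFF/WAVE header in front of the samples. A minimal sketch of writing one; the format values are assumptions and have to match the engine's actual mix format (which in shared mode is often 32-bit float rather than 16-bit PCM):

    #include <cstdio>
    #include <cstdint>

    // Writes the canonical 44-byte header for uncompressed PCM data.
    void WriteWavHeader(FILE* f, uint32_t dataBytes,
                        uint16_t channels, uint32_t sampleRate,
                        uint16_t bitsPerSample)
    {
        uint32_t byteRate   = sampleRate * channels * bitsPerSample / 8;
        uint16_t blockAlign = (uint16_t)(channels * bitsPerSample / 8);
        uint32_t riffSize   = 36 + dataBytes;
        uint32_t fmtSize    = 16;
        uint16_t pcmTag     = 1; // WAVE_FORMAT_PCM

        fwrite("RIFF", 1, 4, f);      fwrite(&riffSize, 4, 1, f);
        fwrite("WAVE", 1, 4, f);
        fwrite("fmt ", 1, 4, f);      fwrite(&fmtSize, 4, 1, f);
        fwrite(&pcmTag, 2, 1, f);     fwrite(&channels, 2, 1, f);
        fwrite(&sampleRate, 4, 1, f); fwrite(&byteRate, 4, 1, f);
        fwrite(&blockAlign, 2, 1, f); fwrite(&bitsPerSample, 2, 1, f);
        fwrite("data", 1, 4, f);      fwrite(&dataBytes, 4, 1, f);
        // ...followed by dataBytes bytes of raw samples.
    }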

    But I think encoding is not relevant when I pass the data directly to the live speakers using WASAPI, is that correct?

    I'm trying some approaches in a trial-and-error fashion and learning something in the process :)

    Thanks for your help!

  • Tim_Roberts Member - All Emails Posts: 13,498

    I'm not sure this is really a project for a beginner. There are a lot of things to know, and you do just seem to be hacking around.

    By default, the device context belongs to the Port Class driver that wraps yours. Its contents are not knowable. You CERTAINLY cannot just assume that it happens to point to one of your streams -- it doesn't. Remember, the device context is global to the entire adapter. It has to manage filters and their pins and streams. You have to be very, very careful to think about what object you are working with, and what information it knows. The streams are the lowest level; they can find their parent filter, and the parent adapter object, but the reverse is not true -- you can't go deeper into the hierarchy.

    If you want your own device context section, which you certainly do, then you need to tell port class to add some extra. You do that as the last parameter in the call to PcAddAdapterDevice. The port class's context is PORT_CLASS_DEVICE_EXTENSION_SIZE bytes long, so you'll pass PORT_CLASS_DEVICE_EXTENSION_SIZE+sizeof(DEVICE_CONTEXT), for whatever your context is.

    Then, you'll probably want a function called GetDeviceContext that takes a device object and returns to you the part of the device context that belongs to you: (DEVICE_CONTEXT*)((PUCHAR)DeviceObject->DeviceExtension + PORT_CLASS_DEVICE_EXTENSION_SIZE).
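
    Putting both pieces together, a sketch (StartDevice and MAX_MINIPORTS are sysvad's names; DEVICE_CONTEXT and the PADAPTERCOMMON member stand in for whatever you define):

    typedef struct _DEVICE_CONTEXT {
        PADAPTERCOMMON AdapterCommon;  // your common hookup point
    } DEVICE_CONTEXT;

    NTSTATUS AddDevice(PDRIVER_OBJECT DriverObject,
                       PDEVICE_OBJECT PhysicalDeviceObject)
    {
        // Ask PortCls to append our context after its own extension.
        return PcAddAdapterDevice(DriverObject, PhysicalDeviceObject,
                                  PCPFNSTARTDEVICE(StartDevice),
                                  MAX_MINIPORTS,
                                  PORT_CLASS_DEVICE_EXTENSION_SIZE +
                                      sizeof(DEVICE_CONTEXT));
    }

    // Everywhere else: skip past PortCls's portion of the extension.
    DEVICE_CONTEXT* GetDeviceContext(PDEVICE_OBJECT DeviceObject)
    {
        return (DEVICE_CONTEXT*)((PUCHAR)DeviceObject->DeviceExtension
                                 + PORT_CLASS_DEVICE_EXTENSION_SIZE);
    }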

    Your dispatcher cannot access the DMA buffer directly. Your dispatcher is a global which can get access to the adapter through the device context. At that point, you don't know which filter, which pin, or which stream you're talking to. Remember, your driver has multiple streams: at least one going in and one going out. You will need to set up your private circular buffers in the IAdapterCommon object, and remember a pointer to that in your device context. ReadBytes and WriteBytes are part of the stream objects. They can also get to the adapter object, which is your common hookup point. So, those functions will have to copy to/from the DMA buffer into your private circular buffers in the adapter object. Your dispatcher can then pull from the private circular buffers (again through the adapter object) and copy from/to your client.
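
    So inside your IOCTL handler the flow is roughly this (RingRead and RenderRing are hypothetical names for the method and selector you'd add to your adapter interface):

    // Dispatcher -> device context -> adapter object -> private ring.
    DEVICE_CONTEXT* ctx = GetDeviceContext(DeviceObject);
    PIO_STACK_LOCATION stack = IoGetCurrentIrpStackLocation(Irp);

    PVOID outBuf = Irp->AssociatedIrp.SystemBuffer;  // METHOD_BUFFERED
    ULONG outLen = stack->Parameters.DeviceIoControl.OutputBufferLength;

    ULONG copied = ctx->AdapterCommon->RingRead(RenderRing, outBuf, outLen);
    Irp->IoStatus.Information = copied;
    Irp->IoStatus.Status = STATUS_SUCCESS;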

    Tim Roberts, [email protected]
    Providenza & Boekelheide, Inc.

  • Tim_Roberts Member - All Emails Posts: 13,498

    Also remember that you will need one private circular buffer in each direction. The stream will need to know which direction it is going, and it's easy to get those confused. You need to think about "am I speaker/renderer here, or am I microphone/capture here?" Each stream only worries about one of them, but your dispatcher will have access to both. You'll also probably need a chart to remind you whether you are reading from or writing to the buffer. ReadBytes, for example, is called in the speaker/renderer path. It reads from the DMA buffer, and writes to the speaker circular buffer. Your corresponding ReadFile dispatcher, then, needs to read from the speaker/renderer circular buffer.
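
    For example, a crib sheet like this (RingRead/RingWrite as in the earlier sketches):

    Path              Stream callback   WaveRT/DMA buffer   Private ring buffer
    ----------------  ---------------   -----------------   ----------------------
    Render (speaker)  ReadBytes         reads from it       writes the render ring
    Capture (mic)     WriteBytes        writes to it        reads the capture ring

    Dispatcher:       "get speaker data" IOCTL -> RingRead(RenderRing, ...)
                      "put mic data" IOCTL     -> RingWrite(CaptureRing, ...)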

    Tim Roberts, [email protected]
    Providenza & Boekelheide, Inc.
