

Design help: Virtual Audio Device to transfer WAVE to user space memory

ntnewbie Member Posts: 1

Hi! Let me begin this by saying that this is my first foray into NT drivers and kernel land. Please forgive any ignorance I may present in the following.

The problem to solve is the following:
I want to connect a Windows application to a Windows-native (but fake) audio device. From its perspective, it should look like it is connecting to any plain old set of speakers & microphone. This application cannot be modified for this purpose.
In a separate user-space application (the "bridge" app) I want to copy data into the microphone WAVE stream and copy data out of the speaker WAVE stream used by that application. In other words, the bridge (in conjunction with a VAD driver) is supposed to mock a real audio device, feeding it capture data from user space and pulling its rendered data back into user space. This is supposed to happen at regular intervals dictated by the bridge (i.e. it is supposed to be the clock source).
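
Here is roughly what I imagine the bridge side looking like, just to make the intended data flow concrete. The device name and the two IOCTL codes are invented for illustration; they are not taken from SYSVAD:

    #include <windows.h>
    #include <winioctl.h>
    #include <vector>

    // Hypothetical control codes; the real driver would define these in a
    // header shared with the bridge.
    #define IOCTL_VAD_READ_SPEAKER CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_BUFFERED, FILE_ANY_ACCESS)
    #define IOCTL_VAD_WRITE_MIC    CTL_CODE(FILE_DEVICE_UNKNOWN, 0x801, METHOD_BUFFERED, FILE_ANY_ACCESS)

    int main()
    {
        // Hypothetical symbolic link name the VAD driver would publish.
        HANDLE hVad = CreateFileW(L"\\\\.\\MyVirtualAudio",
                                  GENERIC_READ | GENERIC_WRITE, 0, nullptr,
                                  OPEN_EXISTING, 0, nullptr);
        if (hVad == INVALID_HANDLE_VALUE)
            return 1;

        std::vector<BYTE> speakerChunk(4096), micChunk(4096);

        for (;;)
        {
            DWORD bytes = 0;

            // Pull whatever the application has "played" to the fake speaker.
            DeviceIoControl(hVad, IOCTL_VAD_READ_SPEAKER, nullptr, 0,
                            speakerChunk.data(), (DWORD)speakerChunk.size(),
                            &bytes, nullptr);

            // ... hand speakerChunk off, produce the next micChunk ...

            // Push the next block of fake microphone data.
            DeviceIoControl(hVad, IOCTL_VAD_WRITE_MIC,
                            micChunk.data(), (DWORD)micChunk.size(),
                            nullptr, 0, &bytes, nullptr);

            Sleep(10);   // the bridge's 10 ms period
        }
    }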

Now all my research and tinkering has led me to the conclusion that this will most likely require a KMDF driver. I have looked at the official Microsoft SYSVAD sample and tried to implement the data exchange with IOCTLs sent from the bridge.
Is going with WaveRT the right choice here? It does simulated DMA transfers with the ToneGenerator and SaveData classes, but I cannot figure out how to trigger these transfers from an IOCTL against buffers in user space. Is that even possible? Do I register an IoDeviceControl handler on the WaveRT stream and then RtlCopyMemory between the kernel and user buffers?

Any guidance would be very welcome.
Best,
Fabian

Comments

  • Tim_Roberts Member - All Emails Posts: 12,914
    via Email
    ntnewbie wrote:
    > I want to connect a Windows application to a Windows-native (but fake) Audio Device. From its perspective, it should look like it is connecting to any plain old set of speakers & microphone. This application can not be modified for this purpose.

    This is such a common need, it would behoove Microsoft to create a
    stripped-down version of SysVad that just implements this.  The problem
    with the SysVad sample is that it demonstrates virtually every feature
    that is available to an audio driver developer.  It has become a
    demonstration showroom for new gadgets.  As a result, it is enormous
    (50,000 lines of code).  Your task needs about 15% of the code in
    sysvad, and it is very difficult to figure out how to extract the pieces
    that are necessary.  I just completed two very similar projects for
    clients.  In both cases, I went back to the old MSVAD sample in the
    Vista DDK (20,000 lines of code), and I still removed about 2/3 of that.


    > Now all my research and tinkering has led me to the conclusion that this will most likely require a KMDF driver.

    It depends.  If you want this to be seen as a system audio device,
    usable by arbitrary applications, then it needs to be a kernel driver,
    although not necessarily KMDF.  The most recent versions of sysvad do
    use KMDF in miniport mode, but only in a very limited way.
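
    For reference, the "miniport mode" part amounts to little more than the
    sketch below: DriverEntry still hands the dispatch table to PortCls (via
    PcInitializeAdapterDriver), and the WDF driver object is created with no
    dispatch override, so WDF is available as a convenience library only.
    This is a sketch, not a complete DriverEntry:

        // Sketch only: create the WDF driver object without letting WDF take
        // over IRP dispatching, so PortCls remains in charge.  This is
        // essentially what the recent SYSVAD versions do.
        #include <ntddk.h>
        #include <wdf.h>

        extern "C" NTSTATUS
        CreateWdfMiniportDriver(_In_ PDRIVER_OBJECT DriverObject,
                                _In_ PUNICODE_STRING RegistryPath)
        {
            WDF_DRIVER_CONFIG config;
            WDF_DRIVER_CONFIG_INIT(&config, WDF_NO_EVENT_CALLBACK);  // no EvtDriverDeviceAdd
            config.DriverInitFlags |= WdfDriverInitNoDispatchOverride;

            return WdfDriverCreate(DriverObject, RegistryPath,
                                   WDF_NO_OBJECT_ATTRIBUTES, &config,
                                   WDF_NO_HANDLE);
        }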


    > Is going with WaveRT the right choice here? It does DMA transfers with the ToneGenerator and SaveData, but I can not figure out how to trigger these DMAs with IOCTL to buffers in user-space. Is that even possible? Do I register IoDeviceControl in the WaveRTStream and then RtlCopyMemory between the kernel and user buffers?

    You don't get to trigger the simulated DMA.  You aren't in charge here. 
    The Audio Engine asks the driver for data in its own time, on its own
    schedule, in a real-time thread.  You have to respond. You can't go
    communicate with the user-mode thread.  What that means is that you must
    implement a couple of circular buffers, one for the fake microphone, one
    for the fake speaker.  The Audio Engine fills the fake speaker buffer,
    and your app reads it out with a ReadFile or an ioctl.  Your app fills
    the fake microphone buffer with WriteFile or an ioctl, and the Audio
    Engine reads it out.
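
    Schematically, each of those buffers is just a fixed-size ring with a lock
    around it, something like the sketch below.  All the names here are
    invented; in a SysVad-style driver the engine side would be driven from
    the WaveRT copy and position callbacks, and the app side from your
    read/write or ioctl handler:

        // Sketch only: a fixed-size ring guarded by a spin lock (initialize
        // Lock with KeInitializeSpinLock before first use).  One instance for
        // the fake speaker, one for the fake microphone.
        #include <ntddk.h>

        typedef struct _RING_BUFFER {
            KSPIN_LOCK Lock;
            UCHAR      Data[64 * 1024];
            ULONG      ReadPos;    // next byte the consumer will take
            ULONG      WritePos;   // next byte the producer will fill
            ULONG      Count;      // bytes currently stored
        } RING_BUFFER, *PRING_BUFFER;

        // Producer side: for the fake speaker this is the Audio Engine handing
        // the driver render data; for the fake microphone it is the bridge's
        // WriteFile/ioctl.
        ULONG RingWrite(_Inout_ PRING_BUFFER Ring,
                        _In_reads_bytes_(Length) const UCHAR* Src,
                        _In_ ULONG Length)
        {
            KIRQL irql;
            ULONG copied = 0;
            KeAcquireSpinLock(&Ring->Lock, &irql);
            while (copied < Length && Ring->Count < sizeof(Ring->Data)) {
                Ring->Data[Ring->WritePos] = Src[copied++];
                Ring->WritePos = (Ring->WritePos + 1) % sizeof(Ring->Data);
                Ring->Count++;
            }
            KeReleaseSpinLock(&Ring->Lock, irql);
            return copied;   // anything not copied is an overrun and is dropped
        }

        // Consumer side: the mirror image, used by the other party.
        ULONG RingRead(_Inout_ PRING_BUFFER Ring,
                       _Out_writes_bytes_(Length) UCHAR* Dst,
                       _In_ ULONG Length)
        {
            KIRQL irql;
            ULONG copied = 0;
            KeAcquireSpinLock(&Ring->Lock, &irql);
            while (copied < Length && Ring->Count > 0) {
                Dst[copied++] = Ring->Data[Ring->ReadPos];
                Ring->ReadPos = (Ring->ReadPos + 1) % sizeof(Ring->Data);
                Ring->Count--;
            }
            KeReleaseSpinLock(&Ring->Lock, irql);
            return copied;   // a short read means the producer fell behind
        }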

    The wave model doesn't really matter.  SysVad uses WaveRT, MSVAD uses
    WaveCyclic.  Without hardware, both come down to essentially the same
    low-level calls.

    For what it's worth, the cool audio driver guys (including a few very
    helpful members of the Microsoft audio team) all hang out on the
    [wdmaudiodev] mailing list, at https://www.freelists.org/list/wdmaudiodev .

    Tim Roberts, [email protected]
    Providenza & Boekelheide, Inc.
