AVStream audio driver and PortCls audio driver

I am new to audio drivers, especially PortCls audio drivers. I developed an AVStream virtual audio driver, so I know a little bit about AVStream drivers. In an AVStream driver, the IKsControl interface is implemented on AVStream filters and pins, and clients in kernel mode can access AVStream automation objects (properties, events and methods) through IKsControl. I just wonder whether PortCls has this kind of functionality? Thanks for your help.

Yong

The answer to that would be mostly “yes”. PortCls is pretty close to being an AvStream driver itself, actually.

There is support for all of these items. However, I recall there are some restrictions (no events on a filter, just pins, for instance). Part of that was an attempt to make KS “easier” by abstracting away parts of the interface.

The easiest place to sort this out further for yourself would be the audio class documentation on the PortCls structures, starting with PCAUTOMATION_TABLE and some of the assorted links.

Thank you very much for your information. Another question: in AVStream, clients (user mode) can call IKsControl::KsMethod (or KsEvent, KsProperty) to communicate (read or write data) with the kernel-mode driver. Can PortCls do that, and how? This may be a silly question, but your answer will save me much time. Thank you.
Yong

Portcls takes the methods, properties, and events listed in the PCAUTOMATION_TABLE, adds in a few of its own, and reports them to KS exactly as an AvStream driver would. They work similarly to the way they would in an AvStream driver.

Those that portcls reports, it will handle; those that you report, you have to provide the code for. Your handlers will differ from their AvStream counterparts, however, because PortCls has simplified the interface and presents your callbacks with less of the overall context you get in AvStream. This is all sketched out in PCMETHOD_ITEM, PCEVENT_ITEM, and PCPROPERTY_ITEM.
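To make that concrete, here is a minimal sketch of a property table and automation table, assuming a hypothetical private property set; only the structure and macro names (PCPROPERTY_ITEM, PCPROPERTY_REQUEST, DEFINE_PCAUTOMATION_TABLE_PROP) come from portcls.h, everything else is made up for illustration:

#include <portcls.h>

// Hypothetical private property set GUID, defined here only for the example.
static const GUID KSPROPSETID_Private =
    { 0x12345678, 0x1234, 0x1234,
      { 0x12, 0x34, 0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc } };

// PortCls hands the miniport a PCPROPERTY_REQUEST instead of the raw KS
// context an AvStream driver would see.
NTSTATUS PropertyHandler_Private(IN PPCPROPERTY_REQUEST PropertyRequest)
{
    if (PropertyRequest->Verb & KSPROPERTY_TYPE_GET)
    {
        // ... fill PropertyRequest->Value and set PropertyRequest->ValueSize ...
        return STATUS_SUCCESS;
    }
    return STATUS_INVALID_DEVICE_REQUEST;
}

static PCPROPERTY_ITEM PropertiesPrivate[] =
{
    {
        &KSPROPSETID_Private,                  // property set GUID
        0,                                     // property ID within the set
        PCPROPERTY_ITEM_FLAG_GET |
            PCPROPERTY_ITEM_FLAG_BASICSUPPORT, // verbs the handler supports
        PropertyHandler_Private                // miniport handler
    }
};

// The automation table bundles the properties (plus methods and events, if
// any) so PortCls can report them to KS on behalf of the filter or pin.
DEFINE_PCAUTOMATION_TABLE_PROP(AutomationPrivate, PropertiesPrivate);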

No, you won’t be able to override any handlers portcls provides. There’d be little reason to. For some (position property on wave ports, for instance), Portcls will require you to provide specialized miniport methods to complete its overall support of a property.

MSVAD samples should illustrate how to do this sort of thing (except for methods- I’ve never seen an audio driver use one, and some of the documentation says this is unsupported in WDM audio- that may mean they don’t work at all, or simply that there’s no way other than the direct call through IKsControl to do this- probably the latter).

The user mode code is the same (some of the HCT tests common to both, such as the position accuracy test, work this way). The driver implementations differ, but this is all documented in the WDK.
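For what it's worth, the user-mode side might look something like the sketch below whether the filter underneath is AvStream or PortCls; the property set GUID and ID are placeholders for whatever the driver actually exposes, and GUID/library setup details are omitted:

#include <windows.h>
#include <dshow.h>
#include <ks.h>
#include <ksproxy.h>   // user-mode IKsControl declaration

// Hypothetical private property set exposed by the driver.
static const GUID KSPROPSETID_Private =
    { 0x12345678, 0x1234, 0x1234,
      { 0x12, 0x34, 0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc } };

// Ask the KS proxy filter for IKsControl and issue a GET on a property.
HRESULT QueryPrivateProperty(IBaseFilter *pFilter, void *pData, ULONG cbData)
{
    IKsControl *pKsControl = NULL;
    HRESULT hr = pFilter->QueryInterface(IID_IKsControl, (void **)&pKsControl);
    if (FAILED(hr))
        return hr;

    KSPROPERTY prop = {0};
    prop.Set   = KSPROPSETID_Private;   // which property set
    prop.Id    = 0;                     // which property within the set
    prop.Flags = KSPROPERTY_TYPE_GET;   // read the value

    ULONG bytesReturned = 0;
    hr = pKsControl->KsProperty(&prop, sizeof(prop),
                                pData, cbData, &bytesReturned);

    pKsControl->Release();
    return hr;
}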

Bob, thank you very much for your reply. Your answers are very helpful. Actually, the reason I am looking into PortCls is that I developed an AVStream virtual audio driver, but I could not find a way to register the driver under the Audio Capture Sources category in GraphEdit; the driver only shows up under the WDM capture sources. I searched online and someone posted that only a PortCls driver can be registered under Audio Capture Sources. Is that true? Thank you very much.

Yong

I don’t see why that should be true. USB audio and (on those OSes where it exists) 1394 audio are both AvStream drivers [although there may also have been a Streams version of USB audio- it’s been through so many iterations I can’t be certain].

But I don’t know exactly how you do it- graphedit’s well above where I typically worked. I’d suggest wdmaudiodev for that sort of question, since it doesn’t appear you’ve had success elsewhere.

My best guess is that there is a property set that isn’t being reported or shows insufficient support. But I haven’t looked at that stuff in over 2 years now.

Thanks again for your help. Here is another silly question. In AVStream, a filter or pin has a dispatch table containing the functions to be called; for an audio or video capture driver, the video frames or audio data are processed in the ::Process(…) routine. In PortCls, as I read the MSVAD "simple" sample, I did not find any routine that processes capture data. Also, in AVStream, allocator framing is used to allocate the buffers for frames, and KSSTREAM_HEADER is the structure that carries the data, but where can I find the corresponding parts in the MSVAD sample or another PortCls audio driver sample? Thank you very much.
Yong

You can’t control the allocator framing in portcls. Nor can you act as an IRP source (if that’s what you’re trying to do- it sounds that way, but I’m not 100% certain).

PortCls will receive streaming IRPs as a sink, even when capturing. I assume you’re talking about PCM wave data- in this case, the processing model varies by the port/miniport model chosen.

In WavePci, the port driver digests the IRP stream into individual entries of a scatter/gather list (called "mappings") and feeds these on demand to the miniport, releasing them as the miniport reports them completed. Mappings include a virtual address, so they can be used in a virtual driver [they are, of course, meant primarily to support scatter/gather hardware using physical addresses].

In WaveCyclic, the port will periodically tell the miniport to copy data between its cyclic common buffer and the buffer of a streaming IRP.
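To connect that back to the "where is ::Process()" question: in a virtual WaveCyclic driver done MSVAD-style, the miniport stream exposes its own IDmaChannel, and the copy routines the port calls are where data actually gets produced or consumed. A rough sketch, with a hypothetical class name and hypothetical helpers (the class declaration and the rest of IDmaChannel are omitted):

// The port calls CopyFrom to move "captured" data out of the simulated DMA
// buffer into the streaming IRP's buffer, so for a virtual capture driver
// this is effectively the processing routine.
STDMETHODIMP_(void)
CMiniportWaveCyclicStreamVirtual::CopyFrom(
    IN PVOID Destination,   // buffer belonging to the streaming IRP
    IN PVOID Source,        // position within the simulated DMA buffer
    IN ULONG ByteCount)
{
    UNREFERENCED_PARAMETER(Source);

    // Whatever is written here is what the client records: a tone, silence,
    // data read from a file, and so on.
    SynthesizeCaptureData(Destination, ByteCount);   // hypothetical helper
}

// For render, CopyTo is the mirror image: the port hands over the client's
// data, and a virtual driver can save it, analyze it, or discard it.
STDMETHODIMP_(void)
CMiniportWaveCyclicStreamVirtual::CopyTo(
    IN PVOID Destination,   // position within the simulated DMA buffer
    IN PVOID Source,        // data the client wants rendered
    IN ULONG ByteCount)
{
    UNREFERENCED_PARAMETER(Destination);

    ConsumeRenderData(Source, ByteCount);            // hypothetical helper
}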

In WaveRT, there is no communication. A protected User mode service periodically checks the pointer positions in the common buffer, which has been mapped into its memory space, and reads or writes the buffer directly [this is only available on Vista].
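To give a feel for that model: the WaveRT miniport's streaming work is mostly a one-time buffer allocation, after which PortCls maps the buffer for the audio engine and the driver is out of the data path. A hedged sketch, with a hypothetical class name and assuming m_PortStream is the IPortWaveRTStream pointer the stream object received at creation:

STDMETHODIMP_(NTSTATUS)
CMiniportWaveRTStreamVirtual::AllocateAudioBuffer(
    IN  ULONG                RequestedSize,
    OUT PMDL                *AudioBufferMdl,
    OUT ULONG               *ActualSize,
    OUT ULONG               *OffsetFromFirstPage,
    OUT MEMORY_CACHING_TYPE *CacheType)
{
    // Let the port allocate pages that it can later map into the audio
    // engine's address space.
    PHYSICAL_ADDRESS highAddress;
    highAddress.QuadPart = -1;   // no upper bound on the physical address

    PMDL mdl = m_PortStream->AllocatePagesForMdl(highAddress, RequestedSize);
    if (mdl == NULL)
    {
        return STATUS_INSUFFICIENT_RESOURCES;
    }

    *AudioBufferMdl      = mdl;
    *ActualSize          = MmGetMdlByteCount(mdl);
    *OffsetFromFirstPage = 0;
    *CacheType           = MmCached;
    return STATUS_SUCCESS;
}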

For all three, there is some form of a miniport call to get buffer position that will need a reasonable implementation for a virtual audio driver to work.
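For a virtual device there is no hardware position register, so a common trick is to fake the position from elapsed time. A rough WaveCyclic sketch, where m_StartTime, m_BytesPerSecond, and m_BufferSize are assumed members of the same hypothetical stream object as above:

STDMETHODIMP_(NTSTATUS)
CMiniportWaveCyclicStreamVirtual::GetPosition(OUT PULONG Position)
{
    LARGE_INTEGER frequency;
    LARGE_INTEGER now = KeQueryPerformanceCounter(&frequency);

    // Ticks elapsed since the stream entered KSSTATE_RUN.
    ULONGLONG elapsedTicks = (ULONGLONG)(now.QuadPart - m_StartTime.QuadPart);

    // Convert elapsed time to a byte offset and wrap it into the cyclic buffer.
    ULONGLONG bytes = (elapsedTicks * m_BytesPerSecond) / (ULONGLONG)frequency.QuadPart;
    *Position = (ULONG)(bytes % m_BufferSize);

    return STATUS_SUCCESS;
}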

> In WaveRT, there is no communication. A protected User mode service
> periodically checks the pointer positions in the common buffer, which has been
> mapped into its memory space, and reads or writes the buffer directly [this is
> only available on Vista].

So, this is the new thing in Vista’s audio?

I have heard that, in Vista, there is no more sysaudio.sys, wdmaud.sys,
kmixer.sys and swmidi.sys.

DirectSound’s DLL and wdmaud.drv are now proxies to some “Sound Server” server
process, not read/write/IOCTL senders as they were in pre-Vista. They use some
kind of IPC to this process, the process’s framework replaces sysaudio.sys and
wdmaud.sys, and kmixer+swmidi are now DLLs in this server process.

Server process talks to PortCls in the kernel via IOCTLs, so, the only kmode
part of the audio stack is PortCls + the miniport.

Is the above correct for Vista?


Maxim Shatskih, Windows DDK MVP
StorageCraft Corporation
xxxxx@storagecraft.com
http://www.storagecraft.com

>> So, this is the new thing in Vista’s audio?

One of them, yes. It’s a new wave miniport type [the others are still supported].

>Is the above correct for Vista?

Almost.

There is still ExtBusAudio [a combination of USBAudio and AvcAudio], and third-party kernel sound driver technologies that are KS-compliant should still work. But all the mixing, legacy API support, virtual device graph building, MIDI emulation, et al., that used to occur in the kernel (in the software-only drivers you listed) now occurs in user mode. You won’t find any of those drivers on a Vista machine.

In addition, the user-mode service supplies per-app mixing and volume control, IIRC. So it extends the earlier capabilities as well as replacing them.

FWIW, the communication to sound drivers has always been IOCTL-based. Even the stream read and write IRPs are IOCTLs (defined as part of the KS spec).

The basic idea is that kernel drivers are used only to drive real hardware as basic play and record devices- I’m not certain what happens with hardware mixing, etc. That wasn’t clear to me at the time I left and joined the KMDF team. For that level of detail, I’d suggest asking on the wdmaudiodev list.

As always- I should add that I haven’t been near that stuff nor tried to keep up with it for about two years.

Maxim S. Shatskih wrote:

> In WaveRT, there is no communication. A protected User mode service
> periodically checks the pointer positions in the common buffer, which has been
> mapped into its memory space, and reads or writes the buffer directly [this is
> only available on Vista].

> So, this is the new thing in Vista’s audio?

Virtually everything is new in Vista’s audio stack. It has almost all
been rewritten in user-mode.

There’s been a serious change in philosophy, too. Previously,
flexibility was the keyword. An OEM or driver vendor had pretty much
complete freedom to implement audio stuff in the way that made the most
sense. In Vista, that’s gone. The application is in control. If the
application didn’t ask for it, it shouldn’t exist. System-wide filters
are taboo. This includes things like acoustic echo cancellation and
noise suppression, which had formerly been implemented as system-wide
filter drivers. And if your product’s philosophy conflicts with the
Microsoft philosophy, you might as well hang it up and go into real
estate. Microsoft does not want any “value add” in the audio stack.
Just pump the bits.

> I have heard that, in Vista, there is no more sysaudio.sys, wdmaud.sys,
> kmixer.sys and swmidi.sys.
>
> DirectSound’s DLL and wdmaud.drv are now proxies to some “Sound Server” server
> process, not read/write/IOCTL senders as they were in pre-Vista. They use some
> kind of IPC to this process, the process’s framework replaces sysaudio.sys and
> wdmaud.sys, and kmixer+swmidi are now DLLs in this server process.
>
> Server process talks to PortCls in the kernel via IOCTLs, so, the only kmode
> part of the audio stack is PortCls + the miniport.
>
> Is the above correct for Vista?

That is fundamentally correct, modulo the details. For WaveRT drivers,
there is absolutely no kernel involvement in streaming. The user-mode
audio server process updates circular buffer pointers that are mapped
directly to the hardware.

However, existing drivers continue to work, for hardware that doesn’t
fit the WaveRT model.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

>FWIW, the communication to sound drivers has always been IOCTL-based.

From what I’ve heard, the WDMAUD.DRV module (which services all apps that use
the old-style waveXxx APIs) is a direct descendant of the NT4 SoundBlaster DLL,
MMDRV.DLL.

Both use some “set format” IOCTL, and then pump the PCM data to the kernel
using the usual WriteFile. So, this is not KS yet.

The kernel “part” of this is the hardware driver in NT4 and WDMAUD.SYS in
post-NT4. WDMAUD.SYS is a client of SYSAUDIO.SYS.

The user-mode DirectSound feature is also a direct client of SYSAUDIO.SYS, with,
I think, KS IOCTLs crossing the user/kernel boundary.

SYSAUDIO.SYS builds the virtual graph (with KMIXER and SWMIDI if necessary) and
pumps the sound to this graph.

What components of all of this are moved to user mode in Vista? Was this so in
Server 2003 where there is a Windows Audio service which is off by default?


Maxim Shatskih, Windows DDK MVP
StorageCraft Corporation
xxxxx@storagecraft.com
http://www.storagecraft.com

> That is fundamentally correct, modulo the details. For WaveRT drivers,
> there is absolutely no kernel involvement in streaming.

At least PortCls is involved to create the sound chip device, correct?


Maxim Shatskih, Windows DDK MVP
StorageCraft Corporation
xxxxx@storagecraft.com
http://www.storagecraft.com

My point was (and it is still true) that this thread was about audio drivers at the end of the chain, and the interface to those (in *WDM* audio) has always been KS IOCTLs. I recall the other interfaces you mention also all being (mostly private and undisclosed) IOCTL-based, but to verify, I’d have to look up old source code for no other reason than to pursue an argument that really doesn’t matter to me. What’s there is close enough to work with, and if someone else wants to elaborate on it, so be it.

Every kernel-mode audio component other than the audio driver is gone in Vista.

Windows 2003 (and XP 64-bit, which is the same code base as 2003 SP1) used the same basic kernel audio architecture as Windows XP. Some server SKUs (IIRC) have no audio support (it simply does not exist)- these were at the high end (e.g., datacenter, blade). In others, the audio service is off by default (in XP 64-bit, it is on by default).

>>At least PortCls is involved to create the sound chip device, correct?

Yes. And there is still IOCTL traffic for basic management purposes- start / stop, play / record, and the like. But all of the normal streaming traffic is gone in WaveRT, making it by far the easiest of the wave miniports to implement in a driver.

(Off-topic)

I’ll try to tone it down- little time for research prior to replies recently due to some impending deadlines, making my tone a bit testy at times.

Stale, dry tech humor from my IBM days (over 2 decades ago):

“Expert (noun): A compound word, derived from a combination of ‘ex’ (has been) and ‘spurt’ (a drip under pressure)”.

Never really wanted to be one after I heard that explanation…

Maxim S. Shatskih wrote:

> That is fundamentally correct, modulo the details. For WaveRT drivers,
> there is absolutely no kernel involvement in streaming.
>

> At least PortCls is involved to create the sound chip device, correct?

The driver handles configuration, setup, and power management, and then
gets completely out of the way. Once streaming starts, it happens
without the knowledge or involvement of the kernel driver.

There are a couple of good white papers on this. The WaveRT concept is
described here:
http://www.microsoft.com/whdc/device/audio/wavertport.mspx
The overall Vista audio architecture is described here:
http://www.microsoft.com/whdc/device/audio/sysfx.mspx

The Microsoft audio guys hang out on the [wdmaudiodev] mailing list.
They have been very responsive to questions, although their answers have
not always been the ones I wanted…


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.