xxxxx@gmail.com wrote:
I’d like to be able to do the following:
- Block selective applications from accessing the microphone.
- If not possible, block all applications from using the microphone.
- If not possible, notify the user when the microphone is being used.
From the information I found, modern Windows versions use a memory-mapped buffer to transfer the microphone audio stream. That made me think I might be able to control which apps can map the relevant buffer and access it, which would allow me to implement option 1. For that to work, I need to be able to identify the relevant handle, which I don’t know how to do.
No, it’s not that easy. The key problem in your scheme is the magical
protected Audio Engine process. In the post-Vista world, audio drivers
communicate only with the Audio Engine. The Audio Engine then
implements the rest of the audio graph, and applications communicate
with the Audio Engine.
Thus, in a WaveRT driver, the hardware’s circular buffers are always
mapped into the Audio Engine process. The apps do not map that buffer.
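To make the architecture concrete: a user-mode app reaches the microphone only through WASAPI, i.e. by asking the Audio Engine for data, never by mapping the WaveRT hardware buffer itself. A minimal capture sketch, with error handling omitted and the device/format choices purely illustrative, looks roughly like this:

```cpp
// Sketch: how an application captures microphone audio in the post-Vista
// world -- via WASAPI and the Audio Engine, never by mapping the WaveRT
// hardware buffer itself. Error handling omitted for brevity.
#include <windows.h>
#include <mmdeviceapi.h>
#include <audioclient.h>

int main()
{
    CoInitializeEx(NULL, COINIT_MULTITHREADED);

    IMMDeviceEnumerator *enumr = NULL;
    CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL, CLSCTX_ALL,
                     __uuidof(IMMDeviceEnumerator), (void **)&enumr);

    IMMDevice *mic = NULL;
    enumr->GetDefaultAudioEndpoint(eCapture, eConsole, &mic);

    IAudioClient *client = NULL;
    mic->Activate(__uuidof(IAudioClient), CLSCTX_ALL, NULL, (void **)&client);

    WAVEFORMATEX *fmt = NULL;
    client->GetMixFormat(&fmt);    // format of the shared-mode engine mix

    // Shared mode: the Audio Engine owns the hardware buffer; the app only
    // ever sees data the engine copies out for it.
    client->Initialize(AUDCLNT_SHAREMODE_SHARED, 0,
                       10000000 /* 1 s in 100-ns units */, 0, fmt, NULL);

    IAudioCaptureClient *capture = NULL;
    client->GetService(__uuidof(IAudioCaptureClient), (void **)&capture);
    client->Start();

    // A real app polls GetNextPacketSize in a loop; one call shown here.
    BYTE *data; UINT32 frames; DWORD flags;
    capture->GetBuffer(&data, &frames, &flags, NULL, NULL);
    // ... consume 'frames' frames of audio from 'data' ...
    capture->ReleaseBuffer(frames);

    client->Stop();
    CoTaskMemFree(fmt);
    capture->Release(); client->Release(); mic->Release(); enumr->Release();
    CoUninitialize();
    return 0;
}
```

Note the consequence for your scheme: the only microphone-related handles in the app are COM interfaces onto the engine's cross-process transport, not a mapping of the driver's circular buffer.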
Another possibility I thought of is hooking the device driver’s writes to that buffer. If I could do that, I could, for example, replace the data with silence, and thereby implement option 2. But unfortunately, I don’t know how to approach that yet.
Another dead end. In a WaveRT environment, the driver is not involved
in the streaming data flow in any way. That’s the huge advantage of
WaveRT. The driver maps the hardware’s circular buffer into the Audio
Engine, but after that, the hardware writes directly into that circular
buffer, and the user-mode Audio Engine reads from that buffer. There is
no kernel involvement in streaming.
Can you please shed some light on what possibilities I have?
What you’re asking is contradictory to the Microsoft audio philosophy,
and that’s never a happy road to travel. There are good reasons for
this, based on the history. Originally, audio apps connected directly
to the kernel driver stack, which included not only the audio driver,
but the system audio processing, the kernel mixer, etc. Then, some
engineer got the bright idea, “hey, I can ‘add value’ to the audio stack
by including my audio processing, which I know how to do better than
everybody else”. Pretty soon, there were dozens of companies trying to
“add value” by inserting their clever filters into the audio stack, and
the professional audio companies started to complain that their
latencies were totally unpredictable from system to system.
That (and DRM) is primarily what drove the audio system redesign in
Vista. In the new design, Microsoft is very strict about adding value.
It is impossible to “add value” generically. You “add value” by
inserting system Audio Processing Objects (APOs), which are user-mode DLLs that
live in the Audio Engine. However, APOs are installed via the INF for
hardware, and are associated with a single piece of hardware. You can’t
add them globally. Plus, an application can always request “exclusive
mode”, which bypasses any APOs.
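That escape hatch is just a flag at stream initialization. As a hedged sketch, given an IAudioClient pointer obtained from IMMDevice::Activate as usual (the 16-bit/44.1 kHz stereo format below is only an example; a real app must pick a format the endpoint's hardware actually supports):

```cpp
// Sketch: requesting exclusive mode, which bypasses the engine mix and any
// APOs. The format here is illustrative; query the endpoint for what it
// really supports before initializing.
WAVEFORMATEX fmt = {};
fmt.wFormatTag      = WAVE_FORMAT_PCM;
fmt.nChannels       = 2;
fmt.nSamplesPerSec  = 44100;
fmt.wBitsPerSample  = 16;
fmt.nBlockAlign     = fmt.nChannels * fmt.wBitsPerSample / 8;
fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;

// In exclusive mode there is no engine mix to fall back on, so
// IsFormatSupported gives a plain yes/no (closest-match must be NULL).
HRESULT hr = client->IsFormatSupported(AUDCLNT_SHAREMODE_EXCLUSIVE, &fmt, NULL);
if (hr == S_OK) {
    client->Initialize(AUDCLNT_SHAREMODE_EXCLUSIVE, 0,
                       10000000 /* 1 s buffer */, 0, &fmt, NULL);
}
```

Any "add value" scheme that lives in an APO therefore evaporates the moment an application takes this path.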
Given all of that, what you ask may not be possible. I would suggest
you ask your question on the [wdmaudiodev] mailing list
(http://wdmaudiodev.com). All the cool audio kids hang out there,
including a couple of very helpful members of the Microsoft audio team.
--
Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.