
Sysvad: How to capture input data and play user audio application data

Dinesh Member Posts: 31

Hi,

For my project I need to create a virtual mic and speaker. I understand Sysvad does this, but I am very new to audio driver development. While using the sysvad driver, I can't record my voice; instead, the generated sine wave (tonegenerate.cpp) shows up in the recording device.

Can you please guide me on where changes need to be made to capture my mic data instead of the sine wave? I am going through the minwavertstreams.cpp file, but I have no idea where to tap the mic data. I hope to get a solution for this, or suggestions for any document to read. Thanks in advance.


Comments

  • Tim_Roberts Member - All Emails Posts: 14,660

    What application did you use to test this? My guess is that you pulled your input from the sysvad fake microphone and played it to the sysvad fake speaker. In that case, naturally, what you're going to get is a recording of the fake microphone, which produces a sine wave. That should not have been hard to figure out. If you want to record live mic data, then your test app needs to read from a live microphone and play it to the sysvad fake speaker.

    Tim Roberts, [email protected]
    Providenza & Boekelheide, Inc.

  • Dinesh Member Posts: 31

    Thank you for the reply, sir.
    I tested sysvad with Audacity. I enabled the "Sysvad mic" in Audacity's mic options, as well as the speakers. I want to take live mic data from the sysvad fake microphone and pass it to a user application, and I want to take the user app's data and post-process it before giving it to the sysvad fake speakers.

    My actual target is:
    If I can find any open-source virtual driver, I will integrate my pre-processing code into the virtual audio driver code on both the mic side and the speaker side. I learned about sysvad, which could be suitable for my task. But I cannot figure out which code block I need to change to accomplish this. Can you give me your guidance? It will be very useful for my project. If you want me to read any document about it, please suggest one. Thanks in advance.

  • Dinesh Member Posts: 31

    Can you tell me which buffer I have to tap to feed live mic data to the user application?

  • Tim_Roberts Member - All Emails Posts: 14,660

    I want to take live mic data from sysvad fake microphone...

    But don't you see how ridiculous that statement is? SYSVAD has NOTHING to do with your live microphones. It is not involved with your real hardware in any way, neither on the mic side nor on the speaker side. It is a completely separate path. The topology is complicated, and if you don't understand it, you'll never be able to implement it.

    Let me describe a typical scenario that people commonly want. Let's say you think you have the World's Greatest echo cancellation process. You want to do that processing in a user-mode app, even though there are better ways to do it. Let's say Skype is your target.

    In that case, you would have your processing application attach to the real microphone and the real speakers, just like a normal audio application. Then, you would have Skype attach to your fake microphone and your fake speakers. You would then modify SYSVAD so that the speaker accepts audio data and stores it in a circular buffer. You would modify SYSVAD so that the microphone feeds data from another circular buffer. You would modify SYSVAD so that the processing application can supply data to the SYSVAD mic buffer, and read data from the SYSVAD speaker buffer.

    So, once things get rolling, Skype sends data to the SYSVAD fake speaker. The processing app reads that data using a back-door ioctl. The processing app also reads the live microphone data from the system's real microphone. It processes the mic data against Skype's speaker data using your World's Greatest process. The processed speaker data is then written to the real speakers for the user to hear. The processed microphone data is written back to SYSVAD's fake microphone using a back-door ioctl. Skype then reads the microphone data from the SYSVAD buffer.

    I ought to be charging you for this, because I just did your system design for you, and it had apparently not occurred to you yet. Notice that what I described is not necessarily easy. SYSVAD is big and complicated, and it can be difficult to figure out how much you can strip out. I've done a couple of projects like this, and after starting with SYSVAD, I threw it out and went back to the MSVAD sample from Windows 7. It has everything one needs and is much easier to understand.

    Tim Roberts, [email protected]
    Providenza & Boekelheide, Inc.
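
    The two staging buffers described above can be sketched in ordinary user-mode C++. This is only an illustration of the data flow between the fake speaker, the processing app, and the fake mic; the class name, capacity, and byte-at-a-time loops are invented for the example and are not sysvad code.

    ```cpp
    #include <cstddef>
    #include <vector>

    // A minimal single-producer/single-consumer ring buffer. In the design
    // above, one such buffer is filled by the fake speaker and drained by the
    // processing app via a back-door ioctl; a second is filled by the
    // processing app and drained by the fake microphone.
    class RingBuffer {
    public:
        explicit RingBuffer(size_t capacity) : buf_(capacity) {}

        // Copy up to 'len' bytes in; returns the number actually stored.
        size_t Write(const unsigned char* src, size_t len) {
            size_t n = 0;
            while (n < len && count_ < buf_.size()) {
                buf_[(head_ + count_) % buf_.size()] = src[n++];
                ++count_;
            }
            return n;
        }

        // Copy up to 'len' bytes out; returns the number actually read.
        size_t Read(unsigned char* dst, size_t len) {
            size_t n = 0;
            while (n < len && count_ > 0) {
                dst[n++] = buf_[head_];
                head_ = (head_ + 1) % buf_.size();
                --count_;
            }
            return n;
        }

        size_t Count() const { return count_; }

    private:
        std::vector<unsigned char> buf_;
        size_t head_ = 0;   // index of the oldest byte
        size_t count_ = 0;  // bytes currently stored
    };
    ```

    In the real driver both buffers would live in kernel memory, and the return values matter: when the producer and consumer are not synchronized, a short write or read tells you the buffer is full or empty.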

  • Dinesh Member Posts: 31

    Thank you for your explanation. From some forum discussions, the way I previously understood it was:
    1. The GenerateSine() function's buffer (tonegenerate.cpp) is sent to the user application, and
    2. whatever we speak into the live mic is stored (savedata.cpp) in local directory files.

    So, if I can replace the buffer of the GenerateSine() function with the microphone audio buffer (the one used for storing audio to the directory in savedata.cpp), we would be able to pass mic data to the user application. That is the process I called feeding live mic data to the fake mic.

    I assumed that while copying the audio buffer data into the GenerateSine() buffer, I could pre-process the data and store it in the GenerateSine() buffer. Then I would open the fake mic in the Skype app.
    That is what I thought to do with sysvad.

    In that case, you would have your processing application attach to the real microphone and the real speakers, just like a normal audio application. Then, you would have Skype attach to your fake microphone and your fake speakers. You would then modify SYSVAD so that the speaker accepts audio data and stores it in a circular buffer. You would modify SYSVAD so that the microphone feeds data from another circular buffer. You would modify SYSVAD so that the processing application can supply data to the SYSVAD mic buffer, and read data from the SYSVAD speaker buffer.

    I wanted to implement the same thing you explained. Maybe I conveyed my doubt in a wrong way. I am at the learning stage of drivers. Is the buffer "AudioBufferMdl" in CMiniportWaveRTStream::AllocateBuffer what is called the cyclic buffer? Does that buffer need to be replaced with the GenerateSine() function's buffer so that we can pass the processed data to Skype? Sorry if I have understood it in a wrong way.

  • Tim_Roberts Member - All Emails Posts: 14,660
    1. The GenerateSine() function's buffer (tonegenerate.cpp) is sent to the user application, and

    The generated sine wave is seen by the fake microphone consumer, like Audacity. That app does not know that the stream is not coming from a real mic.

    2. whatever we speak into the live mic is stored (savedata.cpp) in local directory files.

    This is absolutely not true. The sysvad fake speaker records whatever is being sent to it from an audio application. There is no connection at all, at the driver level, between the microphone and the speaker part of an audio driver, and there is certainly no connection between the microphone from your real hardware and the speaker part of sysvad. They are all totally independent streams.

    Now, if you happen to have an audio application, like Audacity, that is reading from a live microphone and copying that data stream out to the sysvad fake speaker, then the recording will be the live mic data. That's thanks to Audacity, not to the driver. If you had your MP3 player playing into the sysvad speakers, then sysvad would record the music.

    So, I called this process as feeding live mic data to fake mic.

    But you are glossing over the details, which are considerable. You are also using the phrase "user application" rather loosely. There are two very different applications involved here. You have one innocent application using the standard audio APIs, which has been told to use the sysvad microphone and speakers. It doesn't know there is anything funny going on. Then, you have your custom service application, which is using the live microphone and speakers, and as part of its processing is using custom APIs that you have to add to sysvad, in order to extract the speaker data from the innocent application and push filtered microphone data back in.

    Is the buffer "AudioBufferMdl" in CMiniportWaveRTStream::AllocateBuffer what is called the cyclic buffer?

    Yes. In a typical WaveRT driver, that buffer is part of the audio hardware. The AudioEngine reads and writes directly from/to that buffer, with no driver involvement. Sysvad has to be involved in this case, because of course there is no hardware. WaveRT was designed for hardware with circular buffers, which is partly why I think it is a bad choice for a virtual device.

    Does that buffer need to be replaced with the GenerateSine() function's buffer so that we can pass the processed data to Skype?

    Well, you can't "replace" that buffer. That buffer is the place where the Audio Engine reads the fake microphone data and writes the fake speaker data. AudioEngine reads and writes that buffer on its own. Sysvad just updates pointers to tell the AudioEngine how much data is in the buffer (or how much room). In the sample, CMiniportWaveRTStream::WriteBytes is the spot where the GenerateSine application fills in more fake microphone data to send to Audio Engine. You need to replace that with something that copies your processed data to the buffer instead. That means, of course, that you have to have a way to get your processed data into sysvad so the driver can copy it to that buffer.

    Tim Roberts, [email protected]
    Providenza & Boekelheide, Inc.

  • Tim_Roberts Member - All Emails Posts: 14,660

    You can (and should) draw out the design on a whiteboard. There are 7 components in this design. Skype, your processing app, and the Audio Engine live in user mode. The real speaker driver, the real mic driver, the sysvad fake speaker, and the sysvad fake mic live in kernel mode.

    The data path is lengthy. Data comes in from the real microphone, goes through the Audio Engine, and into your processing app. The app does its processing and writes, via a back door, into the sysvad microphone buffer. Audio Engine pulls sysvad microphone data and sends it to Skype. Skype writes its data through Audio Engine into the sysvad fake speaker. Your processing app reads from the fake speaker using a back door. It does some processing on that data, and writes it through the Audio Engine to the real speakers.

    Tim Roberts, [email protected]
    Providenza & Boekelheide, Inc.

  • Dinesh Member Posts: 31
    Thank you sir, I am really thankful for your detailed explanation of everything, which is particularly useful in my case. I can shape my work now.
  • Dinesh Member Posts: 31

    I also want to try MSVAD. Can MSVAD be compiled for a Windows 10 x64 machine, or can it be used only on a Win7 x32 machine?

    Please share an MSVAD source (URL link) if you have one, sir. I am not able to build the MSVAD source I got on my system. My machine is Windows 7 x64.

  • Dinesh Member Posts: 31
    1. Sysvad - Is test-signing mode compulsory to deploy the driver on a target computer?
    2. If not, how can I test the driver without keeping test signing on?
    3. I tried turning driver signature enforcement off at system boot time.

      Can you give me a suggestion on this, sir?

  • Dinesh Member Posts: 31
    edited May 2020

    Please ignore the MSVAD-related doubt. I was able to come to a conclusion on that :)

    Now, with your suggestions, I am able to understand, and I was able to clean up the code to get only a single mic and speaker. Previously it was showing 5 to 6 mic and speaker pairs.
    1. I want to remove the dependency on "bcdedit /set TESTSIGNING ON" while installing, so that I can install on another PC without enabling test signing.

    There is another alternative, changing some things while booting, but I want to avoid those dependencies. Please share any alternatives with me.

  • Tim_Roberts Member - All Emails Posts: 14,660

    MSVAD was part of the WDK until version 8.1. You should be able to find it in Microsoft's archive:
    https://github.com/microsoftarchive/msdn-code-gallery-microsoft/tree/master/Official Windows Driver Kit Sample/Windows Driver Kit (WDK) 8.1 Samples

    For systems prior to Windows 10, you have to sign the driver with a Class 3 Code-Signing Certificate, from a certificate authority that has an approved "cross certificate". For Windows 10, the driver has to be signed by Microsoft, either through WHQL or by submitting through the attestation signing service. For now, just use test mode.

    Tim Roberts, [email protected]
    Providenza & Boekelheide, Inc.

  • Dinesh Member Posts: 31
    Sir, I am stuck on two doubts.

    As per my understanding, an IOCTL handler can read or write data from/to the driver, so I can fill the sysvad fake mic buffer from my processing application through an IOCTL.

    So,

    1. Will the IOCTL handler directly know which sysvad fake mic buffer it has to write to or read from, or do we need to specify the buffer somehow?

    I am thinking we need to integrate some blocks related to IOCTL calls (specified by macros); only then can it call into the sysvad driver.
    Which blocks do I need to integrate? Are any examples available?

    2. Without an IOCTL, shall I directly use the AudioBufferMdl buffer to fill in the processed data?
    While going through the WaveRT port driver, I see it handles the write/play buffer pointers. Will it automatically take care of things even if we fill in our data at the write pointer?

    3. Does the Audio Engine read that buffer every 10 msec, or how often? Then I can fill that buffer at that interval.

    Can you please help me go ahead by explaining these doubts.

    Which one is better.
  • Tim_Roberts Member - All Emails Posts: 14,660
    1. Your application will need to use CreateFile to open a handle to your driver. That means you need to get a file name. One of the simpler ways is to have your driver use IoRegisterDeviceInterface to register a custom interface GUID. Then you can use CM_Get_Device_Interface_List to get the file name associated with that interface. Audio devices already register several interfaces, so you might be able to piggy-back on one of those, but it doesn't cost very much to register your own.

    I don't know what you mean by "integrate some blocks". Do you mean "copy existing code"? Ioctl handling is very, very common, so there are lots of examples. You'll need to find WDM examples, because you can't let KMDF do your dispatching. Port Class is already doing that.

    2. Without an IOCTL, shall I directly use the AudioBufferMdl buffer to fill in the processed data?

    Without an IOCTL, how do you think you'll be doing your processing?

    Audio Engine accesses the data through the buffer that gets returned as AudioBufferMdl. ReadBytes and WriteBytes get called during the driver's simulated timer tick and when Audio Engine checks the pointer positions. Those calls update the pointers into that buffer, as if there had been real hardware.

    3. Yes, Audio Engine reads/writes the buffer 10 ms at a time. Your process will not be synchronized with Audio Engine, so you may have to think about how to handle it when a buffer gets full or empty.

    Which one is better.

    Which what is better?

    Tim Roberts, [email protected]
    Providenza & Boekelheide, Inc.
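
    As an aside on how the custom control codes used in this thread are built: the WDK's CTL_CODE macro is plain bit-packing, so the value of a code like the IOCTL_CSMT_READ_METHOD_BUFFERED defined later in the thread can be computed by hand. A portable sketch, restating the macro and the relevant header constants so it compiles outside the WDK (names suffixed with `_` are local restatements, not the real headers):

    ```cpp
    #include <cstdint>

    // Restatement of the WDK's CTL_CODE packing:
    //   DeviceType << 16 | Access << 14 | Function << 2 | Method
    constexpr uint32_t CtlCode(uint32_t deviceType, uint32_t function,
                               uint32_t method, uint32_t access) {
        return (deviceType << 16) | (access << 14) | (function << 2) | method;
    }

    // Values these constants have in the Windows headers.
    constexpr uint32_t FILE_DEVICE_UNKNOWN_ = 0x22;
    constexpr uint32_t METHOD_BUFFERED_     = 0;
    constexpr uint32_t FILE_ANY_ACCESS_     = 0;

    // The code from this thread:
    // CTL_CODE(FILE_DEVICE_UNKNOWN, 0x900, METHOD_BUFFERED, FILE_ANY_ACCESS)
    constexpr uint32_t IOCTL_CSMT_READ_METHOD_BUFFERED_ =
        CtlCode(FILE_DEVICE_UNKNOWN_, 0x900, METHOD_BUFFERED_, FILE_ANY_ACCESS_);
    // = 0x222400
    ```

    Custom function numbers should be 0x800 or above; values below that are reserved by Microsoft, which is why 0x900 is a reasonable choice here.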

  • Dinesh Member Posts: 31
    edited May 2020
    Thank you sir
  • Dinesh Member Posts: 31
    Thank you for the explanation sir. It has given me some knowledge to proceed further.
  • Dinesh Member Posts: 31
    Hi sir,

    I tried to implement the ioctl interface. But whenever I add the IOCTL interface, the fake mic and fake speaker are not visible in the device lists (Recording & Playback).


    My IOCTL implementation is:

    #define MTCS_IOCTL_SUPPORT 1

    #if MTCS_IOCTL_SUPPORT
    _Dispatch_type_(IRP_MJ_CREATE)
    DRIVER_DISPATCH CSMTIOCtlCreate;         // function body implemented elsewhere
    _Dispatch_type_(IRP_MJ_CLOSE)
    DRIVER_DISPATCH CSMTIOCtlClose;          // function body implemented elsewhere
    _Dispatch_type_(IRP_MJ_DEVICE_CONTROL)
    DRIVER_DISPATCH CSMTIOCtlDeviceControl;  // function body implemented elsewhere

    UNICODE_STRING referenceIOCtlString;
    UNICODE_STRING linkIOCtlString;
    PDEVICE_OBJECT m_pPhysicalIOCtldeviceObject;

    #define CSMT_IOCTL_NT_DEVICE_NAME L"\\Device\\Mtctldevice"
    // IOCTL GUID
    DEFINE_GUIDSTRUCT("05513b4d-6462-45e1-8cbd-0a43ac176b91", MTCS_IOCTL_AUDIO);
    #define MTCS_IOCTL_AUDIO DEFINE_GUIDNAMED(MTCS_IOCTL_AUDIO)

    #define IOCTL_CSMT_READ_METHOD_BUFFERED CTL_CODE(FILE_DEVICE_UNKNOWN, 0x900, METHOD_BUFFERED, FILE_ANY_ACCESS)
    #endif




    #pragma code_seg("PAGE")
    NTSTATUS
    CSMTIOCtlDeviceControl
    (
        _In_ DEVICE_OBJECT* _DeviceObject,
        _Inout_ IRP* _Irp
    )
    {
        NTSTATUS ntStatus = STATUS_SUCCESS;
        PIO_STACK_LOCATION irpStack;
        ULONG ControlCode;
        ULONG outBufLength;              // Output buffer length
        PCHAR outBuf;                    // Pointer to the output buffer
        PCHAR data = "This String is from Device Driver !!!";
        size_t datalen = strlen(data) + 1;   // Length of data including NUL

        PAGED_CODE();
        UNREFERENCED_PARAMETER(_DeviceObject);

        ASSERT(_Irp);

        irpStack = IoGetCurrentIrpStackLocation(_Irp);
        ControlCode = irpStack->Parameters.DeviceIoControl.IoControlCode;

        switch (ControlCode) {
        case IOCTL_CSMT_READ_METHOD_BUFFERED:
            outBufLength = irpStack->Parameters.DeviceIoControl.OutputBufferLength;
            if (outBufLength != 0) {
                outBuf = (PCHAR)_Irp->AssociatedIrp.SystemBuffer;

                //
                // Copy no more than the smaller of the two lengths;
                // copying outBufLength bytes from 'data' could read
                // past the end of the string.
                //
                _Irp->IoStatus.Information =
                    (outBufLength < datalen ? outBufLength : datalen);
                RtlCopyBytes(outBuf, data, _Irp->IoStatus.Information);

                //
                // When the Irp is completed, the content of the SystemBuffer
                // is copied to the user's output buffer and the SystemBuffer
                // is freed.
                //
            }
            else {
                _Irp->IoStatus.Information = 0;
            }
            ntStatus = STATUS_SUCCESS;
            break;

        default:
            ntStatus = STATUS_INVALID_PARAMETER;
            _Irp->IoStatus.Information = 0;
            break;
        }

        _Irp->IoStatus.Status = ntStatus;
        IoCompleteRequest(_Irp, IO_NO_INCREMENT);

        return ntStatus;
    }



    IOCTL interface in the AddDevice() function:

    RtlInitUnicodeString(&referenceIOCtlString, CSMT_IOCTL_NT_DEVICE_NAME);

    ioctlStatus = IoRegisterDeviceInterface(
        PhysicalDeviceObject,
        &MTCS_IOCTL_AUDIO,
        NULL,
        &referenceIOCtlString
        );

    if (NT_SUCCESS(ioctlStatus)) {
        DbgPrint(" Registered Device Status %d Name Length %d \n",
                 ioctlStatus, referenceIOCtlString.Length);
        ioctlStatus = IoSetDeviceInterfaceState(&referenceIOCtlString, TRUE);
        if (NT_SUCCESS(ioctlStatus)) {
            DbgPrint(" Ready for Communication Status %d \n", ioctlStatus);
        }
    }

    The code builds properly and the driver package is also created properly. But the fake mic and fake speaker are not visible. Is there any line creating that problem, sir? Or can you suggest something to modify so that the mic & speaker appear in the sound list?
  • Dinesh Member Posts: 31
    Sir, I am a bit confused by the naming conventions. I want to understand them in a practical manner.
    In sysvad:
    1. Adapter card - sound card
    2. Device - virtual mic and virtual speaker
    3. Adapter driver - code which can communicate with the sound card
    4. Device object - virtual mic and virtual speaker's object

    The AddDevice() function creates the functional device objects for the virtual mic and virtual speaker so that IRPs can be given to the devices.
    Here, are the physical device and the device different?
    Is my understanding correct, sir?
  • Tim_Roberts Member - All Emails Posts: 14,660

    Did you change DriverEntry to install your dispatch routines in the DriverObject after PcInitializeAdapterDriver does its thing?

    You need to remember that ALL communication with streaming drivers happens through ioctls. The Audio Engine is sending your driver boatloads of ioctl requests to query your properties, negotiate your format, set the framing, and even stream the data. If you installed your own dispatcher, then you are returning an error for all of the KS ioctls.

    If you get an ioctl that you don't recognize, you need to call PcDispatchIrp so the Port Class driver can do what it would have done if you weren't there.

    Tim Roberts, [email protected]
    Providenza & Boekelheide, Inc.
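
    The pass-through pattern described here can be simulated in ordinary user-mode C++: claim only your own control code, and hand every other request back. Everything in this sketch is a stub invented for illustration; in the driver, the forwarding step is the real PcDispatchIrp call, and the control-code values shown are not real KS codes.

    ```cpp
    #include <cstdint>

    // Stub control codes for the simulation only.
    constexpr uint32_t kMyPrivateIoctl = 0x222400;  // the custom back-door code
    constexpr uint32_t kSomeKsIoctl    = 0x2F0003;  // stand-in for a KS request

    enum class Handled { ByUs, ByPortClass };

    // Decision logic of the dispatch routine: complete our own IRP,
    // forward everything else to Port Class (PcDispatchIrp in the driver).
    Handled DispatchDeviceControl(uint32_t controlCode) {
        if (controlCode == kMyPrivateIoctl) {
            // ...complete the IRP ourselves here...
            return Handled::ByUs;
        }
        // ...in the driver: return PcDispatchIrp(DeviceObject, Irp);
        return Handled::ByPortClass;
    }
    ```

    The point of the pattern is the default path: since Audio Engine talks to the driver entirely through ioctls, any code you do not recognize must reach Port Class untouched, or streaming breaks.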

  • Tim_Roberts Member - All Emails Posts: 14,660

    Yes, the Port Class driver invents its own terminology, some of which conflicts with normal driver usage and the Kernel Streaming model upon which it is based.

    An "adapter" is a sound card: a collection of endpoints. An "adapter" is a device in the WDM and KS sense. Your fake sound card will have one adapter, which means you will have one DEVICE_OBJECT.

    A "subdevice" in Port Class is a high-level concept. You'll get one subdevice for topology, and one subdevice for WaveCyclic or WaveRT, each one represented by a COM object. The WaveCyclic or WaveRT object then has separate streams for capture (microphone) and render (speaker), again represented by COM objects.

    The "subdevice" is a filter in the KS sense (like DirectShow). Each filter has multiple pins, which the Port Class driver maps to streams.

    Tim Roberts, [email protected]
    Providenza & Boekelheide, Inc.

  • Dinesh Member Posts: 31

    Thank you sir. I added PcDispatchIrp to my dispatcher and invoke it whenever the dispatcher receives an IRP other than mine. Now I can see the speaker/mic in the sound control panel list.

    Now, I am trying to open my device from my application (to exchange some data) using the CM_Get_Device_Interface_List and CreateFileW APIs. But CreateFileW failed with error 2.

    I can see my device in the registry:
    Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\DeviceClasses\{05513b4d-6462-45e1-8cbd-0a43ac176b91}\##?#ROOT#MEDIA#0000#{05513b4d-6462-45e1-8cbd-0a43ac176b91}

    From CM_Get_Device_Interface_List, I received the device name \\?\ROOT#MEDIA#0000#{05513b4d-6462-45e1-8cbd-0a43ac176b91}, and then I called:

    status = GetDevicePath((LPGUID)&GUID_MTCS_DEVINTERFACE, completeDeviceName,
                           sizeof(completeDeviceName) / sizeof(completeDeviceName[0]));
    if (status == FALSE)
    {
        return INVALID_HANDLE_VALUE;
    }

    Below is my code change in SYSVAD:

    In DriverEntry -> Assigned my own dispatcher for IRP_MJ_DEVICE_CONTROL

         DriverObject->MajorFunction[IRP_MJ_DEVICE_CONTROL] = MTCS_IOCtlDeviceControl;
    

    in ADD device – Registered my device with 05513b4d-6462-45e1-8cbd-0a43ac176b91

    ioctlStatus = IoRegisterDeviceInterface(
       PhysicalDeviceObject,
      &MTCS_IOCTL_AUDIO,
       NULL,
       &referenceIOCtlString
    );
    
    if (NT_SUCCESS(ioctlStatus)) 
    {
       ioctlStatus = IoSetDeviceInterfaceState(&referenceIOCtlString, TRUE);
    }
    

    Dispatcher handler:

    irpStack = IoGetCurrentIrpStackLocation(_Irp);

    if (irpStack->MajorFunction == IRP_MJ_DEVICE_CONTROL)
    {
        ControlCode = irpStack->Parameters.DeviceIoControl.IoControlCode;
        if (ControlCode == IOCTL_CSMT_READ_METHOD_BUFFERED) {
            _Irp->IoStatus.Information = 0;
            ntStatus = STATUS_SUCCESS;
            _Irp->IoStatus.Status = ntStatus;
            IoCompleteRequest(_Irp, IO_NO_INCREMENT);
            return ntStatus; // returned if this is my IRP
        }
    }
    ntStatus = PcDispatchIrp(_DeviceObject, _Irp);

  • Tim_Roberts Member - All Emails Posts: 14,660

    You mentioned that you're calling CreateFileW. You shouldn't explicitly add the "A" or "W" suffix. That's just a bug waiting to happen. If you are building the app as Unicode, then the compiler will automatically call CM_Get_Device_Interface_ListW and CreateFileW. If you find yourself having to cast between LPSTR and LPWSTR, then you have made a mistake.

    Remember that you have to call RtlFreeUnicodeString on referenceIOCtlString after you're through with it.

    The Port Class driver seems to reject file names that it doesn't understand. I've always had to specify a reference string suffix of L"Wave" when calling IoRegisterDeviceInterface.

    Tim Roberts, [email protected]
    Providenza & Boekelheide, Inc.

  • Dinesh Member Posts: 31
    Sir, I have corrected my code as per your suggestion, but CreateFile still fails with error 2. It looks like the CreateFile failure for the device control object is due to the IRP_MJ_CREATE & IRP_MJ_CLOSE dispatchers.

    When I add my own dispatchers for those two major functions, CreateFile succeeds for my device control object, but all the other KS ioctls fail.

    Please suggest how to identify, in my CREATE dispatcher, that the request is for my control device object, so that I can complete the IRP and return an appropriate NTSTATUS; otherwise I will continue calling PcDispatchIrp to handle the KS ioctls.
  • Tim_Roberts Member - All Emails Posts: 14,660

    You don't have a control device object, assuming you did as described above. You just have a symbolic link into the primary device object. You shouldn't need to interfere with the CreateFile process. If you want to, I suppose you could use a custom reference suffix when you register your device interface, then check for that in your CreateFile handler. If you see it, return success. If not, pass it to PcDispatchIrp. However, I've never had to do that.

    Tim Roberts, [email protected]
    Providenza & Boekelheide, Inc.

  • Dinesh Member Posts: 31
    Yes sir. I created a symbolic link into the primary device object. In my application I am trying to access that device to send/receive data using my IOCTL.

    In my application, I use CM_Get_Device_Interface_List() to get the device name from the GUID, then I call CreateFile with OPEN_EXISTING to open my device. But CreateFile returns INVALID_HANDLE_VALUE (hDev == -1) with error 2. I am not sure what the problem is. I tried debugging but could not find it. Please help me, sir.



    cr = CM_Get_Device_Interface_List(
        InterfaceGuid,
        NULL,
        deviceInterfaceList,
        deviceInterfaceListLength,
        CM_GET_DEVICE_INTERFACE_LIST_PRESENT);

    if (cr != CR_SUCCESS) {
        // error handling elided
    }

    hr = StringCchCopy(DevicePath, BufLen, deviceInterfaceList);

    if (FAILED(hr)) {
        // error handling elided
    }

    hDev = CreateFile(
        DevicePath,
        GENERIC_READ,
        0,
        NULL,
        OPEN_EXISTING,
        0,
        NULL);
  • Tim_Roberts Member - All Emails Posts: 14,660

    Did you print out the file name, to see if it looks like a hardware ID string? Showing snippets like this isn't enough. It is so easy to make character set mistakes, and we can't see that unless we can see the declarations of deviceInterfaceList and DevicePath. Where did you get BufLen? Where did you get deviceInterfaceListLength?

    You probably want GENERIC_READ|GENERIC_WRITE, and you may need to allow sharing, with FILE_SHARE_READ|FILE_SHARE_WRITE, but that won't cause ERROR_FILE_NOT_FOUND.

    Tim Roberts, [email protected]
    Providenza & Boekelheide, Inc.

  • Dinesh Member Posts: 31
    Thank you sir,

    I am now able to communicate with the driver through the IOCTL. I can also get notifications for the fake speaker & mic when any application uses them. My processing application is ready with the real mic data.

    I am stuck at:
    1. From which buffer does the Audio Engine read the data it provides to Skype? (By searching forums, I understand we have to copy our data to the GenerateSine buffer.) Isn't that related to DMA? If not, can I directly copy my data to that buffer?

    2. Can I create a separate cyclic buffer to copy my real data into? If so, can you suggest where I have to create my cyclic buffer so that the Audio Engine can read from it? In which function of sysvad do we get calls from the Audio Engine for data filling?

    Can you please give some explanation about these, sir!
  • Tim_Roberts Member - All Emails Posts: 14,660

    1.: You couldn't figure this out from the code? In CMiniportWaveRTStream::WriteBytes, it calls GenerateSine to put new microphone data into m_pDmaBuffer. If there were real hardware involved, then yes, there would be DMA going on. In this case, you are pretending to be the hardware.

    2.: That IS the cyclic buffer. That's it. You can trace its usage in the module. Because its timing is so tight, you may want your own buffer to communicate with your app, although you might be able to do both ends with one buffer.

    Tim Roberts, [email protected]
    Providenza & Boekelheide, Inc.
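
    The act of filling the cyclic buffer at the write position, wrapping at the end, can be sketched portably. The 10 ms sizing below assumes a 48 kHz, 16-bit stereo format for illustration; the function name is invented, not sysvad's, and the caller is assumed to write no more than one buffer's worth at a time.

    ```cpp
    #include <cstddef>
    #include <cstring>
    #include <vector>

    // 10 ms of 48 kHz, 16-bit stereo audio (assumed format):
    // 48000 frames/s * 2 channels * 2 bytes * 0.010 s = 1920 bytes.
    constexpr size_t kBytesPer10ms = 48000 * 2 * 2 / 100;

    // Copy 'len' bytes into a cyclic buffer starting at 'writePos', wrapping
    // at the end, the way new microphone data lands in m_pDmaBuffer.
    // Assumes len <= dma.size(). Returns the new write position.
    size_t CyclicWrite(std::vector<unsigned char>& dma, size_t writePos,
                       const unsigned char* src, size_t len) {
        size_t firstChunk = dma.size() - writePos;
        if (len <= firstChunk) {
            std::memcpy(dma.data() + writePos, src, len);
        } else {
            std::memcpy(dma.data() + writePos, src, firstChunk);
            std::memcpy(dma.data(), src + firstChunk, len - firstChunk);
        }
        return (writePos + len) % dma.size();
    }
    ```

    In the driver the equivalent copy happens inside WriteBytes with real IoStatus bookkeeping around it; the wraparound arithmetic is the part worth getting right, since a one-byte error there turns into an audible click every buffer cycle.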

  • Dinesh Member Posts: 31
    Thank you for your help sir. I am now able to send my audio to sysvad, and I can take audio from sysvad too.

    I created a circular buffer with the call below. Somewhere in the sources it says that this function will allocate non-contiguous memory.

    CS_Frame = (BYTE*)ExAllocatePoolWithTag(
        NonPagedPoolNx,
        m_FrameSize,
        SYSVAD_POOLTAG1);

    But it is working properly as of now. Is there any problem with using this call to allocate the memory, sir?
  • Tim_Roberts Member - All Emails Posts: 14,660

    If there is no hardware involved, then non-contiguous memory is not a concern. I tend to use "operator new", but that's just a wrapper around ExAllocatePoolWithTag.

    Tim Roberts, [email protected]
    Providenza & Boekelheide, Inc.
