Hi Everyone,
I am new to Windows kernel driver development and I'm having an issue with a driver for a PCIe accelerator device.
When the driver is installed on a bare-metal server, or in a Windows Hyper-V VM where the device is exposed via SR-IOV, it works properly with no issues. I can run disable/enable cycles and the driver unloads and reloads perfectly.
However, if I install the same driver in a Windows Hyper-V VM and pass the device through to the VM using Discrete Device Assignment (DDA), it will not load properly after a disable/enable cycle in Device Manager (I get a yellow bang).
I have been debugging the issue but couldn't find anything useful so far.
I've observed that the device remains in the D3 state after trying to enable it, and dumping the device node with !devnode in WinDbg shows "Problem = CM_PROB_DEVICE_NOT_THERE". For some reason I haven't been able to figure out yet, the driver is not able to reload in the Hyper-V/DDA scenario.
Does anyone have any ideas or suggestions on where I can look to understand what is happening?
Any suggestion will be much appreciated.
Thanks in advance.
Did you meet all the requirements here: Plan for deploying devices by using Discrete Device Assignment | Microsoft Learn
From an attached kernel debugger you can dump the PCI(e) device tree.
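The commands would look roughly like this (the device-object address is a placeholder you'd take from the !devnode output):

```
kd> !devnode 0 1       (walks the whole PnP device tree, showing status flags and problem codes)
kd> !pcitree           (dumps the PCI bus/device hierarchy as the debugger sees it)
kd> !devstack <device-object-address>   (shows the driver stack on one device object)
```

Comparing the tree before and after the disable/enable cycle should show whether the device node ever comes back.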
You can also check whether the PnP log has anything interesting (which is frequently hard to tell): C:\Windows\INF\setupapi.dev.log
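That log can get large; a quick way to narrow it down is to pull out only the sections that mention your device. A minimal sketch in Python, assuming the standard log path and a made-up hardware ID (adjust both for your device):

```python
# Sketch: extract the sections of setupapi.dev.log that mention a given
# hardware ID. The log path and HWID below are assumptions, not your values.
from pathlib import Path

LOG = Path(r"C:\Windows\INF\setupapi.dev.log")
HWID = "VEN_ABCD&DEV_1234"  # hypothetical hardware ID -- replace with yours

def matching_sections(text: str, needle: str) -> list[str]:
    """Split the log on '>>>  [' section headers and keep the sections
    whose text contains the needle (case-insensitive)."""
    sections, current = [], []
    for line in text.splitlines():
        if line.startswith(">>>  ["):  # each logged operation starts like this
            if current:
                sections.append("\n".join(current))
            current = [line]
        elif current:
            current.append(line)
    if current:
        sections.append("\n".join(current))
    return [s for s in sections if needle.lower() in s.lower()]

if __name__ == "__main__" and LOG.exists():
    for section in matching_sections(LOG.read_text(errors="ignore"), HWID):
        print(section)
        print("-" * 60)
```

The interesting part is usually the last "Device Install" section for the device after the failed enable.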
Hi @Mark_Roddy
Yes, I actually followed that exact same link for configuring the VM and passing the device through with DDA.
Also, I forgot to mention (I think I deleted part of my post by mistake) that the driver does work in the VM with DDA. After installing the driver it works, and after rebooting the VM it works too. It only fails after a disable/enable cycle in Device Manager. Even with the yellow bang on the device in Device Manager, rebooting the VM brings the driver back to working normally. The only scenario where it fails is the disable/enable cycle. It's weird behavior, and that's why the issue has been difficult to debug, since I need to constantly reboot the VM.
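For what it's worth, the same cycle can be scripted from an elevated prompt instead of clicking through Device Manager, which makes the reboot-and-retry loop faster (the instance ID below is a placeholder; the real one comes from the enum step):

```
pnputil /enum-devices /connected          (find the device's instance ID)
pnputil /disable-device "PCI\VEN_XXXX&DEV_XXXX\<instance>"
pnputil /enable-device  "PCI\VEN_XXXX&DEV_XXXX\<instance>"
```

These pnputil options need a reasonably recent Windows 10/11 build.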
I'll look at that dump and the log you suggested and see if I can find something helpful there. Thanks for the suggestion.