I'm writing a WDM PCIe driver. My hardware is an FPGA PCIe board; the PCIe endpoint is implemented with the Xilinx AXI PCIe IP core. In my WDM driver's IRP_MN_START_DEVICE handler, Windows 11 crashes when I call IoGetDmaAdapter. My code follows:
DEVICE_DESCRIPTION dd;
PDMA_ADAPTER pDmaAdapter;
ULONG MaxMapRegNum = 0;

memset(&dd, 0, sizeof(DEVICE_DESCRIPTION));
dd.Version = DEVICE_DESCRIPTION_VERSION1;
dd.Dma32BitAddresses = TRUE;
dd.InterfaceType = PCIBus;
dd.Master = TRUE;
dd.MaximumLength = 32768;
dd.ScatterGather = FALSE;

// here IoGetDmaAdapter causes the system to crash
pDmaAdapter = IoGetDmaAdapter(deviceExtension->PhysicalDeviceObject, &dd, &MaxMapRegNum);
if (pDmaAdapter == NULL)
{
    DbgPrint("IoGetDmaAdapter failed\n");
}
else
{
    DbgPrint("IoGetDmaAdapter succeeded, MaxMapRegNum=%lu\n", MaxMapRegNum);
}
What could be the reason for this issue? Thanks very much!
You need to provide more information, specifically the output of !analyze -v in WinDbg after the test system bugchecks, or from a dump file. Also make sure that the correct symbols are loaded before running !analyze -v.
More than likely, deviceExtension->PhysicalDeviceObject is somehow invalid.
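For reference, the object IoGetDmaAdapter expects is the PDO that the PnP manager hands to your AddDevice routine, not the FDO you create there. A minimal sketch of where that pointer should come from (the DEVICE_EXTENSION layout here is hypothetical and just mirrors the field name used in the question):

#include <wdm.h>

typedef struct _DEVICE_EXTENSION {
    PDEVICE_OBJECT PhysicalDeviceObject;  // the PDO passed to AddDevice
    PDEVICE_OBJECT LowerDeviceObject;     // returned by IoAttachDeviceToDeviceStack
} DEVICE_EXTENSION, *PDEVICE_EXTENSION;

NTSTATUS AddDevice(PDRIVER_OBJECT DriverObject, PDEVICE_OBJECT PhysicalDeviceObject)
{
    PDEVICE_OBJECT fdo;
    NTSTATUS status = IoCreateDevice(DriverObject, sizeof(DEVICE_EXTENSION),
                                     NULL, FILE_DEVICE_UNKNOWN, 0, FALSE, &fdo);
    if (!NT_SUCCESS(status)) {
        return status;
    }

    PDEVICE_EXTENSION ext = (PDEVICE_EXTENSION)fdo->DeviceExtension;
    ext->PhysicalDeviceObject = PhysicalDeviceObject;  // save the PDO, not the FDO
    ext->LowerDeviceObject = IoAttachDeviceToDeviceStack(fdo, PhysicalDeviceObject);
    if (ext->LowerDeviceObject == NULL) {
        IoDeleteDevice(fdo);
        return STATUS_DEVICE_REMOVED;
    }

    fdo->Flags &= ~DO_DEVICE_INITIALIZING;
    return STATUS_SUCCESS;
}

If PhysicalDeviceObject was never saved like this, or the extension is being read from the wrong device object, the pointer you pass to IoGetDmaAdapter will be garbage, which matches a bugcheck at that call.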
I looked back at some old WDM drivers I have, and for some reason the person who wrote them allocated the DEVICE_DESCRIPTION from pool using ExAllocatePoolWithTag. I can find no documented reason for it, but it might be worth trying; a sketch follows. Also, my NumberOfMapRegisters output lives in the deviceExtension rather than in a local variable.
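A minimal sketch of that variant, assuming hypothetical DmaAdapter and NumberOfMapRegisters fields in the device extension (the pool tag is arbitrary; note that recent WDKs deprecate ExAllocatePoolWithTag in favor of ExAllocatePool2, and IoGetDmaAdapter must be called at PASSIVE_LEVEL):

PDEVICE_DESCRIPTION pDd = (PDEVICE_DESCRIPTION)ExAllocatePoolWithTag(
    NonPagedPoolNx, sizeof(DEVICE_DESCRIPTION), 'aMDP');
if (pDd == NULL) {
    return STATUS_INSUFFICIENT_RESOURCES;
}
RtlZeroMemory(pDd, sizeof(DEVICE_DESCRIPTION));
pDd->Version = DEVICE_DESCRIPTION_VERSION1;
pDd->Master = TRUE;
pDd->ScatterGather = FALSE;
pDd->Dma32BitAddresses = TRUE;
pDd->InterfaceType = PCIBus;
pDd->MaximumLength = 32768;

// Store the map-register count in the device extension instead of a local.
deviceExtension->DmaAdapter = IoGetDmaAdapter(
    deviceExtension->PhysicalDeviceObject, pDd,
    &deviceExtension->NumberOfMapRegisters);

ExFreePoolWithTag(pDd, 'aMDP');  // the description is only read during the call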
Is there a reason you are trying to do this with WDM? The only drivers I write are FPGA drivers, and the WDF code I write is drastically smaller and simpler than the WDM code that my predecessor wrote.
And demonstrably similar in terms of performance.
I worked on a DMA driver for an FPGA where speed was of paramount importance. I had the luxury of being able to experimentally write the DMA engine portion in both WDM and WDF, using every trick in my arsenal to maximize performance.
To my surprise, the WDM version was not faster. The driver shipped with 100% WDF code.
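To illustrate how much smaller the WDF side is: all of the PDO juggling and DEVICE_DESCRIPTION setup above collapses into a few lines of KMDF. A minimal sketch, assuming a KMDF driver where WdfDeviceCreate has already produced 'device' (WdfDmaProfilePacket corresponds to a 32-bit, non-scatter-gather bus master, matching the DEVICE_DESCRIPTION in the question):

#include <wdf.h>

// Typically placed in EvtDriverDeviceAdd or EvtDevicePrepareHardware.
WDF_DMA_ENABLER_CONFIG dmaConfig;
WDFDMAENABLER dmaEnabler;
NTSTATUS status;

WDF_DMA_ENABLER_CONFIG_INIT(&dmaConfig, WdfDmaProfilePacket, 32768);
status = WdfDmaEnablerCreate(device, &dmaConfig, WDF_NO_OBJECT_ATTRIBUTES, &dmaEnabler);
if (!NT_SUCCESS(status)) {
    return status;  // no PDO to locate, no DEVICE_DESCRIPTION to fill in
}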