I am converting a working DOS driver that services interrupts every 100us. I have the code functioning as a Windows driver, but it is way too slow.
I’m not currently doing anything with user mode. I am just catching an interrupt, setting up for the next interrupt, and returning TRUE (interrupt was serviced).
Timing (measured with KeQueryPerformanceCounter at 10 MHz / 100 ns granularity) shows I get in and out of the ISR in 9.5us. There is no DPC. The time events are logged to an array and analyzed after 5,000 interrupts. Reading the timer itself takes ~150ns by my measurement, so I don’t expect it to distort the results.
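For reference, the logging is essentially this (a simplified sketch; the array size, extension layout, and names here are illustrative, not my exact code):

    #include <ntddk.h>

    // Simplified sketch of the ISR timestamp logging (names illustrative).
    #define LOG_SIZE 5000

    typedef struct _DEVICE_EXTENSION {
        LARGE_INTEGER TimeLog[LOG_SIZE];  // QPC ticks: 100 ns each at 10 MHz
        ULONG         LogIndex;
        // ... device registers, DMA state, etc.
    } DEVICE_EXTENSION, *PDEVICE_EXTENSION;

    BOOLEAN MyIsr(PKINTERRUPT Interrupt, PVOID Context)
    {
        PDEVICE_EXTENSION ext = (PDEVICE_EXTENSION)Context;
        UNREFERENCED_PARAMETER(Interrupt);

        if (ext->LogIndex < LOG_SIZE) {
            // ~150 ns per read, small next to the 9.5 us ISR body.
            ext->TimeLog[ext->LogIndex++] = KeQueryPerformanceCounter(NULL);
        }

        // ... acknowledge the device and set up the next interrupt ...
        return TRUE;  // interrupt was serviced
    }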
Timing the entries to the ISR registered with IoConnectInterrupt() shows I am getting an interrupt approximately every 1.6ms rather than at the 100us rate I am expecting. This is very consistent, and the delays are not happening in bursts. I’ve tried different CPUs with very different clock rates, and the 1.6ms interval remains the same.
My driver is sending and receiving a small packet of data by DMA; the received DMA causes the interrupt. I based the design on a WDF DMA sample. The data being transferred lives entirely in the device extension and does not use user buffers, so I removed the buffer allocation and queuing that came with the sample. The data is arriving and looks correct.
Is there something I need to do to tickle the operating system to make these interrupts arrive faster?
I’m not clear from that explanation… what TRIGGERS the interrupt? Tell us more about your device… a PCIe device, I would suppose. Tell us some more about it.
And it does these DMA transfers every 100uSecs? That’s a LOT of interrupts… ten thousand per second (fixed my math from my original post). That’s a higher rate than I’ve encountered. I wouldn’t count on the system being able to keep up with that.
Well, the first answer is that Windows isn’t DOS, and you are unlikely to be happy with the timing you get if you actually need consistent microsecond-scale deadlines. In my experience this was a bigger issue back in the XP days, when many devices were being converted. Devices like piezoelectric strain gauges and medical sensors could no longer be controlled directly by the host OS; they had to be controlled by a microcontroller that buffered and fed data to the host.
It might be possible to help more, but to do that we will need to know more about what your device does and what you expect the driver to do with it.
Thanks for the math correction. 100us is a 10 kHz rate as you state (not 10 MHz as I originally said), and yes, that is pretty fast.
I don’t think anything is sharing the interrupt. I tried making it exclusive as a test and nothing complained. In DOS I was able to work out the interrupt on the APIC that the PCI is routed to, and Windows seems to end up with that same number. The conventional (8259) interrupt is shared by various things, but I don’t think that matters in Windows; in DOS I needed to use an extender to get to the APIC.
Understood that Windows isn’t DOS and has things like multiple cores servicing interrupts to deal with.
It doesn’t need to be extremely consistent, but the ISR cannot be blocked for more than 100us (2 unserviced interrupts in a row is a problem).
The host system is an industrial PCI (not PCIe) processor board in a mostly passive motherboard. The latest version supports an i7 LGA1151 processor. The ones we’ve been using have 4 cores, but an i7 goes up to 12 logical processors (I think half of which are physical cores that can service an interrupt).
I’m fairly certain I’m botching something, as the board is only able to interrupt every 1.6ms, which I believe (since I botched my math badly before, I double-checked this time) is a 625 Hz rate. My suspicion (I’m probably just displaying my ignorance here) is that it has something to do with having no DPC: that a DPC completing causes ISR rescheduling as well as rescheduling of user-mode tasks. But since I won’t be communicating each interrupt completion to a user, there isn’t much point to a DPC, or any object for a DPC to complete against.
Well, I can set your mind at ease: It doesn’t have anything to do with having or not having a DPC. I promise you. So rest your mind that’s not the issue.
I suspect something much more basic… like your source not actually generating interrupts at the rate you expect. Can you verify that?
We need to figure out (a) the actual rate at which the source is generating interrupts, and then (b) the cause of the latency.
Thank you Peter. It saves some work to not play around with DPC as a black box.
I can verify the hardware works by running the DOS version, which I can demonstrate runs at full speed. There is a hardware watchdog that detects missed and slow interrupts, and they aren’t even slow running on an Atom CPU. I am not seeing a difference in timing or behavior between an Atom CPU and an i3, so it does not seem like a problem with CPU capability, with the possible exception of the number of cores (4 in both processors I’ve tested, which I’m assuming means 2 physical cores).
The mechanism is command-response: a DMA from the CPU enables a response DMA from the board at the next 100us period. The DMA originating from the CPU is started as part of the ISR, and once in response to IRP_MN_START_DEVICE.
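In outline, the cycle looks like this (a sketch only; the helper name, extension fields, and register bit are stand-ins for the actual PLX channel programming):

    #include <ntddk.h>

    // Sketch of the command-response cycle; names and bits are stand-ins.
    typedef struct _DEVICE_EXTENSION {
        PULONG DmaCommandReg;   // mapped BAR register (placeholder)
    } DEVICE_EXTENSION, *PDEVICE_EXTENSION;

    #define DMA_GO 0x1          // placeholder "start channel" bit

    static VOID StartWriteDma(PDEVICE_EXTENSION ext)
    {
        // Kick the CPU->board DMA; the board replies with its response
        // DMA at the next 100us tick, and that response interrupts us.
        WRITE_REGISTER_ULONG(ext->DmaCommandReg, DMA_GO);
    }

    // Called once while handling IRP_MN_START_DEVICE to prime the cycle,
    // then again from every ISR to keep it running:
    BOOLEAN CmdRespIsr(PKINTERRUPT Interrupt, PVOID Context)
    {
        PDEVICE_EXTENSION ext = (PDEVICE_EXTENSION)Context;
        UNREFERENCED_PARAMETER(Interrupt);

        // ... verify and clear the interrupt at the device (omitted) ...

        StartWriteDma(ext);     // schedule the next 100us response
        return TRUE;
    }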
Why have you created a WDM driver? That’s not a good decision.
So, hmm… I don’t know what to suggest. I feel like I must be missing something about your device, or that *you* must be missing something in how you’ve got the device programmed or connected to interrupts, or (as somebody else suggested) your device is sharing a very popular interrupt, or something.
Have you put a logic analyzer on the interrupt line to see how things look?
In your position there are several things I would try to do (some may simply not be possible) …
First, see if there’s a GPIO pin on the hardware you can access from the driver; I typically have at least one wired into a BAR register for just these kinds of problems. If you can do that, debugging timing issues becomes much easier: when the interrupt is triggered, have the firmware raise the GPIO; when the driver gets the interrupt, it lowers the GPIO. Hook up the logic analyzer and you’ve got your timing info (see the sketch after this list).
Next, realize that the PLX 9054 uses shared, level-triggered interrupts, and so might other cards in the system… maybe even at the same time [ http://www.hollistech.com/Resources/Misc%20articles/pciInterrutps.htm ]. I would check whether anything else in the system might be attempting to share that IRQ.
Finally, make sure that you’re clearing the interrupt flag as soon as you validate the interrupt is yours, and that you’re not disabling interrupts while you’re processing the DMA (there are other, better ways to do flow control than dropping the enable/disable hammer). The sketch below puts these last two items together.
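A rough shape for that ISR (the offsets and bit names here are placeholders; substitute whatever your BAR map actually defines):

    #include <ntddk.h>

    // Placeholders: substitute the offsets/bits from your board's BAR map.
    #define INTCSR_OFFSET    0x68   // interrupt status (placeholder)
    #define INTCLR_OFFSET    0x6C   // interrupt clear (placeholder)
    #define GPIO_OUT_OFFSET  0x70   // GPIO output (placeholder)
    #define MY_INT_ACTIVE    0x1    // "this is our interrupt" bit (placeholder)

    typedef struct _DEVICE_EXTENSION {
        PUCHAR Bar0;                // mapped base of BAR0
    } DEVICE_EXTENSION, *PDEVICE_EXTENSION;

    BOOLEAN InstrumentedIsr(PKINTERRUPT Interrupt, PVOID Context)
    {
        PDEVICE_EXTENSION ext = (PDEVICE_EXTENSION)Context;
        UNREFERENCED_PARAMETER(Interrupt);

        ULONG intcsr = READ_REGISTER_ULONG((PULONG)(ext->Bar0 + INTCSR_OFFSET));
        if (!(intcsr & MY_INT_ACTIVE))
            return FALSE;           // shared level-triggered line: not ours

        // Clear the interrupt at the device FIRST, before any other work;
        // leave the enable bits alone (no enable/disable hammer).
        WRITE_REGISTER_ULONG((PULONG)(ext->Bar0 + INTCLR_OFFSET), MY_INT_ACTIVE);

        // Drop the GPIO the firmware raised when it fired the interrupt;
        // the logic analyzer then shows trigger-to-ISR latency directly.
        WRITE_REGISTER_ULONG((PULONG)(ext->Bar0 + GPIO_OUT_OFFSET), 0);

        // ... process the completed DMA ...
        return TRUE;
    }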
I thought what I created was a combination of WDM and KMDF. The reason I created a WDM driver was that I went looking for sample PLX code, and that was the sample Microsoft provided; the PLX sample code similarly uses WDM. I can see my needs are much simpler than what WDM provides, since I ended up removing a great deal. Simple answer: ignorance.
Now that I have most of the pieces of logic we need in place, I don’t mind restarting from a different model and porting those pieces over.
I’m fairly certain our CPU cards do have GPIO output; we haven’t been using it. I’ll look into that, but it sounds like I have more fundamental issues (like the wrong driver type for the purpose).
The PLX samples, unfortunately, are somewhat older than most of the participants on this list … it would be nice if MS deigned to modernize them, bring them into this century or even just provide another more modern sample somewhere …
There are several more modern PCIe DMA driver samples out there, such as this one [ https://github.com/usnistgov/RIFFA_Driver ], which will probably give a better starting point than the PLX9054 sample.
It uses MSI interrupts and 64-bit bus-master S/G DMA, and it is fully KMDF…
I didn’t think MSI interrupts worked on a purely PCI device; I got the impression the 9054 in particular does not support them. The DMA packets are all tiny and DMA will never go directly to user space, so one of the things I ended up doing was removing the scatter/gather logic.
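For what it’s worth, tiny fixed-size packets that never touch user buffers map naturally onto a single KMDF common buffer, which is the direction I’m heading. A sketch of the setup under my assumptions (PACKET_SIZE and the context fields are illustrative, not from the sample):

    #include <ntddk.h>
    #include <wdf.h>

    // Sketch: common-buffer DMA setup for small fixed packets (KMDF).
    #define PACKET_SIZE 256   // illustrative

    typedef struct _DEVICE_CONTEXT {
        WDFDMAENABLER    DmaEnabler;
        WDFCOMMONBUFFER  CommonBuffer;
        PVOID            RxVirtual;   // what the ISR reads
        PHYSICAL_ADDRESS RxLogical;   // what the device is programmed with
    } DEVICE_CONTEXT, *PDEVICE_CONTEXT;

    NTSTATUS SetupDma(WDFDEVICE Device, PDEVICE_CONTEXT ctx)
    {
        WDF_DMA_ENABLER_CONFIG dmaConfig;
        NTSTATUS status;

        // The 9054 is a 32-bit PCI bus master: plain packet profile, no S/G.
        WDF_DMA_ENABLER_CONFIG_INIT(&dmaConfig, WdfDmaProfilePacket, PACKET_SIZE);
        status = WdfDmaEnablerCreate(Device, &dmaConfig,
                                     WDF_NO_OBJECT_ATTRIBUTES, &ctx->DmaEnabler);
        if (!NT_SUCCESS(status))
            return status;

        // One physically contiguous buffer shared with the device; it DMAs
        // in and the ISR reads out of it, with no per-transfer mapping or queuing.
        status = WdfCommonBufferCreate(ctx->DmaEnabler, PACKET_SIZE,
                                       WDF_NO_OBJECT_ATTRIBUTES, &ctx->CommonBuffer);
        if (!NT_SUCCESS(status))
            return status;

        ctx->RxVirtual = WdfCommonBufferGetAlignedVirtualAddress(ctx->CommonBuffer);
        ctx->RxLogical = WdfCommonBufferGetAlignedLogicalAddress(ctx->CommonBuffer);
        return STATUS_SUCCESS;
    }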
I will take a look at the RIFFA driver to see if I can make that work…
I got as far as interrupt handling, and it appears the framework takes care of whether the interrupt is MSI-capable, falling back to the older wired interrupt if MSI is unsupported, so that doesn’t appear to be an issue.
In a short time I can say it looks a lot like the Microsoft sample I started with, which ships with the Windows driver samples and is found in:
Windows-driver-samples\general\PLX9x5x
I still have some work to do to make a good test of that logic; I will update later with any results I get using more purely KMDF.
The 9054 is a really old device (it dates from 1998), and it IS PCI 2.2 compliant. It includes a classic PCI bridge, and it does NOT support MSI (it uses INT A#).
Once again, OP… I’m going to suggest you put a logic analyzer on the interrupt line and take some measurements. This can’t take more than 20 minutes, and might reveal the source of your issue.
DPCs help you process more data with fewer interrupts. That’s a good thing, because too many interrupts reduce performance. The difference becomes more significant the more work the OS has to do to maintain proper context on each one, and this is an area where different OSes can show different behavior on the same hardware.
Using a DPC or not should have no bearing on the latency of getting your ISR called, but doing too much work in your ISR can. What exactly are you doing in that ISR?
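For reference, the canonical KMDF split looks like this (a generic sketch, not your driver; the device-access helpers are stand-ins for real hardware code):

    #include <ntddk.h>
    #include <wdf.h>

    // Stand-ins for real device access (not a real API):
    BOOLEAN DeviceClaimAndClearInterrupt(WDFINTERRUPT Interrupt);
    VOID CaptureHardwareState(WDFINTERRUPT Interrupt);
    VOID ProcessCapturedData(WDFINTERRUPT Interrupt);

    BOOLEAN EvtMyInterruptIsr(WDFINTERRUPT Interrupt, ULONG MessageID)
    {
        UNREFERENCED_PARAMETER(MessageID);

        if (!DeviceClaimAndClearInterrupt(Interrupt))  // check + clear at device
            return FALSE;                              // shared line, not ours

        // Only the truly time-critical work stays here at DIRQL ...
        CaptureHardwareState(Interrupt);

        // ... everything else is deferred to DISPATCH_LEVEL.
        WdfInterruptQueueDpcForIsr(Interrupt);
        return TRUE;
    }

    VOID EvtMyInterruptDpc(WDFINTERRUPT Interrupt, WDFOBJECT AssociatedObject)
    {
        UNREFERENCED_PARAMETER(AssociatedObject);

        ProcessCapturedData(Interrupt);   // the deferred bulk of the work
    }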
Also note that the CPU does not do DMA per se. The CPU reaches system memory directly through the instructions it executes; DMA means your device accesses system memory without involving the CPU.
What is the IRQ number displayed in Device Manager? And again: can you plug the card into another slot?
The only thing that comes to mind is some difference in interrupt routing behavior between real mode and Windows (the BIOS knows that Windows is running, thanks to ACPI).
By the way - you don’t mention the Windows version. Is it Win7 (Server 2008 R2)?
I’m not doing a lot in the (DMA-read-complete) ISR; I measure it at 9.5us. I copy off some telemetry and start a DMA write back to the board. By those estimates I should be consuming <10% of one physical core in the ISR, but I have no idea what other overhead that creates.
Right now I am re-implementing without WDM, based on the earlier suggestions; I’m assuming WDM carries higher overhead.
My understanding was that a DPC gets you out of the ISR sooner and runs at a lower priority, so that other interrupts can perform more time-critical tasks, and it also allows access to objects that are not legal to touch in an ISR.
And today:
The IRQ in Device Manager is 16. If I change slots, the interrupt changes. In my DOS-extender experiments I found that PCI interrupts A-D are mapped to APIC interrupts 16-19 (as I believe is typical). I just verified this: I moved the board two slots over and the interrupt changed from 16 to 18 (in Windows), as expected. In real mode it would have been mapped to IRQ 10, but the DOS extender runs in protected mode, and I was able to reinvent enough of Windows to use the APIC instead of the legacy 8259 PIC.
I am running this on Windows 10. It is the LTSC OEM version, which is Enterprise without some of the nonsense.