@“Peter_Viscarola_(OSR)” said:
So, the OP’s ISR takes way too long… assuming his measurements and math are correct. But:
Everything else, everything should be handled in the DPC.
is a bit too dogmatic, even for me. And it’s frequently not “best practice” either. Consider the lowly traditional keyboard driver. We wouldn’t really want to queue a DPC for every key press, right?
Somewhere between what we at OSR call The Famous Four Steps (the things you do in an ISR) and writing all your processing code in your ISR, reasonable engineers can agree on a balance.
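For readers who haven’t seen the Four Steps spelled out: roughly, (1) verify your device is actually interrupting, (2) dismiss the interrupt at the device, (3) capture any volatile data the hardware is presenting, (4) queue a DPC and get out. A minimal WDM-flavored sketch, where the device extension layout, register names, and bit masks are all invented for illustration:

```c
// Hedged sketch of a "Famous Four Steps" ISR, WDM style.
// MY_DEVICE_EXTENSION, the Regs layout, and MY_INT_PENDING are
// hypothetical; your hardware's dance will differ.
BOOLEAN
MyIsr(_In_ PKINTERRUPT Interrupt, _In_ PVOID Context)
{
    PMY_DEVICE_EXTENSION devExt = (PMY_DEVICE_EXTENSION)Context;
    ULONG status;

    UNREFERENCED_PARAMETER(Interrupt);

    // Step 1: Is *my* device interrupting? (The line may be shared.)
    status = READ_REGISTER_ULONG(&devExt->Regs->IntStatus);
    if ((status & MY_INT_PENDING) == 0) {
        return FALSE;               // Not ours; let the next ISR look.
    }

    // Step 2: Dismiss the interrupt at the device so it stops asserting.
    WRITE_REGISTER_ULONG(&devExt->Regs->IntAck, status);

    // Step 3: Capture any volatile state the hardware presents --
    // the one scan code, serial byte, or DMA completion status.
    devExt->LatchedData = READ_REGISTER_ULONG(&devExt->Regs->Data);

    // Step 4: Queue the DPC and get out. Everything else happens there.
    KeInsertQueueDpc(&devExt->Dpc, NULL, NULL);

    return TRUE;
}
```

Note that the ISR touches the hardware just enough to quiet it and latch the chunk; all real processing is deferred.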
[Dogmatic mode engaged] …
Actually, I would contend that the lowly keyboard is a perfect example of why you would queue a DPC for every single keypress. Consider the design pattern: each time a chunk of data arrives from a piece of hardware (a keystroke scan code, a network packet, etc.), that chunk of data, as presented by the hardware, is independent of and unique from every other chunk. It doesn’t matter what size the chunk is; the fact that the hardware chose to present it in the manner that it did must be respected, and that includes the response to that presentation. There is no “balance” involved: the data and its presentation are what they are.
Suppose that I had an FPGA that did a DMA transfer of one and only one byte, generating an interrupt for each one … would it be a good idea to say “I’m going to wait here in my ISR clearing interrupts until I’ve got a nice 64 bytes of data, then push a DPC”?
Go back to the keyboard … it’s sending me keystrokes one at a time; would it make sense to say “nope, I’m going to gather 64 keystrokes here in my ISR before I push them to a DPC”?
When I’ve got a serial port sending me one character at a time, should I say “well, I’m just going to wait here in my ISR until I’ve got a good 64 bytes of data before I push to a DPC”?
In each of these cases we would make things far, far worse by trying to “agree to a balance”, processing some things in the ISR to “save a DPC” (I really need to put that on a T-shirt).
Back in the wild-west days of XP, folks did try to “save a DPC” by doing everything in ISR space … and that’s why MS introduced timeout BSODs for ISRs and DPCs: certain network drivers wanted that extra bit of speed you get by not bothering to return from the ISR. I remember well porting a Linux network device driver to Windows years ago that had really good performance numbers … because it did everything, from first interrupt to machine powerdown, inside a never-to-be-returned-from ISR routine …
The ISR four-step dance pattern works well for serial traffic, DMA transfers, and keystrokes because it treats each chunk of data as the hardware presents it: unique from the other chunks, whether that’s a multi-megabyte S/G DMA buffer or a single serial or scan-code byte. You can do whatever slicing and dicing you want after you’ve collected that chunk, in a DPC or an APC or a system thread, but IMHO you have to respect the fact that the hardware, for whatever reason, has chosen to present that discrete chunk of data in the manner that it did, and you need to respond to that presentation in the same atomic fashion, even if that means dispatching a DPC for every single unique chunk it presents …
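To close the loop, the DPC side of that pattern just consumes the one chunk the ISR latched. An equally hypothetical companion sketch (names are invented, and ISR/DPC synchronization of the latched data, e.g. via KeSynchronizeExecution, is elided for brevity):

```c
// Hedged sketch: the DPC processes exactly one chunk per queueing,
// at DISPATCH_LEVEL, after the ISR has already quieted the device.
// MY_DEVICE_EXTENSION and ProcessChunk are hypothetical.
VOID
MyDpcRoutine(
    _In_ PKDPC Dpc,
    _In_opt_ PVOID Context,
    _In_opt_ PVOID SystemArgument1,
    _In_opt_ PVOID SystemArgument2)
{
    PMY_DEVICE_EXTENSION devExt = (PMY_DEVICE_EXTENSION)Context;
    ULONG chunk;

    UNREFERENCED_PARAMETER(Dpc);
    UNREFERENCED_PARAMETER(SystemArgument1);
    UNREFERENCED_PARAMETER(SystemArgument2);

    // Take the one chunk the ISR saved -- one keystroke, one serial
    // byte, one S/G DMA buffer's worth -- and respond to it atomically.
    chunk = devExt->LatchedData;

    // Whatever slicing and dicing you like happens here (or gets
    // pushed further down to a work item or system thread), never
    // back up in the ISR.
    ProcessChunk(devExt, chunk);
}
```

The point of the split is that queueing the DPC is cheap; the expensive part, whatever you do with the chunk, runs below device IRQL where it can’t starve other interrupts.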
[Dogmatic mode disengaged] … now back to my TB3 woes …