Re: Maximum interrupts per second

xxxxx@storagecraft.com said:

Knowing the fact that DPC latency is really very bad on NT, this
difference can result in much better overall interrupt handling
latencies (the whole path, including even the umode code, maybe) in
Linux.

When I made my measurements, I cleared the interrupt request in the ISR
in both NT and Linux (not the DPC) and measured the width of the INTA#
on the PCI bus. (The interrupt was not shared.) The device asserted the
hardware interrupt, and the first thing the ISR did was clear the interrupt
bit to turn off the IRQ. So I never even brought DPCs into the picture for
that measurement (though they were still part of the path to awakening a
thread).

And I can tell you that Linux was an order of magnitude faster at getting
the driver ISR started: 6 us for Linux 2.0 vs. 60 us for NT 4.0 SP3.

So is that what you are asking?

Steve Williams “The woods are lovely, dark and deep.
xxxxx@icarus.com But I have promises to keep,
xxxxx@picturel.com and lines to code before I sleep,
http://www.picturel.com And lines to code before I sleep.”

>> Try frigging the IDT directly. I don’t know how much a PC can handle, but

> Linux is reported to be way faster than NT as far as interrupt latency is

I have some doubts about it.

Me too. The numbers cited, 6us vs 60us on identical dual-boot hardware, are
silly. One has to wonder exactly what NT could be doing between the
interrupt assertion and the ISR that would account for an order-of-magnitude
difference. Without some explanation I have to consider this to be a
measurement error rather than anything real.

Mark Roddy
Windows 2000/NT Consultant
Hollis Technology Solutions
www.hollistech.com

>The numbers cited 6us vs 60us on identical dual boot hardware are
>silly. One has to wonder exactly what NT could be doing between the
>interrupt assertion and the isr that would account for an order of magnitude
>difference.

It seems like there could indeed be many things that could account for a
difference. I’m not an expert on the fine details of the Linux kernel, but
some possibilities are:

  1. NT uses its IRQL-based priority system instead of the native hardware
    interrupt controller’s priority management. This means NT has to
    reprogram the interrupt controller on every interrupt, while Linux may
    just have to issue an EOI to the controller when done.

  2. NT’s code working set for the interrupt path may be much larger, causing
    much more cache thrashing. NT’s general memory footprint is certainly much
    larger (my Linux server runs fine on a 32 MByte system; NT on a 32 MByte
    system is very slow).

  3. Linux was originally written for an x86 processor, so the architecture
    of its interrupt handling may be a much closer match to the hardware than
    NT’s (beyond the factors in #1).

I remember back in about 1984, I could get three serial ports to loop data
through the send/receive path at 9600 baud, with about 50% utilization of a
4.7 MHz 8088. That must have been just under 6000 interrupts/sec. If things
scaled directly with processor performance (they didn’t), a modern 700 MHz
Pentium III should handle numbers like 6 million interrupts/sec (assuming a
P-III 700 is 1000x faster), at 50% CPU load.
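As a quick check on those figures, assuming 8-N-1 framing (10 bits per character) and one interrupt per character in each direction:

```python
ports = 3
baud = 9600
bits_per_char = 10                        # 8 data bits + start + stop (8-N-1)

chars_per_sec = baud // bits_per_char     # 960 characters/sec per direction
irqs_per_sec = ports * 2 * chars_per_sec  # send + receive paths on each port
print(irqs_per_sec)                       # 5760: just under 6000/sec

speedup = 1000                            # assumed P-III 700 vs 4.7 MHz 8088
print(irqs_per_sec * speedup)             # 5760000: roughly 6 million/sec
```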

What’s always struck me is why the code in an OS designed for 100 million
people is less efficient than code I wrote 15 years ago for a customer base
of a few thousand. Let’s see, say that inefficient interrupt code wastes
0.1% of a modern $200 CPU. That’s $200 * 0.001 = $0.20 per processor over
its lifetime. Spread across 100 million processors at $0.20 each, we get as
much as $20 million wasted because of sloppy OS interrupt code. The only
saving factor is that PCs waste so much CPU time idling anyway that it’s
lost in the noise. If our OSes were much more clever about treating CPU time
as a valuable network resource, that $20 million would be a lot more
important. I think an important philosophical change has occurred in writing
software: that you should trade raw performance for development efficiency.
For low-volume uses I agree; for high volumes I don’t (and 100 million is
high volume in my book). Some argue this is exactly why Linux is popular,
because it makes more efficient use of your hardware.
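The back-of-the-envelope estimate above works out as follows (the $200 CPU price and 0.1% waste figure are the assumptions stated in the text, not measurements):

```python
cpu_price = 200.0         # assumed price per CPU, in dollars
wasted_fraction = 0.001   # 0.1% of CPU capacity wasted by sloppy IRQ code
installed_base = 100_000_000

waste_per_cpu = cpu_price * wasted_fraction   # about $0.20 per processor
total_waste = waste_per_cpu * installed_base  # about $20 million overall
print(waste_per_cpu, total_waste)
```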

- Jan