Your reasoning is spurious on a couple of levels. First, the issue is
whether or not you can read every single byte. It isn't necessary to
flood the system in order to create an error; you only need to cause a
delay of 70us. You could have problems at much lower interrupt rates,
as long as that 70us delay occurs.
Second, processor instructions aren’t really the issue any more. As you
point out, the processor has cycles to burn. Even so, the PCI spec
allows stalls of up to 30us on the PCI bus. If there is heavy busmaster
traffic going on while the UART is interrupting, it’s possible that each
read from any PCI device’s registers could take that 30us. This means
that it’s possible for any ISR to take upwards of 60us. If there’s even
one higher-priority interrupt active in the system, you could blow your
70us deadline.
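A back-of-envelope check of the argument above, as a sketch. The figures come from the text; the "two register reads per ISR" count is an assumption (e.g. one status read plus one data read), not something the thread states.

```python
# Worst-case timing sketch for the serial ISR under heavy busmaster traffic.
PCI_MAX_STALL_US = 30   # PCI spec allows stalls of up to 30us per access
READS_PER_ISR = 2       # assumed: one status read + one data read
DEADLINE_US = 70        # per-character budget at 115200 baud

isr_worst_case_us = PCI_MAX_STALL_US * READS_PER_ISR   # upwards of 60us
with_higher_pri_us = 2 * isr_worst_case_us             # one similar ISR ahead of us
print(isr_worst_case_us, with_higher_pri_us,
      with_higher_pri_us > DEADLINE_US)   # 60 120 True
```

Even one similarly-shaped higher-priority ISR running first pushes the total past the budget.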
Now bring processor power management into the question. Your processor
may be in one of the ACPI-defined low power states when the UART
interrupts. If it’s in C1, that’s only 4 or 5us of extra latency. If
it’s in C2, it’s somewhere between 10us and 60us, depending on the
chipset. If it’s in C3, which really saves power, you’re looking at
somewhere between 70us and 800us, depending on the processor and the
chipset.
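Putting the upper bounds quoted above next to the per-character budget makes the point concrete; this is just the text's numbers restated, with the chipset-dependent upper bound of each range used.

```python
# Upper-bound ACPI C-state exit latencies as quoted above (chipset-dependent).
C_STATE_EXIT_US = {"C1": 5, "C2": 60, "C3": 800}
DEADLINE_US = 70   # per-character budget at 115200 baud

for state, exit_us in C_STATE_EXIT_US.items():
    # C3 alone exceeds the whole budget before the ISR runs a single instruction.
    print(state, exit_us, exit_us > DEADLINE_US)
```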
Now bring BIOS into the question. Your BIOS, depending on how well it
was designed, will trigger at least a few System Management Interrupts.
The entry and exit code for these is usually uncached, in Real-Big Mode.
Thus just the entry and exit paths can take about 1500us. Anything they
actually do in response to the SMI is extra, running uncached code.
(There are a few very recent BIOSes that run cached, protected mode.
But they are the exception.) Some BIOSes will even go upwards of
33000us in an SMI. Dell will even take over the screen and show the
user a menu.
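A rough tally of the contributions named in this thread, against the ~70us per-character budget. Treating them as stacking into a single worst case is this sketch's assumption, not a claim from the text.

```python
# Worst-case tally of the latency sources discussed above.
DEADLINE_US = 70
contributions_us = {
    "PCI stalls inside the serial ISR (2 x 30us)": 60,
    "C3 exit latency (upper bound)": 800,
    "SMI entry/exit, uncached (typical)": 1500,
}
worst_case_us = sum(contributions_us.values())
print(worst_case_us, worst_case_us // DEADLINE_US)  # 2360 33
```

That is roughly 33 characters' worth of budget gone in one bad stretch, which is exactly why a FIFO-less UART at 115200 baud cannot be guaranteed lossless.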
Jake Oshins
Windows Kernel Group Interrupt Guy
This posting is provided “AS IS” with no warranties, and confers no
rights.
-----Original Message-----
Subject: RE: interrupt handshaking - was other crap
From: “Moreira, Alberto”
Date: Mon, 16 Dec 2002 09:25:58 -0500
X-Message-Number: 12
So, let’s see. 115,200 bits per second is 14,400 8-bit characters per
second. If one interrupt per character will flood the system, the peak
interrupt rate of the system is 14,400 interrupts per second. That
gives us around 70 microseconds per interrupt. At one gigahertz, that
is, 10^9/10^6 = 1000 instructions per microsecond, so, that’s around
70,000 instructions. Even if we use a fraction of the CPU power to
handle the interrupt, say, 7,000 instructions, that’s a lot of
instructions to handle one character, no? I’d expect a bit more from a
modern processor.
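The arithmetic in the message above can be re-run directly. One caveat: a typical 8N1 serial frame carries 10 bits per character (start + 8 data + stop), which would give 11,520 characters per second; the figure below follows the message and divides by the 8 data bits only.

```python
# Re-running the per-interrupt budget arithmetic from the message above.
BAUD = 115_200
BITS_PER_CHAR = 8                          # as assumed in the message (8N1 framing is really 10)
chars_per_sec = BAUD // BITS_PER_CHAR      # 14,400
us_per_char = 1_000_000 / chars_per_sec    # ~69.4us, "around 70"
insns_per_us = 10**9 // 10**6              # 1 GHz at one instruction per cycle
insns_per_char = us_per_char * insns_per_us
print(chars_per_sec, round(us_per_char, 1), round(insns_per_char))
# 14400 69.4 69444
```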
Alberto.
-----Original Message-----
From: Jake Oshins [mailto:xxxxx@windows.microsoft.com]
Sent: Saturday, December 14, 2002 2:29 PM
To: NT Developers Interest List
Subject: [ntdev] RE: interrupt handshaking - was other crap
I don’t completely agree. The serial port FIFO example given below is
spurious. The problem with a serial port with no FIFO is that the data
may be overwritten so quickly by new incoming data that you would need
fantastically small interrupt latency to handle it at 115200 BAUD.
With level-triggered devices (see my earlier messages) you end up doing
exactly the sort of “queuing” that you’re talking about just by holding
your interrupt in the asserted state. The OS will call your ISR
repeatedly until you finally release the signal. (In PCI devices, this
is the INTx# signal.)
With edge-triggered devices, your ISR must be able to handle every event
that your device currently has pending, since it may only be called
once. This may or may not involve internal queuing. It may just
involve setting individual status bits for each class of event.
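The two contracts described above can be sketched as a toy simulation. The names and structure here are purely illustrative, not any real driver API; real ISRs run in kernel mode and the "OS loop" is the interrupt controller re-delivering a still-asserted level-triggered line.

```python
# Toy model of the two interrupt-handling contracts described above.

def run_level_triggered(pending_events, isr_one_event):
    """OS keeps re-invoking the ISR while the line stays asserted."""
    calls = 0
    while pending_events:               # line asserted while events remain
        isr_one_event(pending_events.pop(0))
        calls += 1
    return calls                        # one call per event: "queuing" by wire

def run_edge_triggered(pending_events, isr_drain_all):
    """ISR may be called only once per edge; it must drain everything."""
    isr_drain_all(pending_events)       # handle every pending event now
    pending_events.clear()
    return 1

handled = []
assert run_level_triggered(["rx", "rx", "tx"], handled.append) == 3
assert run_edge_triggered(["rx", "rx"], handled.extend) == 1
print(handled)  # ['rx', 'rx', 'tx', 'rx', 'rx']
```

The edge-triggered drain may be a real queue or, as noted above, just a set of per-event-class status bits the DPC inspects later.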
Jake Oshins
Windows Kernel Group Interrupt Guy
This posting is provided “AS IS” with no warranties, and confers no
rights.
-----Original Message-----
Subject: Re: interrupt handshaking - was other crap
From: “Moreira, Alberto”
Date: Fri, 13 Dec 2002 11:34:23 -0500