Interrupt latency (again!)

Hi everyone,

First some context: we must use Windows 2000/XP in our embedded systems
(running on dual Xeon systems, with ACPI enabled and the latest Intel chipset).
We know that W2K/XP is not a real-time OS and we can live with it - however,
we do require a reasonable and deterministic PCI interrupt latency. Hoping
to achieve this, we changed our PCI device’s interrupt vector to the highest
(0xff) by reprogramming the APIC and changing the IDT accordingly.

Then we measured our latency as follows:

{ disable interrupts
store TIME1
write to a register in our hardware (PCI) to trigger the interrupt
enable interrupts
}
PCI Interrupt -> IDT -> store TIME2 -> Windows interrupt preamble

(TIME2 - TIME1) should then give us a pretty pure interrupt latency, between
the time the I/O APIC pin is set and the time the CPU core is interrupted by
the local APIC.
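
In rough C, the measurement looks like the sketch below. The trigger
register is a stand-in for our hardware, and it assumes x86 RDTSC plus the
MSVC _disable/_enable kernel-mode intrinsics:

    #include <intrin.h>     /* __rdtsc, _disable, _enable (MSVC, x86) */

    /* Stand-in for the device register that triggers our PCI interrupt;
       in the real driver this points into an MmMapIoSpace'd BAR. */
    static volatile unsigned long FakeReg;
    static volatile unsigned long *TriggerReg = &FakeReg;

    static unsigned __int64 Time1;
    static unsigned __int64 Time2;   /* stored by the ISR stub, see below */

    void TriggerAndStamp(void)
    {
        _disable();          /* cli: nothing else can run in this window */
        Time1 = __rdtsc();   /* TIME1 */
        *TriggerReg = 1;     /* raise the interrupt line in hardware */
        _enable();           /* sti: the pending interrupt is taken here */
    }

    /* The very first thing the ISR does, ahead of the Windows preamble:
           Time2 = __rdtsc();
       (TIME2 - TIME1) is then the latency in TSC ticks. */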

Result: we get occasional 150 uSec glitches (approx. once / 24 hours) that
we still can’t explain. We know that no other interrupt or exception
occurred between the trigger and the interrupt. That said, here are some
questions:

1) Is there a way to determine if another driver or Windows has disabled
interrupts or elevated the priority in the local APIC TPR?
2) Could it be that we are pre-empted by an SMI? Is there a way to determine
that an SMI has occurred?
3) We tried to disable SMIs on our machine by resetting the GBL_SMI_EN
bit in the SMI_EN register of an ICH3 south bridge (see the sketch below).
After doing that, we observed awful glitches of 16 mSecs. Any idea why?
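
For the record, the write we performed looks roughly like the sketch below.
Treat it as illustrative only: it assumes an ICH-family PM I/O block (PMBASE
already read from the LPC bridge's config space at device 31, function 0,
offset 0x40), with SMI_EN at PMBASE+0x30 and GBL_SMI_EN as bit 0, per our
reading of the ICH datasheet:

    #include <wdm.h>

    #define SMI_EN_OFFSET  0x30        /* SMI_EN, PMBASE-relative */
    #define GBL_SMI_EN     0x00000001  /* global SMI enable, bit 0 */

    /* Clear GBL_SMI_EN; PmBase is the ACPI/PM I/O base taken from
       the LPC bridge's PCI config space. */
    VOID DisableGlobalSmi(ULONG PmBase)
    {
        PULONG SmiEnPort = (PULONG)(ULONG_PTR)(PmBase + SMI_EN_OFFSET);
        ULONG  SmiEn     = READ_PORT_ULONG(SmiEnPort);

        WRITE_PORT_ULONG(SmiEnPort, SmiEn & ~GBL_SMI_EN);
    }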

Thanks,

Patrick Laniel, Group Leader
CAE Inc.

So you said two things that I think are interesting.

“We know that W2K/XP is not a real-time OS and we can live with it”
“however, we do require … deterministic PCI interrupt latency”

These two statements are directly contradictory. You’ll never be happy.
With that said, you asked a few other questions.

  1. No, short of running the OS in a virtual machine and virtualizing the
    EFLAGS register and the APIC TPR (see the sketch after this list).
  2. Yes, it’s possible that the source of your delay is an SMI. In fact,
    it’s probable. But there’s no way to prove it without hooking up some sort
    of ICE or ITP. I’m not sure if SoftICE could help you, but it seems like it
    probably could.
  3. Disabling SMI will probably make your machine too unstable to use.
    Chipset vendors use SMIs to cover up errata. If you stop it from working,
    the cockroaches will start crawling out of the woodwork. I can’t directly
    explain what you observed, except to say that almost any behavior can happen
    when you violate the manufacturer’s correctness guarantees.
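
To make the first point concrete: natively, all you can do is sample the
TPR, and a sample proves nothing about a transient elevation that came and
went between your trigger and your ISR. A sketch, assuming the local APIC at
its architectural default base (0xFEE00000, TPR at offset 0x80), mapped with
MmMapIoSpace:

    #include <wdm.h>

    #define LOCAL_APIC_BASE  0xFEE00000ULL  /* architectural default */
    #define APIC_TPR_OFFSET  0x80

    /* Returns the local APIC Task Priority Register *right now*. It can
       never tell you what the TPR was a microsecond ago, which is why
       sampling cannot prove or disprove a transient elevation. */
    ULONG ReadLocalApicTpr(VOID)
    {
        PHYSICAL_ADDRESS Pa;
        PUCHAR Apic;
        ULONG Tpr;

        Pa.QuadPart = LOCAL_APIC_BASE;
        Apic = (PUCHAR)MmMapIoSpace(Pa, PAGE_SIZE, MmNonCached);
        if (Apic == NULL) {
            return 0;
        }
        Tpr = *(volatile ULONG *)(Apic + APIC_TPR_OFFSET);
        MmUnmapIoSpace(Apic, PAGE_SIZE);
        return Tpr;
    }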

I’ll skip the sermon on why you shouldn’t have moved your vector to 0xFF
today, since you have bigger issues.


Jake Oshins
Windows Kernel Group

This posting is provided “AS IS” with no warranties, and confers no rights.


Patrick Laniel wrote:

> First some context: we must use Windows 2000/XP in our embedded systems
> (running on dual Xeon systems, with ACPI enabled and the latest Intel
> chipset). We know that W2K/XP is not a real-time OS and we can live with
> it - however, we do require a reasonable and deterministic PCI interrupt
> latency.

Can you try another chipset?

> Hoping to achieve this, we changed our PCI device’s interrupt vector to the
> highest (0xff) by reprogramming the APIC and changing the IDT accordingly.

Don’t do that, it will not help you.

> 2) Could it be that we are pre-empted by an SMI? Is there a way to
> determine that an SMI has occurred?

The only possible way to detect an SMI is to read chipset registers
and/or modify the BIOS. For example, you can open SMM memory, smash
the original BIOS SMM code, and then run your test. If the system hangs or
reboots, then an SMI was generated.
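
A less destructive variant of reading chipset registers is to watch the SMI
status register from a driver and see whether it changes across one of your
glitches. A sketch only - it assumes an ICH-family part with SMI_STS at
PMBASE+0x34; check the datasheet for your exact south bridge:

    #include <wdm.h>

    #define SMI_STS_OFFSET  0x34   /* SMI_STS on ICH parts, PMBASE-relative */

    /* Read the SMI status register. Sample it before the trigger and again
       after a latency glitch; if the sticky status bits changed, an SMI was
       serviced in between. PmBase: ACPI/PM I/O base from the LPC bridge. */
    ULONG ReadSmiStatus(ULONG PmBase)
    {
        return READ_PORT_ULONG((PULONG)(ULONG_PTR)(PmBase + SMI_STS_OFFSET));
    }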

> 3) We tried to disable SMIs on our machine by resetting the GBL_SMI_EN
> bit in the SMI_EN register of an ICH3 south bridge. After doing that, we
> observed awful glitches of 16 mSecs. Any idea why?

BIOS SMM code is now an almost inseparable part of current chipsets.

I would recommend trying another chipset and/or licensing the BIOS code from
Phoenix/AMI/General Software and customizing the SMM code for your real-time
application.

Also see my old response in comp.realtime:
http://groups.google.com/groups?q=smm+latency+budko&ie=UTF-8&oe=UTF-8&hl=en&btnG=Google+Search

Dmitriy Budko, VMware

Hi Patrick,
I wanted to know something about reprogramming the APIC.
Can I reprogram the APIC in Windows 2000/XP to generate clock
interrupts at a 1 ms interval, or at any configurable interval?
How can I do that?
Thanks in advance,
regards,
Mayank


Thanks Dmitriy and Jake for your answers - much appreciated!

You mentioned that changing our interrupt vector might not help us. Can you
elaborate on this? Again, this is absolutely not meant for a driver that we
will publicly release but only for our embedded systems running our own
software. I guess this is similar to what a real-time extension would do.
Our tests show that the latency is not acceptable when we leave our
interrupt at a lower vector.

Thanks again,

Patrick Laniel, Group Leader
CAE Inc.

This is incredibly well covered in the archives.


Jake Oshins
Windows Kernel Group

This posting is provided “AS IS” with no warranties, and confers no rights.

Jake Oshins wrote:

  2) Yes, it’s possible that the source of your delay is an SMI. In fact,
    it’s probable. But there’s no way to prove it without hooking up some sort
    of ICE or ITP. I’m not sure if SoftICE could help you, but it seems like it
    probably could.
  3) Disabling SMI will probably make your machine too unstable to use.
    Chipset vendors use SMIs to cover up errata. If you stop it from working,
    the cockroaches will start crawling out of the woodwork. I can’t directly
    explain what you observed, except to say that almost any behavior can happen
    when you violate the manufacturer’s correctness guarantees.

I find the combination of these 2 statements interesting. What Jake is
essentially saying is that *PCs* aren’t suitable for real-time work.

This is a very different statement than saying that “Windows isn’t a
real-time operating system”. It sounds intuitively correct, now that I
think of it that way, but I haven’t heard it stated that way before.

Comments anyone?

…/ray..

Please remove “.spamblock” from my email address if you need to contact
me outside the newsgroup.

> I find the combination of these 2 statements interesting. What Jake is
> essentially saying is that *PCs* aren’t suitable for real-time work.

Depends upon your definition of realtime I think. What latency do you want?

Maxim Shatskih, Windows DDK MVP
StorageCraft Corporation
xxxxx@storagecraft.com
http://www.storagecraft.com

Ray Trent wrote:

> I find the combination of these 2 statements interesting. What Jake is
> essentially saying is that *PCs* aren’t suitable for real-time work.
>
> This is a very different statement than saying that “Windows isn’t a
> real-time operating system”. It sounds intuitively correct, now that I
> think of it that way, but I haven’t heard it stated that way before.
>
> Comments anyone?

I’d say that fixing one doesn’t fix the other. I don’t expect Windows to
work as a real-time OS EVER, as this is not what it’s intended for.

If we for the moment ignore the Windows part of the discussion, my
experience is that it’s entirely possible to find hardware that works in a
real-time environment. However, it is also possible that very similar
hardware with just the odd component replaced, will NOT work in a real-time
environment.

Last time I dabbled with these things, I was using 486 processors, and I
measured the real-time interrupt response. It was CONSISTENTLY less than
10 us over a 24hr period, averaging about 5 us. This, of course, was not
using Windows.

As the chipsets in modern PCs are more complex, and sometimes require
software workarounds before they can be used at all, there is a good chance
that some of those workarounds are implemented using SMI. Similarly, USB
legacy keyboard support has traditionally been handled with SMI.

For short interrupt latency, it’s necessary to FIND hardware that will
support it, not just go out and buy the first motherboard found in a shop.
And it may be that you need to find the correct BIOS for the purpose [so
that you can disable any SMI that isn’t necessary for the purposes of your
system]. Someone else mentioned General Software’s products; from what I’ve
seen of them, they’re very good. Other suppliers do exist.

Another possibility is to talk to one of the dedicated embedded system
manufacturers. Some of them build pretty powerful systems, and are more
likely to help out with interrupt latency or other hardware/software
interface problems.

Going back to Windows: if you have “intermediate real-time needs”, I’m
sure Windows is going to be fine, especially if the system builder has 100%
control over what hardware/software combination goes into the system. But if
the need is “hard real-time”, then it’s nearly guaranteed that Windows will
not be enough. Look for a solution where you get a different piece of
hardware to do the hard real-time work, and then transfer the results to
Windows, for instance via Ethernet or a proprietary solution (e.g. a PCI
card with a processor and a “dual port” memory).


Mats



I think the general definition of a real-time operating system is that
some small and *specific* latency is guaranteed. I.e. for some class of
usable interrupts, you can count on *exactly* how long the latency will
be every time you’re called. The actual latency would determine how
*good* a real-time OS it was, and whether it was suitable to a
particular need, of course.

On the other hand it’s been many years since I wrote one :-).

Maxim S. Shatskih wrote:

> > I find the combination of these 2 statements interesting. What Jake is
> > essentially saying is that *PCs* aren’t suitable for real-time work.
>
> Depends upon your definition of realtime I think. What latency do you want?


…/ray..

Please remove “.spamblock” from my email address if you need to contact
me outside the newsgroup.

> I think the general definition of a real-time operating system is that
> some small and *specific* latency is guaranteed. I.e. for some class of
> usable interrupts, you can count on *exactly* how long the latency will
> be every time you’re called. The actual latency would determine how
> *good* a real-time OS it was, and whether it was suitable to a
> particular need, of course.
>
> On the other hand it’s been many years since I wrote one :-).

I’d take a point out of this:
“you can count on exactly how long the latency will be every time you’re
called”

I hope you mean the MAXIMUM latency. Only the highest priority interrupt
will ever have something that resembles a fixed latency, and most OS’s will
have sections of code that disable interrupts or in some other way prevent
any other code from running (spinlocks for instance).

Anything below the highest priority will have a latency of “highest priority
interrupt time” + “max latency for an interrupt of highest priority”. Of
course, this is still the maximum latency; if there is no highest priority
interrupt ongoing, and the processor is sitting in user-land code at the
moment, it will just whizz away to the interrupt, almost instantaneously.

I’ve seen some doctoral students describe how you should write code that
always takes the same amount of time, irrespective of which path is taken,
and other similarly theoretical ideas. It’s good as a thought experiment,
but it becomes increasingly difficult with modern superscalar, out-of-order
microprocessors (how many nops do you need to match a PADD xmm1, xmm2?).


Mats

Well, for 1 processor, the cardinality of the “class of usable
interrupts” would typically be 1, of course. Most real-time OS’s I’ve
used/written have offered at least that 1 fixed-latency interrupt level
(to the granularity of a single pipelined instruction anyway… only on
an old-style “true” RISC processor is this literally fixed).

But from what has been described here, random general-use PCs don’t have
even 1 level of software-usable known-latency interrupts, to any
definable granularity (because of potentially arbitrary length SMIs),
which in many people’s book means that they can’t be real-time. As you
said, one could put together a specific system that had this
characteristic. But *no* OS will save you from the general case…

If MAXIMUM latency were all that mattered then, barring deadlocks, bugs,
etc., you could define a maximum latency that a particular Windows
installation will encounter for some sufficiently high level of
interrupt. It might not be low enough to satisfy certain “real-time”
needs, but I’ll bet it’s lower than just about anything you could have
found/bought 20 years ago :-)… Heck, the OP was complaining about
random 150uS latencies… Luxury!!! Why, I remember when…

Now, low latency thread scheduling is a different matter. Windows is
still hopeless at that…



…/ray..

Please remove “.spamblock” from my email address if you need to contact
me outside the newsgroup.

Twenty years back - well, just about 18 years back - I was involved in a
simulation of high-speed switching for network nodes, and at that time
almost all of the gurus were trying to come up with a very large set of
register banks, wired in a way so that you would not need too much context
switching; the paradigm was a bit different… Then of course it was between
TCP/IP and SNA (sorry folks for mentioning SNA :) )… Even then, with all
the priority-based hardware scheduling, when it came to supporting different
window-pacing, sequence numbering, M/M/k (Markovian) and G/G/k queues, and
leaky-bucket controls, we really had to throw our hands up - there was a
French guy who literally plotted the path lengths to look like a New York
subway map, including Brooklyn and Long Island :).

Now, if a random Intel-based board has this SMI handling under the table, it
would be futile not to consider it when we do the accounting for real-time
(some very small time interval)…

On top of it, Windows is BY DESIGN, FOR DESIGN, OF DESIGN a general-purpose
OS, so why should someone backfit it? Why should we use a jacket as pants
:) - sure, we can use it temporarily, but it might not look very trendy!

But if you look at the CE or XP Embedded benchmarks, they match up to
hard real-time, and a lot of the kernel ideas are incorporated from NT. I
WOULD BE REALLY SURPRISED IF MS DOES NOT HAVE AT LEAST 5 TO 10 DIFFERENT
FLAVORS OF OSes UP THEIR SLEEVES. What Jake mentioned is a sort of hidden
cost associated with those boards to cover chipset flaws, and that
completely makes sense to me!!!

-pro



In my original post, I didn’t mean to trigger another debate about the
Windows real-timeness, a subject that has already been beaten to death :-)

I was simply wondering if I could reliably perform this simple task: update
a few registers in my hardware upon interrupt. I don’t need all the other
aspects of a real-time OS (for example, guaranteed thread scheduling). Our
project requirements are forcing us to use XP and I could have used a
real-time extension, but I don’t think that it would have done any better
(on that specific aspect) and we don’t need all the other features (and
complexity) that comes with it.

From the results of our tests and the answers we got from this forum, it
seems that it comes down to choosing the right motherboard with the right
BIOS… Actually, we just performed some interrupt tests on more machines,
and on one of them we obtained interesting results: we were able to get a
max latency of 15 uSecs over a period of 24 hours. Have we just been lucky?
We’ll do some more testing.

Patrick Laniel
CAE Inc.

Patrick Laniel wrote:

> From the results of our tests and the answers we got from this forum, it
> seems that it comes down to choosing the right motherboard with the right
> BIOS… Actually, we just performed some interrupt tests on more machines,
> and on one of them we obtained interesting results: we were able to get a
> max latency of 15 uSecs over a period of 24 hours. Have we just been lucky?
> We’ll do some more testing.

I think, if you can find a board that doesn’t have a bad defect somewhere in
the chipset that requires an SMI to fix it, you should be fine. And it’s also
possible that the BIOS on a particular system will be better at “fixing” a
particular chipset bug/defect, inasmuch as it’s able to identify correctly
whether or not some feature in the chipset is turned on, rather than blindly
enabling an SMI to fix something that isn’t being used anyway. Or it’s just
not doing the SMI as often… ;-)

Also, it’s more likely that you’ll have luck on these things with mature,
long-been-in-production chipsets than with the recently introduced chipsets.
[Avoiding VIA might also help, in my personal experience, but I’m sure they
have improved a lot in the last few years].


Mats
