Twenty years back, well just about 18 years back, I was involved in a
simulation of high-speed switching for network nodes, and at that time
almost all of the gurus were trying to come up with a very large set of
register banks, wired in such a way that you would not need too much
context switching; basically the paradigm was a bit different… Then of
course it was between TCP/IP and SNA (sorry folks for mentioning
SNA :) )… Even then, with all the priority-based hardware scheduling, when
it came to supporting different window pacing, sequence numbering, M/M/k
(Markovian queues), G/G/k, and leaky-bucket controlling, we really had to
throw our hands up. There was a French guy who literally plotted the path
lengths to look like the New York subway map, including Brooklyn and Long
Island :).
Now if a random Intel-based board has that SMI handling going on under the
table, it would be futile not to account for it when we do the bookkeeping
for real time (some very small time interval)…
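For anyone who hasn’t bumped into the “leaky-bucket controlling” I mentioned
above: it is just a rate limiter that drains at a fixed rate and drops (or
queues) whatever does not fit. A minimal sketch, with names and units of my
own choosing and nothing to do with any particular hardware or stack:

#include <stdint.h>
#include <stdbool.h>

/* Minimal leaky-bucket rate limiter: the bucket drains at a fixed rate,
 * and a packet is admitted only if it still fits in the bucket.
 * All names and units here are illustrative only. */
typedef struct {
    uint64_t capacity;    /* bucket size, in bytes              */
    uint64_t level;       /* current fill level, in bytes       */
    uint64_t drain_rate;  /* bytes drained per millisecond      */
    uint64_t last_ms;     /* timestamp of the previous update   */
} leaky_bucket;

static bool bucket_admit(leaky_bucket *b, uint64_t now_ms, uint64_t pkt_bytes)
{
    /* Drain the bucket for the time that has elapsed. */
    uint64_t drained = (now_ms - b->last_ms) * b->drain_rate;
    b->level = (drained >= b->level) ? 0 : b->level - drained;
    b->last_ms = now_ms;

    /* Admit the packet only if it still fits under the contract. */
    if (b->level + pkt_bytes > b->capacity)
        return false;     /* over the rate: drop it or queue it */
    b->level += pkt_bytes;
    return true;
}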
On top of it, Windows is BY DESIGN, FOR DESIGN, OF DESIGN a general-purpose
OS, so why should someone backfit it? Why should we use a jacket as a pair
of pants :)? Sure, we can use it temporarily, but it might not look very
trendy!
But if you look at the CE or XP Embedded benchmarks, they match up to hard
real time, and a lot of the kernel ideas are incorporated from NT. I WOULD BE
REALLY SURPRISED IF MS DOES NOT HAVE AT LEAST 5 TO 10 DIFFERENT FLAVORS OF
OSes up their sleeves. What Jake mentioned is the sort of hidden cost
associated with those boards to cover chipset flaws, and that completely
makes sense to me!!!
-pro
-----Original Message-----
From: xxxxx@lists.osr.com
[mailto:xxxxx@lists.osr.com] On Behalf Of Ray Trent
Sent: Thursday, May 13, 2004 4:31 PM
To: Windows System Software Devs Interest List
Subject: Re:[ntdev] Interrupt latency (again!)
Well, for 1 processor, the cardinality of the “class of usable
interrupts” would typically be 1, of course. Most real-time OS’s I’ve
used/written have offered at least that 1 fixed-latency interrupt level
(to the granularity of a single pipelined instruction anyway… only on
an old-style “true” RISC processor is this literally fixed).
But from what has been described here, random general-use PCs don’t have
even 1 level of software-usable known-latency interrupts, to any
definable granularity (because of potentially arbitrary length SMIs),
which in many people’s book means that they can’t be real-time. As you
said, one could put together a specific system that had this
characteristic. But *no* OS will save you from the general case…
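If you want a feel for how long the SMIs on a given board actually are, one
crude approach is to spin on the TSC and flag any gap the loop itself cannot
explain. A minimal user-mode sketch follows; the threshold and iteration
count are arbitrary, and from user mode ordinary interrupts and the scheduler
show up too, so a kernel-mode version at raised IRQL would isolate the SMIs
better:

#include <stdio.h>
#include <stdint.h>
#include <intrin.h>   /* __rdtsc() on MSVC; x86intrin.h elsewhere */

int main(void)
{
    /* Any gap between consecutive TSC reads that is much longer than the
     * loop body is something else running: an interrupt, the scheduler,
     * or (with everything else quiesced) an SMI. */
    const uint64_t threshold = 100000;        /* cycles; tune per CPU */
    uint64_t prev = __rdtsc();

    for (long i = 0; i < 500000000L; i++) {
        uint64_t now = __rdtsc();
        if (now - prev > threshold)
            printf("gap of %llu cycles\n", (unsigned long long)(now - prev));
        prev = now;
    }
    return 0;
}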
If MAXIMUM latency were all that mattered then, barring deadlocks, bugs,
etc., you could define a maximum latency that a particular Windows
installation will encounter for some sufficiently high level of
interrupt. It might not be low enough to satisfy certain “real-time”
needs, but I’ll bet it’s lower than just about anything you could have
found/bought 20 years ago :-)… Heck, the OP was complaining about
random 150 µs latencies… Luxury!!! Why, I remember when…
Now, low latency thread scheduling is a different matter. Windows is
still hopeless at that…
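To put a rough number on that, here is a quick-and-dirty user-mode sketch:
ask for a 1 ms sleep and see what you actually get back. It is only an
illustration, not a rigorous benchmark, and the constants are arbitrary:

#pragma comment(lib, "winmm.lib")
#include <stdio.h>
#include <windows.h>

int main(void)
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);
    timeBeginPeriod(1);               /* lower the timer granularity */

    for (int i = 0; i < 20; i++) {
        QueryPerformanceCounter(&t0);
        Sleep(1);                     /* request ~1 ms */
        QueryPerformanceCounter(&t1);
        double ms = 1000.0 * (double)(t1.QuadPart - t0.QuadPart)
                           / (double)freq.QuadPart;
        printf("asked for 1 ms, got %.3f ms\n", ms);
    }

    timeEndPeriod(1);
    return 0;
}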
xxxxx@3Dlabs.com wrote:
>I think the general definition of a real-time operating system is that
>some small and *specific* latency is guaranteed. I.e. for some class of
>usable interrupts, you can count on *exactly* how long the latency will
>be every time you’re called. The actual latency would determine how
>*good* a real-time OS it was, and whether it was suitable to a
>particular need, of course.
>
>On the other hand it’s been many years since I wrote one :-).
>
I’d take a point out of this:
“you can count on exactly how long the latency will be every time you’re
called”
I hope you mean the MAXIMUM latency. Only the highest-priority interrupt
will ever have something that resembles a fixed latency, and most OS’s will
have sections of code that disable interrupts or in some other way prevent
any other code from running (spinlocks, for instance).
Anything below the highest priority will have a latency of “highest-priority
interrupt time” + “max latency for an interrupt of highest priority”. Of
course, this is still the maximum latency; if there is no highest-priority
interrupt ongoing and the processor is sitting in user-land code at the
moment, it will just whiz away to the interrupt, almost instantaneously.
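As a back-of-the-envelope illustration of that sum (the numbers are made up,
and it assumes each higher-priority source fires at most once while you
wait):

#include <stdio.h>

int main(void)
{
    /* Max service time of each handler, highest priority first (us). */
    double service_us[] = { 5.0, 20.0, 50.0 };
    double max_masked_us = 30.0;   /* longest interrupts-off section */
    double above = 0.0;

    for (int lvl = 0; lvl < 3; lvl++) {
        /* Worst case: the masked section runs, then every higher-priority
         * handler runs back to back, before this level is dispatched. */
        printf("level %d: worst-case latency = %.1f us\n",
               lvl, max_masked_us + above);
        above += service_us[lvl];
    }
    return 0;
}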
I’ve seen some doctoral students describe how you should write code that
always takes the same amount of time, irrespective of which path is taken,
and other similarly theoretical ideas. It’s good as a thought experiment,
but it becomes increasingly difficult with modern superscalar, out-of-order
executing microprocessors (how many NOPs do you need to balance a
PADD xmm1, xmm2?).
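For the curious, the textbook version of that idea is to replace the branch
with arithmetic so that both outcomes do the same work, which (as noted)
only equalizes the instruction count, not the cycles an out-of-order core
actually spends. A small sketch:

#include <stdint.h>

/* Branchy version: timing depends on which side is taken/predicted. */
static uint32_t select_branchy(int cond, uint32_t a, uint32_t b)
{
    return cond ? a : b;
}

/* Branchless version: the mask is all-ones or all-zeros, so the same
 * instructions execute regardless of cond. */
static uint32_t select_flat(int cond, uint32_t a, uint32_t b)
{
    uint32_t mask = (uint32_t)-(cond != 0);
    return (a & mask) | (b & ~mask);
}

int main(void)
{
    /* Both calls return 1 here; only their timing behaviour differs. */
    return (int)(select_branchy(1, 1, 2) - select_flat(1, 1, 2));
}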
–
Mats
–
…/ray..
Please remove “.spamblock” from my email address if you need to contact
me outside the newsgroup.