Reply: how to program I/O

First off, I’m rather new to windows drivers as well, but I can share some
things that I’ve learned so far…

Devices that require specific (short) timing to work properly are not
really a good fit for Windows.
Basically there are 2 things one can do:

  1. disable interrupts (cli)
  2. raise IRQL (KeRaiseIrql)

Disabling interrupts (cli) is generally considered a bad idea; maybe under
some special conditions it's OK - but just don't do it until you know
exactly what you're doing :-)
(The “cli” instruction is not even mentioned in the Windows NT/2000 DDK.)
Raising IRQL has a similar effect - no interrupts with an IRQL lower than
or equal to the current IRQL are processed (on the current processor).
Small delays can and probably will still occur even when raising to
CLOCK_LEVEL or higher, but that depends on the hardware and on the
kernel/HAL that's used.
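
For what it's worth, here is a minimal sketch of what option 2 can look
like - raising IRQL around a short, timing-critical register sequence. The
port pointers, the MY_STATUS_READY bit and the retry bound are invented
placeholders, so treat this as an illustration of the pattern, not a recipe
for any particular device:

#include <ntddk.h>

/* Sketch only: the ports, the ready bit and the bound are made up. */
#define MY_STATUS_READY 0x01

VOID MyTimedHandshake(PUCHAR DataPort, PUCHAR StatusPort, UCHAR Value)
{
    KIRQL oldIrql;
    ULONG retries;

    /* Mask all maskable interrupts on this processor for the duration
       of the critical sequence. */
    KeRaiseIrql(HIGH_LEVEL, &oldIrql);

    WRITE_PORT_UCHAR(DataPort, Value);

    /* Bounded busy-wait for the device to acknowledge - the window at
       raised IRQL must stay as short as possible. */
    for (retries = 0; retries < 100; retries++) {
        if (READ_PORT_UCHAR(StatusPort) & MY_STATUS_READY) {
            break;
        }
    }

    KeLowerIrql(oldIrql);
}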

Anyway, no matter what, you can never assume that your code will not be
delayed for at least some microseconds. (Things like the NMI, for example -
of course the NMI can be disabled as well, but that really would be a bad
idea.)

See “Always Preemptible and Always Interruptible” and “Managing Hardware
Priorities” in the DDK documentation for some general information on the
topic.

The ugly side of all this is that there are still so many devices that
require very low latencies or strict timing to work properly, and still so
many drivers that introduce too many delays at high IRQL.

Regards,

Paul Groke

“Alan Kung”
Sent by: xxxxx@lists.osr.com
11.06.2004 04:10
Please reply to “Windows System Software Devs Interest List”

To: “Windows System Software Devs Interest List”

Cc:
Subject: [ntdev] how to program I/O

Hi everyone:

I am a novice at NT driver writing; I am studying Programming the Windows
Driver Model.

In Walter Oney's book:

Chapter 4, Synchronization
He says:

The operating system can preempt any subroutine at any moment for an
arbitrarily long period of time, …

If so, when the H/W programming is not interrupt driven and must follow
some H/W communication protocol, how do you do the I/O programming so that
it can follow the H/W handshake timing protocol? (That is, what if the I/O
routine is in the middle of a H/W handshake and the O.S. preempts it for a
while?)

Best Regards,

Alan

Dear Paul:

Thanks very much for your information.

Do you know how to decide what IRQL level a driver should raise to?

> many drivers that introduce too many delays at high IRQL.

Does this impact system performance? Is there any way to improve it?

Best Regards

Alan

Alan,

What are you trying to achieve and exactly what sort of timing requirements
do you have?

Obviously, what Walter writes in the book is true, but to a certain extent,
you can actually get fairly good timing within Windows, as long as you can
live with the fact that it’s never going to be extremely precise.

As someone else mentioned, if you have STRICT real-time requirements, the
hardware isn't going to work well in a Windows environment (or any other
environment where task scheduling isn't based on strict priorities and
where tasks/processes/threads of the same priority are allowed to be
scheduled in - so Linux, Unix, OS/2, OS X, VMS, etc. are also out of the
question).

The best solution to this is really to have a dedicated processor working
alongside your Windows processor (such as a small microcontroller fitted to
the hardware you're trying to control). Or, of course, build more
“intelligence” into the hardware itself, so that it's more autonomous and
can handle the fact that it's not being told exactly what to do at every
single moment in time, but can “behave itself if left ignored”.

One of the things that Paul didn't mention is SMI (System Management
Interrupt). This is an interrupt that has higher priority than NMI, and goes
completely outside the existing interrupt system in as much as it hasn't got
a vector in the normal sense, and it's very “self-controlled”. It also has
the “feature” that it puts the processor into a special mode, after saving
all registers in memory. All of this makes SMI almost impossible to predict.
Also, there is no common place for SMI to be turned off, because it's part
of the chipset features, and each chipset manufacturer will have their own
implementation of SMI handling. Also, it may not be a good idea to turn
off SMI anyway, because it's often used to cover over chipset bugs or
“strange hardware quirks”. For instance, SMI is often used to reroute a
particular I/O port to a different address, so that non-standard hardware
can appear to be standard.

SMIs can take several microseconds (or in some cases MANY microseconds).
Aside from that, you also have to contend with chipset features that block
PCI access for a few hundred microseconds, and this can happen outside your
control too: for instance, a DMA access from a hard disk may cause the
chipset to “block PCI access” for a significant period (this is of course
not how things SHOULD be designed, but it does happen that chipsets have
these types of “misfeatures”).

So if you expect this to work in any reasonable PC, you’re probably going to
have to think about how to fix the hardware.

Oh, and your question about “How does this affect the performance?” is a bit
like “How long is a piece of string?”. It can probably only be answered once
you answer “What are you trying to achieve … ?”, but of course you're
going to affect the performance of the system if you're spending several
milliseconds looping around and creating “precise timing” for some I/O
device.


Mats


Mats already covered most of your questions, so I’ll keep it short.

> Do you know how to decide what IRQL level a driver should raise to?

Well, yes: just avoid raising IRQL if possible. Or go to
DISPATCH_LEVEL, which only forbids thread scheduling. If some short
task has to be done quickly and uninterrupted, go to (CLOCK2_LEVEL -
1). That will allow the timer interrupt through and block almost
everything else. But whenever you do so, you should think of that piece
of code as if it were an interrupt handler. Keep it as short as possible.
(If CLOCK_LEVEL still introduces too many delays, use POWER_LEVEL or
HIGH_LEVEL.)
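
To make that concrete, here is a rough sketch of a short, bounded poll done
at DISPATCH_LEVEL. The status port, the 0x80 “ready” bit and the ~50 us
budget are invented for the example; the point is only the shape: raise, do
a strictly bounded wait, lower again.

#include <ntddk.h>

BOOLEAN MyWaitForReadyAtDispatch(PUCHAR StatusPort)
{
    KIRQL   oldIrql;
    ULONG   i;
    BOOLEAN ready = FALSE;

    KeRaiseIrql(DISPATCH_LEVEL, &oldIrql);   /* no thread can be scheduled in */

    for (i = 0; i < 50; i++) {               /* bounded: at most ~50 us */
        if (READ_PORT_UCHAR(StatusPort) & 0x80) {
            ready = TRUE;
            break;
        }
        KeStallExecutionProcessor(1);        /* busy-wait one microsecond */
    }

    KeLowerIrql(oldIrql);
    return ready;                            /* caller decides what to do on timeout */
}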

> Does this impact system performance?

Well, that depends. For example, the original Windows ATAPI driver
spends huge amounts of time in an interrupt handler when using PIO
mode. Usually it speeds things up a little (because disk I/O is done
faster that way), but it can “hurt” other drivers. For example, it
messes up sound output on some systems (mine included; the affected
soundcard uses 15 msec DMA buffers - that's already quite a lot these
days). And it will affect other disk I/O that may use DMA, because
when one transfer completes, the next one can only be started when
there is CPU time available, and that might only be after the ATAPI
driver is done polling the device in the ISR.

> Is there any way to improve it?

The only thing I would recommend is to use “well behaved” hardware.
Choose Intel, AMD, Adaptec, Creative, … over VIA, SiS, … . Other
than that there is not much one can do, except of course disabling
“bad” devices. Besides, even for interrupt-driven devices there is
very little one can do.

Oh yes, I was assuming you were asking just out of general interest, so if
you have some specific task/device in mind, let us know - maybe we can
help.

Regards,

Paul Groke


Alan,

I think you can enclose your timing-dependent protocol code within CLI
and STI.

For example, while opening the device you may want to do some handshaking
with the hardware device; then in your driver's open routine you have

CLI at the beginning and STI at the end.

mydriverOpen(…)
{
    _disable();    /* CLI - mask interrupts on this processor */

    /* your handshake code */

    _enable();     /* STI - re-enable interrupts */
}

But this will have side effects on your system time.

Cheers
Kiran


Thanks, Mats and Paul:

There are just some old concepts from my programming experience.

I'm still a little bit curious about “DISPATCH_LEVEL, which only forbids
thread-scheduling”: I think the dispatcher runs at DPC level, so if some
higher-level H/W interrupt occurs, how can the O.S. switch from the current
DPC-level thread to the H/W's ISR? (Because the dispatcher is at DPC level
and the ISR is at DIRQL.)

Best Regards,

Alan

> still a little bit curious, “DISPATCH_LEVEL” which only forbids
> thread-scheduling, because I think dispatcher runs at DPC level, so if
> some higher level H/W interrupt occurs, how can O.S. switch current
> DPC level thread to H/W's ISR? (because dispatcher is at DPC level,
> ISR is at DIRQL)

There is no thread-scheduling for interrupts; there is only a context
switch that's performed by the CPU itself, depending on the kind of
interrupt handler used (trap gate, interrupt gate, …). I'm not
really an expert when it comes to the i386's built-in threading
capabilities, but usually you don't have to know exactly what is done when
an interrupt fires, just that it works and that IRQL is respected.
There can be some minor delays even when the current IRQL >= DIRQL of an
interrupt that fires, but aside from some rare situations those delays
shouldn't hurt much, and there is also not much one could do while
“sticking to the rules”.

Regards,

Paul Groke

Ok, here’s some more info on the subject (not that I know THAT much)…

  1. As Paul says, interrupts are handled by the CPU directly, but also by the
    NT kernel in some respect. To know what's going on (in general terms),
    almost all OS's will have a tiny bit of code that does something like:

    save registers
    save the previous ISR pointer and set it to an area for this particular interrupt
    call the ISR routine as specified
    restore the previous ISR pointer
    restore registers
    IRET

On the i386, it is possible to get a lot of this work done by the core of the
processor (via a TASK GATE interrupt), but I'm not aware of any modern OS
that uses this for anything other than “panic mode”, for example when an
unexpected fault happened during the processing of a previous fault (double
fault) or the stack has gone kaput.

  2. The time consumed by an interrupt is indeterminate. Interrupt
    handlers in Windows are not strictly regulated, and an interrupt can take
    ANY amount of time to “do its job”. So, in theory, an interrupt handler may
    do this:
    see that NUM-LOCK has been pressed.
    read the status of the Num-Lock LED.
    xor the NUM-LOCK status.
    write the status of the Num-Lock LED.

That process takes about 4.2 milliseconds. I know that the standard Windows
keyboard driver DOES NOT do this, but in the BIOS of a standard PC, the
keyboard interrupt handler WILL do this. [I actually traced it on an ICE once, because I was working on a keyboard emulation project.]

Either way, the above approach to Num-Lock handling would be perfectly
acceptable to the Windows operating system (although perhaps not
recommended). When I was at the Windows driver developers conference in
November, they said that they plan to introduce a standard where ISRs are
only allowed so many microseconds - I think it was 50 or 100 us in the ISR
and so much in the DPC.

So if you have really strict timing criteria, you need to raise to an IRQL
that disables the interrupts too. But this is of course something you can
only do for a short period of time, at least once the above regulations on
Windows ISR timing come in, because if you prevent ISRs from happening for
more than 50 or 100 us, you're again breaking the rules. I would say that
it's a very bad design on the hardware side if it requires timing that
strict (and it probably will not work on some systems, because there is
always a chance of an NMI or SMI, which cannot trivially be disabled by the
driver).
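
If you do go down that road, it may be worth measuring how long you
actually keep interrupts blocked, so you can tell whether you stay inside a
budget like the 50-100 us mentioned above. A minimal sketch (the 100 us
budget and the function name are just placeholders for the example):

#include <ntddk.h>

VOID MyTimedCriticalSection(VOID)
{
    LARGE_INTEGER freq, start, end;
    LONGLONG      elapsedUs;
    KIRQL         oldIrql;

    start = KeQueryPerformanceCounter(&freq);
    KeRaiseIrql(HIGH_LEVEL, &oldIrql);

    /* ... timing-critical hardware access goes here ... */

    KeLowerIrql(oldIrql);
    end = KeQueryPerformanceCounter(NULL);

    /* Convert ticks to microseconds and check against our own budget. */
    elapsedUs = ((end.QuadPart - start.QuadPart) * 1000000) / freq.QuadPart;
    if (elapsedUs > 100) {
        KdPrint(("critical section took %I64d us - too long\n", elapsedUs));
    }
}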


Mats


> 1. As Paul says, interrupts are handled by the CPU directly, but also
> by the NT kernel in some respect. To know what's going on (in general
> terms), almost all OS's will have a tiny bit of code that does
> something like:

I wanted to point out that the processor handles the part where the
current task is being interrupted. Of course, in Windows and other
OSes there is OS code that does some things before handing over
control to a non-OS (driver, …) ISR.

> So if you have really strict criteria for timing, you need to set an
> IRQL that disables the interrupts too.

I think when one needs absolutely strict timing one should also CLI/STI,
because at least one current Windows HAL does not disable any
interrupts when raising IRQL - it just checks the IRQL when the interrupt
fires and “queues” the interrupt for later processing.
Something like:

IRQL is raised => store the new IRQL value
INTx fires => the OS's interrupt handler is called
the OS's interrupt handler checks the current IRQL vs. the DIRQL of the interrupt
the OS “queues” the interrupt that just fired
the OS returns control to whatever task was running before

This makes sense as an optimization for better general system performance,
since the (A)PIC doesn't have to be accessed every time IRQL is raised,
but it can be kind of disturbing when IRQL is raised to HIGH_LEVEL and
short delays of some microseconds still occur.
Therefore one would have to disable interrupts with CLI/STI.
(Reprogramming the (A)PIC should be out of the question, since it takes
much longer and I'd consider it a major hack.)
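
Just to illustrate the idea - this is an invented model of such a “lazy
IRQL” scheme, not actual HAL source, and all the names are made up: raising
IRQL only updates a software value, and the comparison against the
interrupt's DIRQL happens in the OS's handler when the interrupt actually
fires, which is why a few microseconds can still leak through even at
HIGH_LEVEL.

#include <ntddk.h>

static KIRQL g_CurrentIrql = PASSIVE_LEVEL;   /* per-processor in reality */
static ULONG g_PendingMask = 0;               /* interrupts seen but deferred */

VOID FakeRaiseIrql(KIRQL NewIrql, PKIRQL OldIrql)
{
    *OldIrql = g_CurrentIrql;
    g_CurrentIrql = NewIrql;                  /* note: the (A)PIC is not touched */
}

VOID FakeInterruptDispatch(ULONG Vector, KIRQL Dirql)
{
    if (Dirql <= g_CurrentIrql) {
        g_PendingMask |= (1UL << Vector);     /* too low right now: queue it,      */
        return;                               /* but this check already cost a few */
    }                                         /* microseconds of interruption      */
    /* otherwise the registered ISR would be called here */
}

VOID FakeLowerIrql(KIRQL NewIrql)
{
    g_CurrentIrql = NewIrql;
    /* deliver any queued interrupts whose DIRQL is now above g_CurrentIrql */
}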

Regards,

Paul Groke

Paul & others,
I suggest you search the archives of this list for answers and
articles from ‘Jake Oshins’. He gave some valuable insights into the
IRQ handling of the OS. The other good keyword to search for in the
archive is ‘DMI’.

CLI/STI will not help you. If you have real-time constraints, then use
an embedded processor.

Norbert.

“Consciousness: that annoying time between naps.”
---- snip ----

> November, they said that they plan to introduce a standard where ISR's are
> only allowed so many microseconds, I think it was 50 or 100 us in ISR and so
> much in DPC.

This will be possible only when UARTs and PIO IDE die out.

Maxim Shatskih, Windows DDK MVP
StorageCraft Corporation
xxxxx@storagecraft.com
http://www.storagecraft.com