function IRQL dependency

> James, did you try that on an x64 system? I expect Verifier to catch this kind of problem on x86, but not on x64.

No, unchecked x86 with the verifier running.

Unfortunately, Verifier IRQL checking is indeed less effective on x64 compared with x86 because, for example, KeRaiseIrql is inlined. Verifier doesn’t have (yet?) a way to hook these inline function calls, and therefore these calls don’t get verified. Here’s the entire implementation of KeRaiseIrql/KfRaiseIrql, from my copy of wdm.h:

KIRQL OldIrql;

OldIrql = KeGetCurrentIrql();

NT_ASSERT(OldIrql <= NewIrql);

WriteCR8(NewIrql);
return OldIrql;

So you would have to build a checked driver to catch this bug, using the assertion above.
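
If you want that particular contract enforced on a free build as well, nothing stops you from wrapping the call yourself. A quick sketch (my own hypothetical helper, not anything from the WDK):

/* Hypothetical wrapper - enforce the documented KeRaiseIrql contract
   even on a free build. MyRaiseIrql is a made-up name. */
FORCEINLINE
VOID
MyRaiseIrql(_In_ KIRQL NewIrql, _Out_ PKIRQL OldIrql)
{
    if (KeGetCurrentIrql() > NewIrql) {
        /* The behaviour the docs describe: bug check if the new IRQL
           is lower than the current one. */
        KeBugCheckEx(IRQL_NOT_GREATER_OR_EQUAL,
                     (ULONG_PTR)NewIrql,
                     (ULONG_PTR)KeGetCurrentIrql(),
                     0, 0);
    }
    KeRaiseIrql(NewIrql, OldIrql);
}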

Ah. That’s useful to know. Under 32-bit 2003 SP2 and above the IRQL is entirely ‘soft’ though (no write to the TPR register via CR8 or otherwise), so inlining isn’t possible anyway. The write to the TPR via CR8 is much faster, but it is not available on Intel CPUs in 32-bit mode (it is available on AMD CPUs in 32-bit mode via a LOCK MOV CR0 instruction, which equates to MOV CR8).
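
For what it’s worth, on x64 the TPR is exposed to the compiler directly - roughly speaking, reading the current IRQL there is just a CR8 read. Illustration only (kernel mode, and __readcr8 is an MSVC intrinsic):

#include <ntddk.h>

KIRQL ReadIrqlFromTpr(VOID)
{
    /* On x64 the current IRQL lives in the Task Priority Register,
       exposed as CR8; CR8 access requires CPL 0. */
    return (KIRQL)__readcr8();
}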

James

> If the contract says you can’t do it, and you do it anyway, is the resulting behaviour really a bug?

I would say “yes, it is”…

I had a feeling you might :)

Just run a checked kernel if you have those concerns. I’m pretty sure a checked build would include all the contract checking you require, with the performance to match.

Once the system has detected some “abnormality” in the KM it has to do something about it in order to ensure that the damage does not get spread further, because otherwise you may end up with long-term damage to user data (and, probably, even to the hardware). This is why Windows normally bugchecks whenever it encounters any “abnormality” in the KM - this is standard Windows behavior in such situations.

My opinion is that a driver that relies on a free build kernel to catch
its mistakes is falling into the ‘epic fail’ category.

That said, there is at least a documentation bug for KeRaiseIrql:

“If the new IRQL is less than the current IRQL, a bug check occurs.”

That bug check doesn’t happen (verifier or not) under 32-bit 2003 SP2, and based on a few posts on this list, it would only happen under 64-bit when the driver making the call was built in checked mode. Based on that alone I can suddenly see a whole lot more value in running a checked kernel… (which I assume would bug check under 32-bit, although I haven’t tested)

If you really feel it is a bug in the behaviour rather than a bug in the documentation then submit it to Microsoft (convincing me or anyone else on this list probably won’t have the desired effect) and have the argument with them, and let me know how that works out for you.

James

> My opinion is that a driver that relies on a free build kernel to catch its mistakes is falling into the ‘epic fail’ category.

Well, this is not just “your opinion” but objectively a complete failure - if the free build catches its mistake it is going to shut down anyway, so the term “relies” hardly applies here. However, if a user is unlucky enough to have a piece of crap like that on his system it does not necessarily mean that he should lose his data/burn his computer/etc., and this is what kernel-level checks are for. They are not meant to be used for telling a driver “you made a mistake - please provide correct arguments” but for protecting the system from long-term damage that may be caused by crappy drivers, so they have to be made only in a few critical places. Ironically, the place we are speaking about happens to fall exactly into this category…

> If you really feel it is a bug in the behaviour rather than a bug in the documentation then submit it to Microsoft

Well, I leave it to you to submit complaints to MSFT about both the implementation and the documentation, but IIRC, if you arbitrarily lower IRQL under 32-bit XP you will get exactly what the doc describes…

Anton Bassov

Not all contracts are enforced all the time. You know better than to harp about that. For a moment, I’ll pretend here that you’re not trolling.

On practically any x86-like 32-bit system, nobody is going to immediately catch writing beyond a heap allocation unless you’re running in a debugging situation (page heap, valgrind, …). Similarly, no general-purpose operating system enables checks in its release-build analog for the ways you can blow your leg off by breaking rules within the same privilege domain.
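
For example, something like this will usually run to completion on a normal build; only a debugging allocator has much chance of flagging it at the point of the write (toy user-mode snippet, obviously):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *p = malloc(16);
    if (p == NULL)
        return 1;

    p[16] = 'x';   /* one byte past the allocation - nothing enforces the
                      "contract" here unless page heap or similar is on */
    printf("still running\n");

    free(p);       /* any damage may only show up here, or never */
    return 0;
}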

Everything has costs to it. You need to think carefully about when it’s worth paying those costs. Checking the IRQL contract every time the dispatcher is entered in non-checked, non-Driver-Verifier scenarios would be expensive - and this is for a bug that should be very easy to catch during development, is within the same privilege domain, sits in a very hot code path, etc.

- S

-----Original Message-----
From: xxxxx@hotmail.com
Sent: Friday, August 14, 2009 16:03
To: Windows System Software Devs Interest List
Subject: RE:[ntdev] function IRQL dependency

> The only problem with this approach is that it “breaks the contract” that a thread running at >DISPATCH_LEVEL cannot be preempted

Elevated IRQL is just a Windows notion of atomic context - by raising IRQL >= DPC level you tell the dispatcher “please don’t perform context switches on this CPU because doing so may have disastrous consequences, until I indicate that it can get safely done”. Consider what can happen if you hold a spinlock or run a DPC routine and some other thread is scheduled on the CPU meanwhile. The dispatcher has no means of knowing that performing a context switch at the moment is unsafe, does it? This is what the concept of elevated IRQL is all about - to let the dispatcher know that it should not perform context switches at the moment. This is just a convention that the system dispatcher is built around.

Although, as we can see, Windows does not seem to enforce this “contract” in some situations, it is obviously just a bug that they will (hopefully) fix…

Anton Bassov



Only verifier-thunked (dllimport) IRQL calls get that checking by default, not IRQL calls in the kernel itself.

- S

-----Original Message-----
From: James Harper
Sent: Friday, August 14, 2009 16:25
To: Windows System Software Devs Interest List
Subject: RE: [ntdev] function IRQL dependency

> > The only problem with this approach is that it “breaks the contract” that a thread running at >DISPATCH_LEVEL cannot be preempted
>
> Elevated IRQL is just a Windows notion of atomic context - by raising IRQL >= DPC level you tell the dispatcher “please don’t perform context switches on this CPU because doing so may have disastrous consequences, until I indicate that it can get safely done”. Consider what can happen if you hold a spinlock or run a DPC routine and some other thread is scheduled on the CPU meanwhile. The dispatcher has no means of knowing that performing a context switch at the moment is unsafe, does it? This is what the concept of elevated IRQL is all about - to let the dispatcher know that it should not perform context switches at the moment. This is just a convention that the system dispatcher is built around.
>
> Although, as we can see, Windows does not seem to enforce this “contract” in some situations, it is obviously just a bug that they will (hopefully) fix…

If the contract says you can’t do it, and you do it anyway, is the
resulting behaviour really a bug? This is kernel space not user space,
so it is up to the caller to check the inputs, not the callee.

That said, the docs for KeRaiseIrql say “If the new IRQL is less than the current IRQL, a bug check occurs”, and it clearly doesn’t under 2K3, even when the verifier is running (just tried it [1]). If the first line in KeDelayExecutionThread was KeRaiseIrql(APC_LEVEL, &old_irql) and KeDelayExecutionThread was called at HIGH_LEVEL, then the current IRQL would be changed to APC_LEVEL and old_irql = HIGH_LEVEL without any fuss at all (aside from the obvious :)

[1] The code I tested this with, which starts at APC_LEVEL is:

KIRQL old_irql1, old_irql2;
KdPrint((" A Irql = %d\n", KeGetCurrentIrql()));
KeRaiseIrql(HIGH_LEVEL, &old_irql1);
KdPrint((" B Irql = %d\n", KeGetCurrentIrql()));
KeRaiseIrql(PASSIVE_LEVEL, &old_irql2);
KdPrint((" C Irql = %d\n", KeGetCurrentIrql()));
KeLowerIrql(old_irql2);
KdPrint((" D Irql = %d\n", KeGetCurrentIrql()));
KeLowerIrql(old_irql1);
KdPrint((" E Irql = %d\n", KeGetCurrentIrql()));

And the output was:

A Irql = 1
B Irql = 31
C Irql = 0
D Irql = 31
E Irql = 1

James



> For a moment, I’ll pretend here that you’re not trolling.

Look - up to this point I tried my best to avoid any criticism of any object of your adoration/worship in any possible way. If I somehow hurt your religious feelings I am really sorry for that - it just did not get into my head that one may be that sensitive…

> Everything has costs to it. You need to think carefully about when it’s worth paying those costs.

I would think in terms of an answer to the following questions: “Is it my last chance to prevent long-term damage to the system? Will I be able to do it at a later stage if a need arises?” Let’s look at my earlier example of a ZwXXX call by a spinlock holder. There is no problem with this call until a page fault is caused, and there is no problem with the page fault until the faulting thread has to get blocked either. Therefore, there is no need to check IRQL either in the service dispatcher or in the page fault handler, because you will be able to do it in the dispatcher if a need arises. However, the dispatcher is already a “last line of defence”, because if you put a spinlock holder to sleep you cannot guarantee that you will be able to avoid a deadlock…
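
Just to make the pattern concrete, this is the sort of (broken) code I have in mind - a made-up sketch, not from any real driver:

/* Hypothetical broken pattern: a Zw call made while holding a spin lock
   (i.e. at DISPATCH_LEVEL). Nothing complains at the call site; trouble
   only starts if the call page-faults and the thread has to block. */
#include <ntifs.h>

KSPIN_LOCK MyLock;   /* assume KeInitializeSpinLock was done elsewhere */

VOID BrokenExample(HANDLE FileHandle, PVOID Buffer, ULONG Length)
{
    KIRQL oldIrql;
    IO_STATUS_BLOCK iosb;

    KeAcquireSpinLock(&MyLock, &oldIrql);    /* now at DISPATCH_LEVEL */

    /* Illegal: ZwWriteFile may touch pageable code/data and block,
       while we hold a spin lock. */
    ZwWriteFile(FileHandle, NULL, NULL, NULL, &iosb,
                Buffer, Length, NULL, NULL);

    KeReleaseSpinLock(&MyLock, oldIrql);
}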

> Checking the IRQL contract every time the dispatcher is entered in non-checked, non-Driver-Verifier scenarios would be expensive,

It depends on how you structure a scheduler…

When the dispatcher is entered by KeXXX functions, IRQL has to get raised to DPC level if it is not already elevated. This part is unavoidable. All you have to do is save the previous IRQL in the PCR, and check it immediately before you proceed to saving the thread context… Does not seem to be that expensive, does it…
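
Roughly what I mean, in pseudo-C (invented names, obviously - not the real dispatcher code):

/* Pseudo-C sketch only. The point is that the previous IRQL is already
   in hand when the dispatcher is entered, so the check is one compare. */
VOID EnterDispatcherFromWait(VOID)
{
    KIRQL previousIrql;

    KeRaiseIrql(DISPATCH_LEVEL, &previousIrql);   /* has to happen anyway */
    SavePreviousIrqlInPcr(previousIrql);          /* hypothetical helper */

    if (previousIrql >= DISPATCH_LEVEL) {
        /* The caller is trying to block at elevated IRQL - this is the
           last chance to refuse before the thread context gets saved. */
        KeBugCheckEx(IRQL_NOT_LESS_OR_EQUAL, 0, 0, 0,
                     (ULONG_PTR)previousIrql);
    }

    /* ... save the thread context, pick the next thread, switch ... */
}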

Anton Bassov

> Not all contracts are enforced all the time.

Yes. When you dig into the docs on the checked builds, specifically
“What the Checked Build Checks”, Microsoft has this to say:

“Great care has been taken to ensure that the Windows operating system
code ordinarily executes with as little overhead as possible. As a
result, the NT-based operating systems implement the policy that all
components running in kernel-mode, including drivers, implicitly “trust”
each other. Thus, parameters that are passed from one kernel-mode
component to another (such as parameters passed on function calls) are
typically subject to minimal validation. The checked build of the
operating system enables many additional parameter validation checks.”

The way I interpret that is that if your driver tries to do the wrong
thing, then the wrong thing is going to happen.

> You know better than to harp about that. For a moment, I’ll pretend here that you’re not trolling.

A troll would be a deliberate attempt to get people’s backs up, so I
don’t think it’s fair to call Anton a troll in this case. I think that
Anton just has a different opinion about how far Windows should go to
protect itself against bad drivers.

My opinion, fwiw[1], is “that’s what backups are for” :)

James

[1] Worth very little in the scheme of things.

> I think that Anton just has a different opinion about how far Windows should go to protect itself against bad drivers.

Well, there are some situations where performing a check just defeats the purpose of the whole exercise, even if it gets done “at the last line of defence” - I don’t even want to argue about it. For example, if you introduce an IRQL check into KeAcquireSpinLockAtDpcLevel(), it will just defeat the purpose of this function - it is designed specifically to avoid dealing with IRQL when it is not necessary, and is meant to be an optimization. Therefore, checking IRQL here just does not make sense, although spinlock acquisition with this function by a caller running at low IRQL may have truly disastrous consequences…
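
For reference, the intended usage looks something like this (a minimal sketch - the lock and the DPC routine are made up):

KSPIN_LOCK MyLock;   /* assume KeInitializeSpinLock was called at init */

VOID MyDpcRoutine(PKDPC Dpc, PVOID Context, PVOID Arg1, PVOID Arg2)
{
    UNREFERENCED_PARAMETER(Dpc);
    UNREFERENCED_PARAMETER(Context);
    UNREFERENCED_PARAMETER(Arg1);
    UNREFERENCED_PARAMETER(Arg2);

    /* A DPC routine already runs at DISPATCH_LEVEL, so the cheap variant
       that skips the raise/lower is legal here. */
    KeAcquireSpinLockAtDpcLevel(&MyLock);
    /* ... touch the data protected by MyLock ... */
    KeReleaseSpinLockFromDpcLevel(&MyLock);
}

Call it from code running below DISPATCH_LEVEL and the lock can be held across preemption - exactly the disastrous case mentioned above - and, as you say, the function by design does not check for it.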

Anton Bassov

Aside from the whole issue of who’s enforcing the contract and should the OS bugcheck… And GRANTED that the OP “did something that’s against the rules and therefore gets whatever behavior they get”…

I’m STILL curious about the OP’s two original questions (not curious enough to look it up in the code, but curious nonetheless): (a) How can the IRQL that the OP prints change from 30 to 29 as shown… and (b) how DOES the OS properly carry on in the case when we’ve context switched a thread at elevated IRQL?

Certainly, we can ignore the issue that we model IRQLs as software interrupts, with the dispatcher running at IRQL DISPATCH_LEVEL - there may certainly be cases where this model isn’t strictly reality (I mean, KiReadyThread and KiSwapThreads are what do the actual scheduling).

But, one would THINK that at some point the OS… or another thread that’s scheduled… would rather quickly fall into a hole due to the unexpectedly high IRQL at which it’s running. And yet, everything appears fine and we return to the errant thread and are still running at IRQL HIGH_LEVEL. Hmmmm…

Perhaps nothing much is happening on this system, and when the errant thread blocks only the idle thread runs??

And what of this change from 30 to 29??

We can debate whether there should be checks in the OS all day long, but I suggest that’s a boring and semi-religious debate.

I’m much more curious about what, architecturally, is happening that allows this to work without (it would appear) any negative consequences.

Peter
OSR

> Aside from the whole issue of who’s enforcing the contract and should the OS bugcheck… And GRANTED that the OP “did something that’s against the rules and therefore gets whatever behavior they get”…
>
> I’m STILL curious about the OP’s two original questions (not curious enough to look it up in the code, but curious nonetheless): (a) How can the IRQL that the OP prints change from 30 to 29 as shown… and (b) how DOES the OS properly carry on in the case when we’ve context switched a thread at elevated IRQL?

If the first thing KeDelayExecutionThread does is KeRaiseIrql to APC_LEVEL, then the IRQL gets lowered from HIGH_LEVEL to APC_LEVEL and things work as expected (any atomicity that the caller expected to have aside).

> And what of this change from 30 to 29??

I just noticed a crashdump running at 29 instead of 30. Haven’t investigated why yet… for a TPR-based IRQL system, 29 and 30 are functionally equivalent anyway, I think (stretching my memory a bit - could be completely wrong).

James

> I’m much more curious about what, architecturally, is happening that allows this to work without (it would appear) any negative consequences.

Apparently, the dispatcher saves the current IRQL somewhere in the ETHREAD (it has to do so in order to ensure that it makes a distinction between passive and APC levels when the thread gets the CPU back), saves the execution context, sets IRQL to DPC level, and proceeds to the idle thread, which the CPU will dispatch until a timer interrupt queues a DPC that selects a new thread to get scheduled. When the thread eventually gets re-scheduled, it will restore the IRQL before returning control to the caller.

Therefore, IRQL does not seem to make any difference here, since the idle thread runs at DPC level anyway. If it ran at the caller’s IRQL the whole thing would deadlock…

This is the very first explanation that gets into one’s head…

If the OP tried to do it from a DPC routine things would, apparently, be very different, but since he does it from a “regular” thread the whole thing works flawlessly…

Anton Bassov

wrote in message news:xxxxx@ntdev…
> …sets IRQL to DPC level, and proceeds to the idle thread, which the CPU will dispatch until a timer interrupt queues a DPC that selects a new thread to get scheduled. When the thread eventually gets re-scheduled, it will restore the IRQL before returning control to the caller

That’s not how things work. You need to stop thinking of the scheduler as a separate unit of execution (which needs to queue DPCs or whatever). Peter Wieland has a good article on his blog which explains better how the scheduler works. The scheduler executes if a timer interrupt occurs, if a page fault is hit, or whenever a thread calls into one of the Ke functions. The scheduler is not executed by a separate unit of execution but in the context of the thread which calls one of the wait (Ke) functions. At that point it obtains the dispatcher lock (hence the ‘raise’ to DISPATCH_LEVEL) and decides what to do with the current thread: add it to the list of waiters for a dispatcher object and switch to another thread, or continue.

As we have already seen in other discussions, the scheduler does not care about IRQLs, but it needs to save and restore the IRQL as well as other register information upon suspending and resuming a thread.
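
Very roughly, in pseudo-C (made-up names, nothing like the actual kernel source), a wait call does something like this in the calling thread’s own context:

/* Pseudo-C sketch only - invented names, just to show the flow. */
NTSTATUS MyWaitForObject(PVOID Object)
{
    KIRQL oldIrql;

    KeRaiseIrql(DISPATCH_LEVEL, &oldIrql);   /* or SYNCH_LEVEL - see the follow-ups */
    AcquireDispatcherLock();                 /* hypothetical */

    if (ObjectIsSignaled(Object)) {          /* hypothetical */
        ReleaseDispatcherLock();
        KeLowerIrql(oldIrql);
        return STATUS_SUCCESS;               /* no switch needed */
    }

    /* Still in the caller's context: put this thread on the object's
       waiter list, pick the next ready thread and swap to it. */
    AddCurrentThreadToWaitList(Object);      /* hypothetical */
    SwapToNextReadyThread();                 /* returns when we are resumed */

    ReleaseDispatcherLock();
    KeLowerIrql(oldIrql);                    /* the caller's IRQL comes back */
    return STATUS_SUCCESS;
}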

BTW what is true for KeDelayExecutionThread is true for all the wait
functions, you can do them at IRQL>=DISPATCH_LEVEL provided there are no
sanity checks done by the verifiers.

//Daniel

> That’s not how things work.

Actually, IIRC, this is exactly how they do work. More on it below…

> You need to stop thinking of the scheduler as a separate unit of execution (which needs to queue DPCs or whatever).

“Scheduler” is a generic term describing a set of routines that deal with thread scheduling. There are different scheduler-related tasks that may get executed in different contexts, and selecting a thread to run on a CPU is just one of them. Some of these tasks are implemented in KeXXX functions, but, IIRC, the _particular_ task of scheduling a new thread is done only in a DPC routine…

When a thread voluntarily yields the CPU with any of the KeXXX functions, a new thread does not immediately get scheduled on the CPU. Instead, execution proceeds to an “idle thread” that processes low-priority DPCs, because the CPU has nothing else to do until the next clock tick… Every CPU has its own idle thread, and the collection of these threads is known as the “Idle” process, with a PID of 0. When a timer interrupt occurs it queues a lowest-priority DPC that will actually select a new “non-idle” thread to run on the CPU.

> The scheduler is not executed by a separate unit of execution but in the context of the thread which calls one of the wait (Ke) functions.

This is just one possible scenario in which the scheduler gets entered. In addition to that, it can get entered from a DPC routine (for example, when KeSetEvent() or even KeWaitXXX with a zero timeout is called). It is understandable that in the latter scenario a context switch cannot occur, but, as you must have already understood, switching contexts is just one of the scheduler’s tasks…

> At that point it obtains the dispatcher lock (hence the ‘raise’ to DISPATCH_LEVEL) and decides what to do with the current thread: add it to the list of waiters for a dispatcher object and switch to another thread, or continue.

Actually, dispatcher lock acquisition raises IRQL to SYNCH_LEVEL and not to DISPATCH_LEVEL. On a UP system these two are the same, but on an MP one SYNCH_LEVEL is above all DIRQLs. Please note that only relatively small parts of the scheduler’s code are executed under the protection of the dispatcher spinlock, although practically all of its code runs at DISPATCH_LEVEL to ensure that the dispatcher does not re-enter itself…

In any case, if it decides to yield execution, it is not going to proceed to the new thread straight away; instead, it waits for the next clock tick, processing an “idle” thread meanwhile…

Anton Bassov

wrote in message news:xxxxx@ntdev…
> In any case, if it decides to yield execution, it is not going to proceed to the new thread straight away; instead, it waits for the next clock tick, processing an “idle” thread meanwhile…

Yeah, so that would be a waste of time and not how Windows works. The context switch takes place immediately and is not held until the next clock tick. Fortunately we do not need to guess about this but can look these things up in books such as Russinovich. Doing this, it appears you were right about the dispatcher lock raising to SYNCH_LEVEL, but on earlier kernels SYNCH_LEVEL has a value of 2, which equals DISPATCH_LEVEL.

//Daniel

> Yeah, so that would be a waste of time and not how Windows works.

Yes, I know it seems unreasonable at first glance…

The thing is, the OS has no concept of time in between clock ticks. Therefore, if you schedule a new thread in between ticks, the CPU may get taken away from it within a couple of instructions when the timer interrupt occurs, but from the scheduler’s perspective the thread has used up an entire 15 ms of running - the OS just has no means of knowing otherwise. Taking into account that the default thread quantum on a workstation is just 2 clock ticks under Windows, this “inaccuracy” is just too significant to afford - it would result in inaccurate recording of how threads use their time, which, in turn, would result in errors in dynamically adjusting thread priorities.

This is why they do it upon the timer interrupt. In order to avoid wasting time, they dispatch low-priority DPCs meanwhile…

> on earlier kernels, SYNCH_LEVEL has a value of 2, which equals DISPATCH_LEVEL

Not on “earlier” kernels but on UP ones - this is a question of MP support…

Anton Bassov

> When a thread voluntarily yields the CPU with any of the KeXXX functions, a new thread does not immediately get scheduled on the CPU.

Wrong.

If there are threads ready for execution, then the next one is picked and switched to immediately.

No sane OS designer will allow the CPU to be idle if there are ready threads just because of some clock ticks; this is just an idiotic design which is not used.

> Instead, execution proceeds to an “idle thread” that processes low-priority DPCs, because the CPU has nothing else to do until the next clock tick…

Wrong.

The importance of clock ticks for the scheduler is overestimated a lot.

Actually, they play a role only if:
a) a KTIMER is signaled by the tick, waking some threads, and
b) quantum end - the thread was running on the CPU for more than 4 (or 8?) clock ticks. This latter situation occurs only at 100% CPU load and is only processed to avoid starving other threads.

These are the only 2 things where timers are important for scheduling.

> and the collection of these threads is known as the “Idle” process, with a PID of 0. When a timer interrupt occurs it queues a lowest-priority DPC that will actually select a new “non-idle” thread to run on the CPU.

Not so. The idle thread is never executed if there are runnable threads.

> …of the dispatcher spinlock, although practically all of its code runs at DISPATCH_LEVEL to ensure that the dispatcher does not re-enter itself…

Funny non-guarantee. On MP, only a spinlock can provide such a guarantee; just raising IRQL cannot. So, it looks like the dispatcher either runs at the caller’s IRQL or with the dispatcher lock held.

> In any case, if it decides to yield execution, it is not going to proceed to the new thread straight away; instead, it waits for the next clock tick, processing an “idle” thread meanwhile…

100% wrong.

The idle thread is only executed if there are no runnable threads in the system at all (at least none with an affinity mask which permits this CPU).


Maxim S. Shatskih
Windows DDK MVP
xxxxx@storagecraft.com
http://www.storagecraft.com

> The thing is, the OS has no concept of time in between clock ticks.

Note that the scheduler does not need the concept of time :) - just plain and simple.

It only needs the concept of time for 2 things: a) to signal KTIMERs and b) to process the rare special situation of the quantum end.

In all other cases, the scheduler is not time-driven.

> Therefore, if you schedule a new thread in between ticks,

…then it is executed immediately.

> then the CPU may get taken away from it within a couple of instructions when the timer interrupt occurs,

Who cares?

> but from the scheduler’s perspective the thread has used up an entire 15 ms of running

Who cares?

> - the OS just has no means of knowing it. Taking into account that the default thread quantum on a workstation is just 2 clock ticks under Windows,

The quantum is only an upper limit. No one really desires to allow the thread to run for the whole huge 8-tick period.

> this “inaccuracy” is just too significant to afford

Absolutely insignificant. It only introduces some short-run inaccuracy into GetThreadTimes, but who cares? In the long run, GetThreadTimes is accurate.

> - it would result in inaccurate recording of how threads use their time,

Who cares?

> which, in turn, would result in errors in dynamically adjusting thread priorities.

There is no such thing in Windows :)

There are only:

a) The Boost parameter to KeSetEvent and IoCompleteRequest - the latter is used in KeSetEvent(Irp->UserEvent), and only for it.

b) Starvation prevention - forcibly raising the priority of a low-priority thread if it has been in the ready queue for a very long time (1 second or so) and was not executed.

Windows does not adjust the thread priorities dynamically besides these points.

Some UNIXen, IIRC, used the heuristic “if the thread yielded by itself then it is OK, but if it was preempted at quantum end then let’s lower its priority a bit”. Windows is not like that; instead, it relies on boosts in KeSetEvent.


Maxim S. Shatskih
Windows DDK MVP
xxxxx@storagecraft.com
http://www.storagecraft.com

> If there are threads ready for execution, then the next one is picked and switched to immediately.
>
> No sane OS designer will allow the CPU to be idle if there are ready threads just because of some clock ticks; this is just an idiotic design which is not used.

Oh, I see…

What about outstanding DPCs in a queue??? They have to be executed anyway, and, probably, at a less convenient time if it is not done now. Furthermore, don’t forget that a DPC in a queue may signal an event that will affect the choice of a thread to run.

Therefore, flushing the DPC queue upon yielding the CPU seems to be quite a reasonable approach, and this is what the idle thread is good for. Please note that “idle” is just a term - it does not mean that the CPU issues HLT and stops dead in its tracks until the next interrupt…

> Note that the scheduler does not need the concept of time :) - just plain and simple.

This is a true “masterpiece” - no more comments needed…

> There are only: a) The Boost parameter to KeSetEvent and IoCompleteRequest - the latter is used in KeSetEvent(Irp->UserEvent), and only for it. b) Starvation prevention - forcibly raising the priority of a low-priority thread if it has been in the ready queue for a very long time (1 second or so) and was not executed. Windows does not adjust the thread priorities dynamically besides these points.

Really??? What about the foreground window ending its wait??? What about a GUI thread after wake-up?? Does it have to wait until a process in the background (probably with no user interaction at all) works out its quantum??? What about ending a wait on executive events??? There are 5 reasons why a thread may get a priority boost, but you somehow mention only two of them.

> The quantum is only an upper limit. No one really desires to allow the thread to run for the whole huge 8-tick period.

But if it yields the CPU voluntarily its remaining quantum gets saved, and the thread gets a subsequent short-term priority boost. However, since you “forgot” about this part, no wonder you overlooked the importance of quantum and timing…

Anton Bassov

>> Note that the scheduler does not need the concept of time :) - just plain and simple.
>
> This is a true “masterpiece” - no more comments needed…

Certainly one can imagine a scheduler that acted only on thread yields
or interrupt state change and did not concern itself with clock ticks.
It might not be the best scheduler, but I think Maxim has a point
here.

Mark Roddy

Actually there are a number of old real-time OSes which did just that. So before you slam the concept, understand the goals of the system; it makes sense in some cases.


Don Burn (MVP, Windows DDK)
Windows Filesystem and Driver Consulting
Website: http://www.windrvr.com
Blog: http://msmvps.com/blogs/WinDrvr
