Are callbacks guaranteed to run consecutively?

> the 386 was issued in 1985.

Although the protected-mode x86 architecture is, indeed, based upon the 386 (and the real-mode one right on the 8086), it does not mean that all modern x86 processors are still the same good old 386. As far as our discussion about the APIC is concerned, it is just 10-year-old technology - IIRC, support for the APIC turned up only on the Pentium Pro. What you propose is pretty much moving back to the PIC, i.e. back to the architecture where interrupt priority is implied by the IRQ - you want to remove all the flexibility that the APIC offers. This is why I said that the thing you propose is a huge step backwards (BTW, you can do it purely in software - just disable the APIC via the IA32_APIC_BASE MSR, and you will be unable to make any use of it until CPU reset…)
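
In case anyone is curious, that software-only disabling amounts to a couple of lines - a minimal sketch using the MSVC/WDK MSR intrinsics (the function name is mine, it must of course run at ring 0, and you should not try it on a machine you care about):

    /* Minimal sketch of the software-only disabling described above,
     * via the MSVC/WDK MSR intrinsics. Per the discussion, once the
     * global-enable bit is cleared the APIC stays unusable until the
     * CPU is reset. The function name is mine. */
    #include <intrin.h>

    #define IA32_APIC_BASE_MSR  0x1B
    #define APIC_GLOBAL_ENABLE  (1ULL << 11)   /* bit 11 of IA32_APIC_BASE */

    void DisableLocalApic(void)
    {
        unsigned __int64 apicBase = __readmsr(IA32_APIC_BASE_MSR);
        __writemsr(IA32_APIC_BASE_MSR, apicBase & ~APIC_GLOBAL_ENABLE);
    }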

Anton Bassov

> We have reached a stage in which vastly superior kernel developers of the
> new age have risen thus far above that they provoke nothing but jealousy
> from the tarnished MVPs and their flock. Coincidentally it’s exactly the
> right time of year, so I suggest you should just go claim and proclaim your
> awards with ablution. Their tarnishment, the deficit of their pageantry,
> well … “as long as they know for them it’s just dance, smile or quit”.
> Happy new year guys.

Absolutely pointless, unmotivated and totally stupid attack …

Anton Bassov

The APIC is incidental to this thread. Shared-memory multiprocessor
technology predates the APIC by quite a few years. The PIC was a
simple-minded device from the hobbyist era, but the APIC as a technology
doesn’t offer much more than your typical 60’s or 70’s mainframe already had.

Machines today are big and fast, but their architecture is old. Sorry,
Anton, I for myself do not believe that our machines today are that much
better than what they used to be 30 years ago. And the age tag on our OS’s
matches. The machines are bigger and faster, but the credit goes to the
solid-state guys and to the electronic engineers! Except for Object
Orientation, we’re still programming the way we did back in the 70s.

The more things change, the more they stay the same…

Alberto.


> I for myself do not believe that our machines today are that much better
> than what they used to be 30 years ago. And the age tag on our OS’s
> matches. The machines are bigger and faster, but the credit goes to the
> solid-state guys and to the electronic engineers!

From a purely conceptual point of view, I fully agree with you here - indeed, as has already been mentioned earlier in this thread, all the OS concepts that we use today seem to have originated in the 60s-70s, so you are very unlikely to invent something that is basically new and unheard of in computer science. The thing is, advances in electronics simply made it possible for hardware and OS designers to bring concepts that in the not-so-distant past could be implemented only on a mainframe to the world of PCs and even more primitive devices. Let’s face it - when it comes to computing power, a modern mobile phone beats an early-70s mainframe hands down…

However, this has nothing to do with our discussion of interrupt queuing. As follows from the discussion of splx() on early UNICes earlier in this thread, OS software still had a way to disable delivery of interrupts below a certain priority, i.e. some logical equivalent of the TPR was still present on those hardware architectures. The only reason it did not exist on the PC back then is that the state of micro-electronics simply did not allow it at the time - otherwise, the 386 would undoubtedly have provided support for an APIC, as well as for logical multiprocessing and 64-bit memory addressing. This is why I say that the thing you propose is a step 30 years back…
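
For those who never met splx(), here is a toy model of the interface - the real 4BSD kernels did this by reprogramming the hardware mask per-machine, so the level names and variables below are only illustrative:

    /* Toy model of the classic UNIX spl interface: splraise() lifts the
     * interrupt priority level and returns the previous one, splx() puts
     * it back. Real implementations programmed the PIC mask or PSW
     * priority field; this just models the idiom. */
    enum { IPL_NONE, IPL_BIO, IPL_TTY, IPL_CLOCK, IPL_HIGH };

    static volatile int ipl = IPL_NONE;   /* current interrupt priority level */

    static int splraise(int level)        /* returns the previous level */
    {
        int old = ipl;
        if (level > ipl)
            ipl = level;                  /* interrupts at or below ipl are held off */
        return old;
    }

    static void splx(int saved)           /* restore the saved level */
    {
        ipl = saved;
    }

    /* typical usage:
     *     int s = splraise(IPL_TTY);
     *     ... touch data shared with the tty interrupt handler ...
     *     splx(s);
     */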

Anton Bassov

Attack? This is not an attack. But then, there is no need for you or anyone
else to become my friend for me to recognize their superiority. Generally
speaking, the nasty ones are often the people with something interesting to say.

/Daniel

wrote in message news:xxxxx@ntdev…

> …unmotivated …attack …
>
> Anton Bassov
>

> the nasty ones are often the people with something interesting to say

Well, judging from your previous post, you are an exception to the above “rule” - although you did not say anything particularly interesting in it, it was quite nasty. You referred to absolutely all MVPs in derogatory terms, based solely upon the fact that they hold MVP status. Although I quite often disagree with what MVPs say (Don is the one with whom I argue most of all), I still believe that all DDK MVPs, without exception, are extremely knowledgeable professionals with areas of expertise that go well beyond the Windows kernel. Furthermore, if you read this NG on a regular basis, you would have noticed that some MVPs tend to express very independent views that often contradict the MSFT position. BTW, Mark Russinovich used to be an MVP before he joined MSFT. Do you really think that Mark Russinovich did not contribute anything to the community???

Anton Bassov

Hi,

I guess we should not be surprised that little has changed; the development
of mass computing has to be evolutionary, not revolutionary. The slow
incremental updates to hardware and OSes have all been tentative, with a
view to backward compatibility. History has shown that, most times a bold
change has been attempted, it has failed.

I am frequently sarcastic about MS, but I think they are big enough to take
it. I wouldn’t want their burden of backwards compatibility every time they
release an OS or update it. I suppose this is why we heard talk of
revolutionary change with Vista, but it seems pretty much the same to me. I
suppose the real shock change here is no longer giving every application and
user unfettered access to corrupt the system. I like this revolution, but it
has hurt by breaking a lot of things, including our methodology for
development.

I only wanted to disagree that OS design is back in the 70’s. This was when
I did my degree and I learnt sensible stuff that has not even made it into
mass market OSes yet, so we have yet to catch up with the state of the
1970’s art. Of course, multiprocessing was the hot topic then, so mass
hardware is beginning to catch up. But even then we learned about degrees of
trust in the OS.

I am still amazed that Windows has only two levels of trust in the kernel,
i.e. none and absolute. If I could influence MS it would be to change this.
Let’s add degrees of trust to the kernel programming model. The OS has to
trust itself absolutely, and its core drivers. But 3rd party drivers should
be able to add kernel features with a lower level of trust, and be able to
fail without taking down the OS.

The user side has already started to develop some levels of trust,
especially with the Vista feature mentioned above, but it’s the kernel side
where most installations fall down. I don’t know if it is possible to have a
stable system unless every component is from MS. And even then it may not be
stable, but at least you know who is responsible to fix things!

Happy new year to all…

Mike


“Mike Kemp” wrote in message news:xxxxx@ntdev…
> I am still amazed that Windows has only two levels of trust in the kernel,
> i.e. none and absolute. If I could influence MS it would be to change
> this. Let’s add degrees of trust to the kernel programming model. The OS
> has to trust itself absolutely, and its core drivers. But 3rd party
> drivers should be able to add kernel features with a lower level of trust,
> and be able to fail without taking down the OS.

Mike,

You and I are of similar eras. The problem with the multiple levels of
trust was the ring approach of Multics and some other systems. I found it
very interesting that in 1979 there was a class offered by several of the
leaders of Multics which said that if there was one thing that was done
wrong, it was rings. Unfortunately, the rings-versus-two-levels debate
obscured things like capability-based systems and is still with us today. I
still see people arguing that Windows should use the rings of the x86
design.


Don Burn (MVP, Windows DDK)
Windows 2k/XP/2k3 Filesystem and Driver Consulting
Website: http://www.windrvr.com
Blog: http://msmvps.com/blogs/WinDrvr
Remove StopSpam to reply

On Jan 2, 2008 2:49 PM, Don Burn wrote:
> systems and is still with us today. I still see people arguing that Windows
> should use the rings of the x86 design.

I’ve always been curious as to why only rings 0 and 3 (IIRC) are utilised
by WinNT.

OTOH I do not fully understand what code would run in rings 1 and 2.
Stuff that resides in the kernel usually touches hardware or is there
for performance reasons. E.g. the GDI was moved to the kernel (NT4);
from Mark Russinovich’s discussion of the subject (“Inside Windows
2000”), I got the distinct impression that putting GDI elsewhere hurts
performance (at least on SMP configurations) and for most users would
have no meaningful stability benefits (if the screen goes blank,
you’re pretty much fubar in any case).

For me personally, the biggest culprit BSOD-wise so far has been
drivers from Creative Labs. I’d love to run these outside the kernel,
but seeing as they touch my hardware they would still likely cause
problems, no? I also have problems with nVidia’s ethernet drivers (for
the nForce Pro chipset). Again… Too close to the hardware. OK, they
might not infect the kernel from ring 1, but they could still freeze the
hardware. These drivers simply do not belong on any system, regardless
of which ring they’re put into. :P

(btw, I’m mostly asking – I’m not claiming my assertions are 100% accurate)


Rune

Only two rings were used since Windows NT ran on a number of CPUs, including
ones that did not have rings.

Much of the challenge of multiple protection levels has been the cost of
crossing the boundary. OS designers spend a lot of time making things fast
(consider the spin-lock arguments in this thread), and if you then pay a
major cost to call into the next level of trust to acquire such a lock, you
are hurting. IIRC the GDI move was to increase performance.


Don Burn (MVP, Windows DDK)
Windows 2k/XP/2k3 Filesystem and Driver Consulting
Website: http://www.windrvr.com
Blog: http://msmvps.com/blogs/WinDrvr
Remove StopSpam to reply


Mr. Burn is, as usual, correct.

Instead of using MORE hardware protection (as a previous poster advocated) one ongoing trend is to use LESS. Not just doing away with the use of rings, but actually doing away with the whole idea of hardware process isolation.

See: http://en.wikipedia.org/wiki/Singularity_(operating_system) and references therein, for a project in which I am personally particularly interested. As an interesting aside: You write the drivers for this OS in C#.

Peter
OSR

> Unfortunately, the rings-versus-two-levels debate obscured things like
> capability-based systems and is still with us today. I still see people
> arguing that Windows should use the rings of the x86 design.

Rings make the OS unportable since not all CPUs support them.


Maxim Shatskih, Windows DDK MVP
StorageCraft Corporation
xxxxx@storagecraft.com
http://www.storagecraft.com

> Only two rings were used since Windows NT ran on a number of CPUs,
> including ones that did not have rings.

This is not only a question of rings. When it comes to page-level protection, x86 does not make any distinction between rings 0, 1 and 2 - the only thing it allows is marking pages as supervisor-only, and a supervisor-only page is accessible to all of rings 0-2. In practice this means that, as long as ring-1/ring-2 code segments are mapped into the same 4G address space with ring-0 code, ring-1/ring-2 code will be able to access memory that is supposed to be accessible only by ring-0 code. Therefore, in order to solve this problem, the OS would have to take full advantage of segmentation, i.e. provide different base addresses and limits for the code, data and stack segments of each ring.
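
Just to make the segmentation alternative concrete, here is a sketch of what building such a descriptor involves - field layout as per the IA-32 manuals, the helper name being mine:

    /* Sketch: an x86 GDT data-segment descriptor whose DPL and limit
     * could confine, say, ring-1 code to a sub-range of the address
     * space. Field layout per the IA-32 manuals; helper name is mine. */
    #include <stdint.h>

    static uint64_t MakeDataDescriptor(uint32_t base, uint32_t limit, int dpl)
    {
        uint64_t d = 0;
        d |= (uint64_t)(limit & 0xFFFF);              /* limit 15:0        */
        d |= (uint64_t)(base & 0xFFFFFF) << 16;       /* base 23:0         */
        d |= 0x2ULL << 40;                            /* type: data, R/W   */
        d |= 0x1ULL << 44;                            /* S: code/data      */
        d |= (uint64_t)(dpl & 3) << 45;               /* DPL: ring 0-3     */
        d |= 0x1ULL << 47;                            /* P: present        */
        d |= (uint64_t)((limit >> 16) & 0xF) << 48;   /* limit 19:16       */
        d |= 0x1ULL << 54;                            /* D/B: 32-bit       */
        d |= 0x1ULL << 55;                            /* G: 4K granularity */
        d |= (uint64_t)((base >> 24) & 0xFF) << 56;   /* base 31:24        */
        return d;
    }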

However, not all processors provide support for segmentation. It proved to be so worthless that even x64 almost completely abandoned it - when it runs in 64-bit mode, it ignores the base addresses and limits of all segments apart from FS and GS (i.e. the ones that are actually used by Windows and Linux respectively). Therefore, an OS that relies upon multiple-ring protection is going to be unportable even across x86 and x64, let alone other CPU architectures…

Anton Bassov

Are the chip makers and the OS makers talking to each other? From this it
sounds like there is a little ad hoc interaction at best.

I think most of us who get near the hardware know that hardware can help us.

Minimal hardware support for an OS might be (a) virtual-to-real address
(incl. i/o) mapping with boundaries, so the OS can compartmentalise each
module, and (b) preventing any config changes without the OS vetting
them.

Failing this, allowing only managed software to run, so that the compiler or
interpreter vets every operation, would be equally secure. (And should be
most flexible, if the chips are fast enough!)

Surely with this you could have properly managed levels of trust, as many as
you like. The basic OS can do anything, every other module can be granted
the minimum rights it needs. My USB driver does not need to reformat the
system disk so should not be allowed to.

But I’m sure the OS makers know a lot more about this than I do, I guess
we’re just waiting for the moment when change becomes commercially
desirable. As long as we accept the primitive stuff we have now, why should
anyone bother when you can reboot/reinstall?

atb

Mike


Doesn’t make a lot of sense to me to hobble an OS because of portability
considerations that only affect a tiny minority of machines. The i386
architecture was designed around a segmentation model, and it took over 25
years for people to give up on a segmented architecture, only to find out
that had they stuck with it, most of the current issues with security and
viruses would be automatically handled by the hardware.

But now that people don’t see anything but RISC, a lot of good functionality
goes down the tubes, and segmentation is sacrificed in the name of I don’t
really know what. If segmentation were properly used by the OS, mind you, we
wouldn’t have issues with viruses and malware.

And I have no sympathy for the portability argument: if you want a minimalist
CPU, you should be ready to pay the price and to accept that some OS’s might
want to implement premium functionality that requires hardware facilities
that your CPU doesn’t have. If you designed your CPU to run Linpack as fast
as possible, well, so be it, serves you right - but it’s not a good attitude,
as I see it, to demand that the rest of us follow suit and not implement
nice software facilities just because your hardware cannot handle them.

For well over 25 years I have held the opinion that i/o does not belong in
the trusted ring. The way I see it, the ideal operation of an OS would have
a core system at Ring 0 - preferably in the motherboard firmware and
exporting an API to the rest of the world - while kernel-side i/o drivers
should all be implemented in Ring 1. Ring 2 should be reserved for user-side
drivers and for services, and Ring 3 for applications. What, your hardware
can’t handle it? Boo hoo hoo, too bad.

Alberto.


hi,

when I first saw segmented CPUs - I think it was an 8086, in 1979 or 1980 - the segmented architecture really seemed to be a good concept. Memory size at that time was 64 KB, and the maximal size of a segment was 64 KB too. But a little later system memory began to grow, while the segment size did not. And things began to cause real headaches, because one had to design the software around “near calls”, “far calls”, small arrays, large arrays and so on. I think a segmented CPU is only usable if the maximum segment size is as large as the memory in the target machine.
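
To illustrate, the root of the pain was the segment:offset arithmetic - a tiny sketch in portable C (the helper name is mine):

    /* A real-mode "far" pointer is a 16-bit segment plus a 16-bit offset,
     * combined as segment*16 + offset, while a "near" pointer is an offset
     * alone, confined to the current 64 KB segment. */
    #include <stdint.h>

    static uint32_t PhysFromFar(uint16_t seg, uint16_t off)
    {
        return ((uint32_t)seg << 4) + (uint32_t)off;   /* 20-bit, 1 MB space */
    }

    /* e.g. PhysFromFar(0xB800, 0x0000) == 0xB8000 - the CGA text buffer */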

– Reinhard

> later system memory began to grow, while the segment size did not. And
> things began to cause real headaches, because one had to design the
> software around “near calls”, “far calls”, small arrays, large arrays
> and so on.

…MakeProcInstance, SS != DS issue in DLLs and so on.


Maxim Shatskih, Windows DDK MVP
StorageCraft Corporation
xxxxx@storagecraft.com
http://www.storagecraft.com

Maxim S. Shatskih wrote:

> > later system memory began to grow, while the segment size did not. And
> > things began to cause real headaches, because one had to design the
> > software around “near calls”, “far calls”, small arrays, large arrays
> > and so on.
>
> …MakeProcInstance, SS != DS issue in DLLs and so on.

But, as has been pointed out before, this argument is pointing out the
flaws of one particular implementation of segmentation, and one
particular operating system’s handling of that implementation. You
cannot extrapolate from that into a dismissal of the entire segmentation
concept. If you want to dismiss 8086 segmentation, that’s fine, but do
not allow yourself to believe that it was the only possible
implementation of segmentation.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

> Doesn’t make a lot of sense to me to hobble an OS because of portability considerations…

I am afraid this problem is just inevitable on any loosely-coupled architecture, i.e. when hardware and OS come from different manufacturers. From a purely commercial point of view it makes perfect sense to design an OS that is capable of running on any hardware platform, although from the technical one it is certainly better to target some specific hardware, so that you can squeeze every ounce of performance out of the platform. It is understandable that, when it comes to implementation, business would choose the more commercially-viable solution rather than the technically superior one. This is why, no matter how far progress in micro-electronics advances, the PC has absolutely no chance of reaching the levels of efficiency and reliability that tightly-coupled architectures offer…

> it took over 25 years for people to give up on a segmented architecture,
> only to find out that had they stuck with it, most of the current issues
> with security and viruses would be automatically handled by the hardware.

Fully agree with you here. For example, there is no objective need for the Execute Disable bit in the PTE, because any OS that takes full advantage of segmentation is automatically immune to the buffer-overrun attacks that the Execute Disable feature is meant to prevent. Therefore, Intel introduced this feature only because all major OSes rely upon the flat memory model…
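
For reference, the feature in question is a single bit at the top of each 64-bit page-table entry - a minimal sketch (constant per the manuals, helper name mine):

    /* The Execute Disable (NX) bit is bit 63 of an x86-64 page-table
     * entry; when set, instruction fetches from that page fault. */
    #include <stdint.h>

    #define PTE_NX  (1ULL << 63)

    static int IsPageExecutable(uint64_t pte)
    {
        return (pte & PTE_NX) == 0;
    }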

Anton Bassov

Segments that stretch over the full 4 GB of virtual address space were
introduced with the i386, which is already over 20 years old, and which
allowed the “flat” OS architecture that’s used today in both Windows and Linux.

Not to indulge in a personal jab, but I still marvel, after all these years,
at how little people know about the i386 architecture. It’s the exception
rather than the rule to hear someone criticize the i386 architecture from
a position of expert knowledge! I suggest you read the Intel 386 Operating
Systems Writer’s Guide, if you can still find it somewhere - it may be out of
print. In there, they have a few suggestions for operating system structure
which are rather different from both Windows and Unix/Linux. Some of us have
toyed with the idea of coming up with an OS that would exploit the full power
of the architecture, but somehow it never happened.

Alberto.
