Are callbacks guaranteed to run consecutively?

Portability is more important, as is POSIX compliance, and I have a strong
suspicion that POSIX contradicts segments.


Maxim Shatskih, Windows DDK MVP
StorageCraft Corporation
xxxxx@storagecraft.com
http://www.storagecraft.com

“Alberto Moreira” wrote in message news:xxxxx@ntdev…
> Segments that stretched over the full 4Gb of virtual addressing space were
> introduced with the i386. Which is already 30 years old. Which allowed the
> “flat” OS architecture that’s used today in both Windows and Linux.
>
> Not to indulge in a personal jab, but I still marvel, after all these years,
> how little people know about the i386 architecture. It’s the exception
> rather than the rule to hear someone criticizing the i386 architecture from
> a position of expert knowledge! I suggest you read the Intel 386 Operating
> Systems Writer Guide, if you can still find it somewhere - it may be out of
> print. In there, they have a few suggestions for Operating Systems
> structure, which are rather different from both Windows and Unix/Linux. Some
> of us have toyed with coming up with an OS that would exploit the full power
> of the architecture, but somehow it never happened.
>
>
> Alberto.
>
>
> ----- Original Message -----
> From:
> To: “Windows System Software Devs Interest List”
> Sent: Friday, January 04, 2008 9:26 AM
> Subject: RE:[ntdev] Are callbacks guaranteed to run consecutively?
>
>
> > hi,
> >
> > when I first saw segmented cpus - I think it was an 8080 in 1979 or 1980,
> > the segmented architecture really seemed to be a good concept. Memory size
> > at that time was 64 KB, the maximal size of a segment was 64 KB too. But a
> > little time later system memory began to grow, but the segment size not.
> > And things began to really cost headaches because one had to design the
> > software for “near calls”, “far calls”, small arrays, large arrays and so
> > on. I think a segmented cpu is only usable if the maximum segment size is
> > as large as the memory in the target machine.
> >
> > – Reinhard
> >
> > —
> > NTDEV is sponsored by OSR
> >
> > For our schedule of WDF, WDM, debugging and other seminars visit:
> > http://www.osr.com/seminars
> >
> > To unsubscribe, visit the List Server section of OSR Online at
> > http://www.osronline.com/page.cfm?name=ListServer
>
>

Also do not forget that segment register loads are slow, since the
descriptor must be validated. This is the second cause - after portability - of
abandoning segments.

The 8086’s - pre-286 - segments were just an ugly hack, and most of the ugliness
of the MS-DOS architecture, with all those EMS/XMS/VCPI hacks, is the direct
logical consequence of segmented addressing in the 8086.


Maxim Shatskih, Windows DDK MVP
StorageCraft Corporation
xxxxx@storagecraft.com
http://www.storagecraft.com


> > when I first saw segmented cpus - I think it was an 8080 in 1979 or 1980,
> > the segmented architecture really seemed to be a good concept. Memory size
> > at that time was 64 KB, the maximal size of a segment was 64 KB too. But a
> > little time later system memory began to grow, but the segment size not.
> > And things began to really cost headaches because one had to design the
> > software for “near calls”, “far calls”, small arrays, large arrays and so
> > on. I think a segmented cpu is only usable if the maximum segment size is
> > as large as the memory in the target machine.
> >
> > – Reinhard

Segmented address space has never been invented on the initial cut at *any*
usable CPU architecture as far as I know. And I know about CPU
architectures going back to the 1950s.

It has always been added by clever hardware designers when the machine is
expanded and the address field in the instruction (or some other register)
isn’t big enough anymore. This is always done “to avoid rearranging the
instruction format”, and always involves rearranging the instruction format.
But the argument is that it is a smaller rearrangement than if they expanded
the address field. And software managers that have never seen a segmented
architecture believe this and buy off on it.

Then they start trying to recode the OS and compilers to deal with this
“minor change” and come in about 3-6 years late on the delivery deadline,
and very possibly after the managers and most of the development team has
been fired and replaced with a new team. Finally they make something
(anything) *mostly* work and ship it to keep the remaining customers, and
start on a new hardware architecture that will do it right and expand the
address containers. In a few lucky cases this has been caught internally by
managers brave enough to stand up and say “this sucks, we’ve wasted a year,
we are going to throw all of this away and redo the architecture right”, and
then build a new processor architecture that eliminates the segments before
the users ever find out about them.

Segmented architectures suck. On rare occasions you can maybe come up with
a clever hack that segments will make easier to do. But it is always a
hack, and would always be easier if the architecture had a flat way of doing
what you want to do. You are trying to take ‘advantage’ of vestigial junk
in the architecture, and you are making a one-off hack for some usage, and
not trying to write a whole OS. It might be real cool for the hack, but it
is a real pain for everyone else.

Loren

I’m always for the Why(s) :-). I don’t have any infos at hand, but I had the
luck to take a couple of courses ( oops, once again I’m mentioning here the name:
Fred Brooks). He had a four-volume personal note he thought would see the
light of the publisher’s press. Well, it was good enough to be published, but he
just did not want to have a half-baked bread, I suppose. But it lays out
the architectural process that went through the very beginning of IBM machines
( including the 701 all the way up to the x-286, including the RISC family). And
it had the best instruction simulation approach at that time ( that included
fetch, decode, fetch…, pipeline, and what not ).

I need to hunt down those four books, but what Loren said here matches with
what I learned and simulated at that time. That was back in 1986-88.

-pro

----- Original Message -----
From: “Loren Wilton”
To: “Windows System Software Devs Interest List”
Sent: Saturday, January 05, 2008 8:37 AM
Subject: Re: Re:[ntdev] RE:Are callbacks guaranteed to run consecutively?


That’s quite a note to happen upon.

mm


> Portability is more important,

This is not in question - indeed, portability seems to be of major concern to OS designers. However, as I already said, this is mainly a commercial issue rather than a technical one…

> so is POSIX compliance,

Again, the same story - even if an OS is not POSIX-compliant, that does not necessarily imply it is technically inferior to POSIX-compliant ones. However, from a purely commercial point of view, compliance with POSIX is, indeed, a major factor…

> and I have a strong suspect POSIX contradicts segments.

I don’t think so. After all, POSIX is all about a relatively high level of abstraction. AFAIK, it does not even deal with system calls. For example, it does not seem to care whether low-level IO functions like open() and friends are actual system calls or just C library functions - as long as they are available, the system is considered POSIX-compliant. The issue of segmentation arises only at the assembly level, so I don’t think POSIX goes down to a level that low…

Anton Bassov

Sorry to get this thread going again. I need to learn MetaPhysics to link
this to the subject header of this thread :slight_smile:

But, fwiw -

http://tesugen.com/archives/04/08/brooks-architecture-2

Has some of it. It seems he published it in some form after all! The
computer zoo chapter is a good one. Also look for the out-of-print pdf file
at the end of that page.

Still need to hunt down his manuscripts !

-pro

----- Original Message -----
From: “Martin O’Brien”
Newsgroups: ntdev
To: “Windows System Software Devs Interest List”
Sent: Saturday, January 05, 2008 11:32 AM
Subject: Re:[ntdev] Are callbacks guaranteed to run consecutively?


Prokash,

Excellent link!!! Thanks a lot!!!

It may sound ridiculous, but it looks like if you want some “fresh” ideas, the best thing to do is to get them from 50s-60s documentation - back then people seemed to investigate various possibilities. However, these days all texts seem to describe only the OSes that proved to be commercially successful, which does not necessarily imply they are technically superior…

Anton Bassov

Thanks a lot, Pro. It seems that something useful came out of this long
thread after all.

Thanks,

mm

wrote in message news:xxxxx@ntdev…

POSIX will work with segments. The challenge is that most “C” programmers
never consider problems like multiple pointer types (for instance, I have
programmed “C” on a machine where char pointers were different from all
others) or differing data segments (try the fun of a system which has a code
space and a data space; most “C” programs barf here, since you can have two
pointers with the same value pointing to different things: a routine and
data).


Don Burn (MVP, Windows DDK)
Windows 2k/XP/2k3 Filesystem and Driver Consulting
Website: http://www.windrvr.com
Blog: http://msmvps.com/blogs/WinDrvr
Remove StopSpam to reply

I like segments, especially the more interesting ones in the 8088. I
learned how to handle segments, so it was good for me. We still have
segments in Windows on x86 CPUs. They are blanket full memory segments, but
they still exist. A couple are still used by Windows and change when the OS
switches to another process. Intel did not improve the segment load
instructions since Windows was using the page tables to do some of the
things segments were designed to do. The recent NX bits in the page tables
just permit non-executable pages instead of using segments, where memory
that is not accessible through the CS selector can’t be executed.

Yes, use of segments and the other rings present in the Intel/AMD cpus could
improve security, though it would require a lot of rewrites to change
Windows and the many drivers already released. With the momentum possessed
by Windows, I don’t expect to see any of this implemented. No matter how
easy it is to speed up instructions in CPUs by using hardwired gates, most
designers will not dedicate the gates to instructions the customer is not
using. It is somewhat the classic chicken and egg problem.

Apple is such a cult, they can get away with just dumping old software and
hardware every few years. Now that they have switched to Intel, I am
doubtful that they can continue to do this. It will be fun to watch
during the next decade.

“Loren Wilton” wrote in message news:xxxxx@ntdev…
>>> > when I first saw segmented cpus - I think it was an 8080 in 1979 or
>>> > 1980,
>>> > the segmented architecture really seemed to be a good concept. Memory
>>> > size
>>> > at that time was 64 KB, the maximal size of a segment was 64 KB too.
>>> > But a
>>> > little time later system memory began to grow, but the segment size
>>> > not.
>>> > And things began to really cost headaches because one had to design
>>> > the
>>> > software for “near calls”, “far calls”, small arrays, large arrays and
>>> > so
>>> > on. I think a segemented cpu is only usable if the maximum segment
>>> > size is
>>> > as large as the memory in the target machine.
>>> >
>>> > – Reinhard
>
> Segmented address space has never been invented on the initial cut at
> any usable CPU architecture as far as I know. And I know about CPU
> architectures going back to the 1950s.
>
> It has always been added by clever hardware designers when the machine is
> expanded and the address field in the instruction (or some other register)
> isn’t big enough anymore. This is always done “to avoid rearranging the
> instruction format”, and always involves rearranging the instruciton
> format. But the argument is that it is a smaller rearrangement than if
> they expanded the address field. And software managers that have never
> seen a segmented architecture believe this and buy off on it.
>
> Then they start trying to recode the OS and compilers to deal with this
> “minor change” and come in about 3-6 years late on the delivery deadline,
> and very possibly after the managers and most of the development team has
> been fired and replaced with a new team. Finally they make something
> (anything) mostly work and ship it to keep the remaining customers, and
> start on a new hardware architecture that will do it right and expand the
> address containers. In a few lucky cases this has been caught internally
> by managers brave enough to stand up and say “this sucks, we’ve wasted a
> year, we are going to throw all of this away and redo the architecture
> right”, and then build a new processor architecture that eliminates the
> segments before the users ever find out about them.
>
> Segmented architectures suck. On rare occasions you can maybe come up
> with a clever hack that segments will make easier to do. But it is always
> a hack, and would always be easier if the architecture had a flat way of
> doing what you want to do. You are trying to take ‘advantage’ of
> vestigial junk in the architecture, and you are making a one-off hack for
> some usage, and not trying to write a whole OS. It might be real cool for
> the hack, but it is a real pain for everyone else.
>
> Loren
>
>
>

Yet… What’s “slow” in this context ? Indeed it takes a few cycles to load
a segment register, but then, it takes a lot less time to move segment
descriptors around than to copy buffers. One major use of segments is to
represent a range of virtual addressing space, so that we move data from
process to process and from user to kernel side, and vice-versa, by loading
and reloading segment registers and by manipulating descriptors. Beats
copying data by a long shot, and it’s way more secure.

And again, the i386 is around 30 years old. It has 4Gb segments, and many
other refinements of the architecture, including a combination of
segmentation and paging. It’s not at all a 286!

Alberto.

----- Original Message -----
From: “Maxim S. Shatskih”
Newsgroups: ntdev
To: “Windows System Software Devs Interest List”
Sent: Saturday, January 05, 2008 11:08 AM
Subject: Re:[ntdev] RE:Are callbacks guaranteed to run consecutively?

> Also do not forget that segment register loads are slow, since the
> descriptor must be validated. This is the second - after portability -
> cause of
> abandoning segments.
>
> 8086 - pre-286 - segments were just an ugly hack, and most of the ugliness
> of the MS-DOS architecture with all those EMS/XMS/VCPI ugly hacks is the
> direct logical consequence of segmented addressing in the 8086.
>
> –
> Maxim Shatskih, Windows DDK MVP
> StorageCraft Corporation
> xxxxx@storagecraft.com
> http://www.storagecraft.com
>

Posix can be easily implemented on one ring and be blissfully unaware of the
segmented architecture. And then, how many i386 users out there need Posix
compliance ?

Alberto.

----- Original Message -----
From: “Maxim S. Shatskih”
Newsgroups: ntdev
To: “Windows System Software Devs Interest List”
Sent: Saturday, January 05, 2008 11:06 AM
Subject: Re:[ntdev] RE:Are callbacks guaranteed to run consecutively?

> Portability is more important, so is POSIX compliance, and I have a
> strong suspicion that POSIX contradicts segments.
>
> –
> Maxim Shatskih, Windows DDK MVP
> StorageCraft Corporation
> xxxxx@storagecraft.com
> http://www.storagecraft.com
>

> Also do not forget that segment register loads are slow, since the
> descriptor must be validated.

Wrong!!! Segment descriptors are validated not when you load them but every time you access memory. Before the CPU can access physical memory (either directly, if paging is disabled, or after having done virtual-to-physical translation, if paging is enabled), it first has to form a linear address from the segment base and the address offset, and validate that address against the segment limit (if the segment base is 0, linear-address validation is bound to succeed). Therefore, descriptor validation gets done upon every instruction that gets executed, no matter which memory model is being used.

In order to avoid re-reading descriptors upon every instruction’s execution, the descriptors of the currently loaded segments are cached by the CPU and get reloaded only when you change segments (or do an operation that changes them behind the scenes - for example, the INT N instruction). This is what the trick with the so-called “unreal mode” is based upon: you enter protected mode without paging enabled, jump to a 32-bit code segment, load 32-bit DS, SS and ES, and clear the PE flag in CR0. At this point the CPU is back in real-address mode, but as long as you don’t reload the segment registers, you are able to address all 4G of memory.

Therefore, the only overhead that using segmentation implies results from (theoretically) more frequent descriptor reloads. However, taking into consideration that even the flat memory model implies reloading descriptors upon every interrupt and every user-to-kernel (and vice versa) mode transition anyway, the additional overhead seems to be just negligible…

Anton Bassov

Loren Wilton wrote:

Segmented address space has never been invented on the initial cut at
*any* usable CPU architecture as far as I know. And I know about CPU
architectures going back to the 1950s.

As I’ve pointed out before both here and on the newsgroups, Control
Data’s 64-bit Cyber 180 mainframes were designed from the ground up with
both segments and rings (16). They were developed and released during
the early 1980s, and were strongly influenced by the Multics research.
It was an entirely new design, unrelated to anything CDC had done before.

The Cyber 180s were flexible, powerful, and affordable (in mainframe
terms). I really enjoyed hacking them. If CDC hadn’t utterly misread
the PC revolution, and if the mainframe world had not already been
circling the drain at that point, it could have been a real winner.

CDC actually had its own personal computer around 1980 – an 8" floppy
based thing with a Z80 chip and a competent monochrome graphics card.
However, institutionally, they could not allow themselves to see it as
anything other than a mainframe terminal. They had some cool
computer-based training apps for it, but in the end they never marketed
it as a standalone computer, and ceded the market to IBM.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

> space and a data space, most “C” programs barf here since you can have
> two pointers with the same value pointing to different things: a routine
> and data).

At least C++ is OK with this.


Maxim Shatskih, Windows DDK MVP
StorageCraft Corporation
xxxxx@storagecraft.com
http://www.storagecraft.com

Actually, one of the worst examples of this was the original C++
implementation, but in general C++ does not bar the behavior that causes
the problems. Good programming will avoid it in either language, but a
heck of a lot of code is not that smart.


Don Burn (MVP, Windows DDK)
Windows 2k/XP/2k3 Filesystem and Driver Consulting
Website: http://www.windrvr.com
Blog: http://msmvps.com/blogs/WinDrvr
Remove StopSpam to reply

“Maxim S. Shatskih” wrote in message
news:xxxxx@ntdev…
>> space and a data space, most “C” programs barf here since you can have
>> two pointers with the same value pointing to different things: a routine
>> and data).
>
> At least C++ is OK with this.
>
> –
> Maxim Shatskih, Windows DDK MVP
> StorageCraft Corporation
> xxxxx@storagecraft.com
> http://www.storagecraft.com
>
>

The thing to learn here is that to be (more) portable you have to compile
for multiple targets with different architectures. Since most drivers and
apps target only one architecture (x86, so far), you don’t get to do this,
so we all fall into sloppy habits of casting between pointer types and even
to ints, throwing away the “good enough” type checking that C provides for
our benefit.

I’ve discovered that you haven’t debugged your source code until it is
compiled for several targets. When we had to make many libraries work on Mac
PPC (big endian) and an embedded processor (which has 32 bit chars so
sizeof(char)==sizeof(long)==1) we discovered many bugs that just didn’t show
on the original x86 target, but were there to bite one day. I’m sure many
Mac apps are more robust than ever at the moment, because they are all
compiled and tested on both Intel and PPC.

Now that we all have x64 targets as standard for Windows, we can at least
compile everything for 64-bit targets as well as x86, even if we don’t use
the builds or have any intention of releasing them in this format.

So just like the old saying “If you haven’t debugged on an MP machine you
haven’t debugged”, we could add “If you haven’t compiled for multiple
targets, you haven’t debugged”.

Mike
