Well, let's take some examples first!
Sun Microsystems, a UNIX vendor itself… They started Java, and they
pushed strongly toward JavaOS… Why? There could be very many reasons for
it, right?..
But the fact was that their research community showed them how
dangerous C was (even for kernel programming)… A very good friend of
mine has been a veteran there for almost 20 years, so I get to hear some
stories about it…
When I was in graduate school in the 80s (that was only 10-some years
after C's birth), there were already huge efforts toward functional
programming… Why? Perhaps some of it was just for fun, perhaps some was
to show how we can make a science out of it, perhaps something else… But
as I understand it, they already saw the need…
When Ken Thompson announced his OS, he mentioned less than (or just over)
a thousand lines of assembly code, with the rest in C… What was so great
about that, while the IBM OS written in BAL dutifully made the world
happy by keeping international financial transactions running?
In the 70s, parsing was an ART, actually an arcane one. Now there is a
lot of science in it…
Behind all of these, there are things everyone can feel, some observe, and
some others measure. And when it makes sense, they move forward.
Not everything has to be labeled under some flag or -ism, does it?
Memory has a relationship with the underlying language, so this is not
that far off on a tangent…
A couple of questions would be:
(0) In the future (let's say 10 years from now), how much code would and
should be on your machine? And how much of it in privileged mode? Then
what would the machine be like? Can it just be a presentation device?
Can it be like a supercomputer?
(1) What are the defining criteria for measuring a system's fundamental
traits? What are the fundamental traits? What are the methods to measure
them?..
And a whole lot of other questions…
A lot of them are defined and/or understood by the people who drive the
research and recommend new and hopefully better means of achieving
things… Denying it would be fine, but on what basis???
-pro
xxxxx@hotmail.com wrote:
> No. Not even for a second. Seriously
>
Well, perhaps you should give it a try…
> I find such thinking in the face of massive technological advancement, and
> multiple magnitude paradigm shifts, to be positively laughable.
>
Well, you can laugh as much as you want, but certain things just never change. Just to give you an idea: somehow it happened that, since the beginning of the world, people have been walking with their faces forward, rather than with their backs forward while looking over their shoulder. Please note that there are no biological reasons why the latter cannot be done - if you try it, you will see that it can be done pretty easily. The only problem is that you are going to find it inconvenient, so that, after a short while, you are going to give up all your ideas of making some kind of "revolution in walking", no matter how desperate you are to bring some innovation into this world.
I strongly suspect that we are facing more or less the same scenario - although there are no technical reasons that make writing OSes in managed languages impossible, people just find it inconvenient and unreasonable. Therefore, I am afraid you are just advertising some kind of “revolutionary approach to walking”…
> You can keep beating on your car with that buggy whip that you insist on using.
>
The above example would be appropriate if modern computers relied upon logical principles that are basically different from the ones that earlier computers relied upon. For example, when (and if) a quantum computer gets built, apparently all the concepts that we know today will become meaningless, and at that point your example becomes appropriate. However, for the time being, what we are getting are more and more powerful horses that, despite their increasing power, are still horses, so the good old whip is still the right tool for them…
> Because… that makes the features that are intrinsic to the language (which is what we’re talking about)
>
Sorry, this is NOT what we are talking about. What we are talking about is whether the use of a given language *for a given purpose* is reasonable - we are not discussing the relative strengths and weaknesses of any given language, are we??? As I said already, I have nothing against C# or any other managed language - they all have their own applications, and in many cases it is more reasonable to use these languages rather than C. However, kernel-level development is not among these cases - this is the only thing I am saying…
> You compile your C code for the Unix kernel and, BINGO! You’ll never ++ your way past the
> end of a buffer again!
>
The larger the amount of code you have, the higher the chance of a bug in it. If your code is modular and broken into numerous small components with very well-defined and specific functionalities, it is much easier for you to see your bug. The UNIX kernel encourages you to keep all your components small and specific to their purpose, while Windows forces you to do exactly the opposite, which tremendously increases the complexity of your code and, hence, the chances of introducing bugs into it…
> Those Unix folks ARE clever…
>
Well, they are not just clever - they are true geniuses. You really need to be one in order to invent something like UNIX. To be honest, I just cannot possibly imagine anything more reasonable than that. It is pretty much like LEGO - you've got numerous small pieces with specific well-defined purposes and interfaces that you can dynamically combine *at run time* in a way that suits you in a given situation, which is not necessarily known at compile time. As long as piece X appears as piece Y to the client code, they can be substituted for one another, and none of them cares whether the actual client is a local user who types on the terminal, or a program that is located on another continent… The same is true for client code - it has no idea who is actually doing the job that it has requested. To make it even more interesting, these components may be written in different languages, which is completely transparent to their clients. All these components run on top of a relatively small kernel that interfaces the hardware and provides some basic services - whenever it needs something that may be cumbersome or dodgy, it delegates the job to the UM components. Perfectly reasonable system, don't you think???
Shit, it looks like I am already talking about some “revolutionary object-oriented distributed technology” like COM. Nope - in actuality, this is just a good old UNIX of 1970s, so that whenever you hear about some fancy distributed technology, be sure that this is just a new implementation of a “good old wheel” that had been invented 40 years ago…
Around a week ago you asked me to explain how come Cutler's designs became so successful, but you somehow failed to provide any convincing evidence of this success. AFAIK, all his designs were successful in this or that field only until they faced competition from UNIX-like systems in that field.
IMHO, the main reason for this is complete lack of flexibility of his designs - unlike UNIX-like systems, they are all rigid and tightly-coupled…
Anton Bassov
NTDEV is sponsored by OSR
For our schedule of WDF, WDM, debugging and other seminars visit:
http://www.osr.com/seminars
To unsubscribe, visit the List Server section of OSR Online at http://www.osronline.com/page.cfm?name=ListServer