Write to screen inside kernel driver

I don’t get why people who need an RTOS don’t use an RTOS. There are
free RTOSes these days, and they have real APIs, suitable for RT work.

On the other hand, the hack described in this thread is awesome, but
very, very… wrong.

I’d still like to know what the OP is doing with this thing.


Aram Hăvărneanu

It works like this:

The hardware designer saves $0.05 on each product

The driver writers expend an extra $100,000 in developing the driver

So you have to sell 2,000,000 units to break even with that decision, and
you haven’t even got it out the door yet.

Then, because of the complex (and probably incorrect) driver, which had to
do a lot of weird things to work around the hardware defect, you end up
spending another $250,000 in post-deployment support costs.

Now you have to sell 7M units before you break even. And that
ASSUMES my estimates are not artificially low and therefore far below the
actual costs.

The consequence of this is that your drivers are so bad that nobody in their
right mind will ever buy another product with your brand name on it.

That’s INFINITE cost.

All for want of a 5-cent part.
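As a sanity check, the break-even arithmetic above can be reproduced in a few lines (the dollar figures are the post’s own illustrative estimates, not real data; the combined break-even works out to 7M units):

```python
# Break-even arithmetic from the estimates above (the post's own
# illustrative figures). Work in integer cents to avoid float error.
SAVING_CENTS = 5                     # $0.05 saved per unit
DRIVER_COST_CENTS = 100_000 * 100    # extra driver development cost
SUPPORT_COST_CENTS = 250_000 * 100   # post-deployment support cost

units_dev = DRIVER_COST_CENTS // SAVING_CENTS
units_total = (DRIVER_COST_CENTS + SUPPORT_COST_CENTS) // SAVING_CENTS

print(units_dev)    # units to recoup development alone: 2000000
print(units_total)  # units to recoup development + support: 7000000
```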

(Never forget the Diamond Video lesson: if you want a fast driver, you
accomplish it by getting rid of all those pointless bounds-checking and
parameter-validation tests. I remember a call that went sort of like this:
me: And when I’m using PowerPoint, it crashes
DV: do you have the latest driver?
me: I just downloaded it at 10am this morning
DV: No, that’s not the latest; you want the 2pm build
I removed the card hours later, having bought a replacement, and never again
bought a Diamond Video product. Neither did anyone else who ever owned or
heard of them.)

The problem comes when each division sees a “bottom line” and nobody in
management understands the cascading interactions between the decisions as
they move down the line. Think of the people who don’t want to use
scatter-gather DMA chips because they “cost too much”.

HP builds superb hardware, yet in the past the MTBF of my OS dropped to
under 30 minutes each time I put a piece of HP hardware on it; and the
crashes were always access faults in the HP drivers (usually NULL-pointer
dereferences). Then they build a first-rate, robust product that will last
ten years, price it accordingly, and I can only use it for 18 months. After
that, I can no longer get drivers for it, so this superb piece of hardware
sits on the shelf in my basement. So my decision: I will never, again, in
my life, actually buy a product with the HP logo on it. I can buy a piece
of junk from another vendor, use it for 18 months and, surprisingly, never
have a crash traceable to their driver (in fact, never have a crash), and
when I get a new OS and no longer have support, I don’t feel bad throwing it
out because I paid so little for it. HP’s refusal to produce reliable
drivers or to support products across the lifetime of the product (not its
internal marketing lifetime; but over the lifetime it was designed to have),
means that I will not buy their products. And they call this a “good
business decision”. Furthermore, I do not recommend HP products to my
clients, and I use a logged crash from an HP driver as an example of how to
read a crash dump (it was the only third-party crash dump that existed on my
machine! In fact, the six crashes of that driver, all within a couple
hours, were the ONLY crashes I had in three years of XP usage).

A corporate vice-president should ask “What is this going to really cost us,
and how much is it going to save us?” One who doesn’t is incompetent. Only
an idiot would make a decision based on just one of a dozen relevant
parameters. How much
will the lack of it add to the cost of a driver? How much will lack of it
add to the cost of deployment? How much will the lack of it add to
post-deployment support? How much will the lack of this part extend the
development cycle and delay time-to-market? These are among the questions
that should be asked, and I find that most people don’t even know how to ask
them. And the hardware manager says “Not on my bottom line!” and passes the
problem to the driver team. Use of the event log to log failures takes
time, and training, and testing, and the driver manager says “not on my
bottom line!” and passes the problem to customer support. By that time it
is too late, and the customer support manager gets stuck with a massive
bottom line, AND ALL OF THESE DIVISIONS ARE PART OF THE SAME CORPORATION!
It’s all the same pocket, even if it comes from different wallets in that
one pocket.

I saw a situation in which a bug in the USB firmware meant that the device
would malfunction, but fixing it required going back to the firmware team to
get new firmware, and that was never going to happen because of the failed
concept of “cost”. My client was the end user of this failed product, and
it was a critical part of their deliverable. The project failed because
they had committed too much to a product that was badly designed. The
original vendor lost tens of thousands of unit sales because my client’s
product would not be sold.

I imagine anyone who has been in the driver business longer than six months
can tell similar stories.
jor

-----Original Message-----
From: xxxxx@lists.osr.com
[mailto:xxxxx@lists.osr.com] On Behalf Of Tim Roberts
Sent: Monday, June 14, 2010 1:20 PM
To: Windows System Software Devs Interest List
Subject: Re: [ntdev] Write to screen inside kernel driver

On 6/14/2010 9:50 AM, xxxxx@osr.com wrote:

Truer words have never been spoken.

The performance opportunities lost, and CPU cycles wasted, all for lack of
a fifty cent part. Oh, the stories I could tell…

True. On the other hand, I was once in a meeting with a rep from a “major
printer manufacturer” who pointed out, “We ship a million printers a month.
If you want to add 5 cents to the cost of goods, you need the approval of a
corporate vice president.”


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.


NTDEV is sponsored by OSR

For our schedule of WDF, WDM, debugging and other seminars visit:
http://www.osr.com/seminars

To unsubscribe, visit the List Server section of OSR Online at
http://www.osronline.com/page.cfm?name=ListServer



Probably a good reason why experienced driver developers should participate while the hardware is being built…
– Aj.


I experienced an extreme example of this kind of hardware design.

A peripheral card with an embedded CPU for device control used dynamic
memory on-card, but to reduce cost the DRAM refresh controller chip was
omitted and refresh was done in “software”: as designed by the hardware
genius, the non-maskable interrupt (NMI) triggered a routine that read 256
consecutive memory locations, refreshing the memory.

There was no timer chip on the board. All timing was done in “software”.

I was tasked with using the card to implement the AppleTalk network
protocols (the old transformer-coupled AppleTalk). I needed to measure
inter-packet times for sending and receiving packets, but the NMI would
perturb any running timer, and memory contents (including all the software)
would evaporate if not refreshed. I had to have the memory refresh moved to
a maskable interrupt and provide refresh myself in my timer loop while that
interrupt was disabled.

This card was interesting to debug, since crash dumps could not be made
from the host: if the software crashed, the refresh stopped and all traces
of the problem were gone.
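Ed’s scheme can be illustrated with a toy model (my own sketch with invented numbers, not the card’s actual firmware): each DRAM row decays unless the interrupt routine touches all 256 rows within a refresh deadline, so masking that interrupt loses everything.

```python
# Toy model of software-driven DRAM refresh (illustration only). Each row
# "decays" unless read within REFRESH_DEADLINE ticks; the interrupt
# routine refreshes by reading 256 consecutive locations.
REFRESH_DEADLINE = 4  # ticks a row survives without being read

class ToyDram:
    def __init__(self, rows=256):
        self.last_read = [0] * rows  # tick at which each row was last read
        self.tick = 0

    def refresh_isr(self):
        # Reading 256 consecutive locations touches every row once.
        for row in range(len(self.last_read)):
            self.last_read[row] = self.tick

    def advance(self, ticks, refresh=True):
        for _ in range(ticks):
            self.tick += 1
            if refresh:          # refresh=False models a masked interrupt
                self.refresh_isr()

    def lost_rows(self):
        return sum(1 for t in self.last_read
                   if self.tick - t > REFRESH_DEADLINE)

dram = ToyDram()
dram.advance(10, refresh=True)   # ISR firing on schedule: nothing lost
print(dram.lost_rows())          # 0
dram.advance(10, refresh=False)  # ISR masked: contents evaporate
print(dram.lost_rows())          # 256
```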

I was young and stupid. When I was given the project the development lead
for the graphics group told me the card was a disaster and that he refused
to use the card as a graphics coprocessor the year before even though the
company had already built several hundred cards. He told me that every
instance of the card needed to be gathered in one place, all of them
reduced to a powder as fine as bread flour and then buried in a hazardous
waste landfill. I thought he was exaggerating. A few months later I
realized that it was an understatement.

Ed

-----Original Message-----
From: xxxxx@lists.osr.com
[mailto:xxxxx@lists.osr.com] On Behalf Of xxxxx@osr.com
Sent: Monday, June 14, 2010 12:51 PM
To: Windows System Software Devs Interest List
Subject: RE:[ntdev] Write to screen inside kernel driver

Truer words have never been spoken.

The performance opportunities lost, and CPU cycles wasted, all for lack of a
fifty cent part. Oh, the stories I could tell…

Peter
OSR



Mike Kemp,

The “warnings about screwing up the Windows screen handling” are about using text mode, not about using DirectDraw, which any (old) game or TV-card application can use.
Windows is not on hold, interrupts still occur, just no thread switching.

There is no problem whatsoever: I’m entering and exiting the driver at any time with no damage to Windows, I’m accessing the disk inside the driver, etc. Since I’m using DirectDraw as in my source code above, Windows is happy.
It’s absolutely real time, and I’m VERY happy with it :-)
Problem solved.

All those people who use this technical thread to moan and philosophize about Windows not being real time, or about their $0.05 experiences, clearly have too much time on their hands :-D

On Tue, Jun 15, 2010 at 5:02 PM, wrote:

It’s absolutely real time, and I’m VERY happy with it :slight_smile:

No, it’s not. The fact that you don’t do multiprocessing doesn’t mean
the thread that’s running is real time. You say you use Windows for
I/O; Windows I/O is not real time. Interrupts that do occur get
processed in an unknown time frame, etc.

Basically, you didn’t accomplish the RT requirements, while also not being
a good Windows citizen.

You didn’t tell us what your goal is, why RT is important, and why real
non-Windows OSes are not an option. You just made a big hack, and
what’s amazing and dangerous is that you are happy with it.


Aram Hăvărneanu

Nice! This is a very interesting solution. Mr. Bottorff is a clever guy.

Has anybody who knows how Windows and DirectX both work fully thought through this solution? I’m not implying Mr. Bottorff doesn’t know Windows, not at all… but he indicated that he used this as part of a TEST procedure… I’m merely asking if anybody knows whether what’s being done is generally appropriate and supportable.

I can’t see anything wrong with it, but I don’t know ANYthing about DirectDraw, except that it’s supposed to provide a reasonably “direct” path to a graphics surface and it’s commonly used for games. So, I’m wondering what this use of DirectDraw might cause in terms of side-effects.

You’re not actually complaining about the thread in which you got an answer to your problem are you? Cuz that would be, you know, ungracious. Sure, the thread wandered off-topic. But that does happen from time to time here.

Peter
OSR

Aram Havarneanu:

Hack? What hack? I haven’t hacked anything in Windows. I’m not modifying the GDT, or the IDT, or the timer I/O ports, or the APIC! I considered these things and they are not needed.

I do very simple things, like setting thread priority to 31 on all CPUs and using DirectDraw screen access in a kernel driver to update some status info. And it’s real time for me, it does my job, thank you very much!!
I tested my driver on all sorts of Windows XP PCs and I’m very happy with it; now I will test on Vista.

Anyway, this thread is about displaying graphics in kernel mode; it’s been solved, I’ve done it, and I’ve published the way I did it for anyone else who needs it.

Thank you again Jan Bottorff for the DirectDraw direction.
I’ll get back to work now, other challenges to overcome.

On 6/15/2010 7:26 AM, xxxxx@osr.com wrote:

Nice! This is a very interesting solution. Mr. Bottorff is a clever guy.

Has anybody who knows how Windows and DirectX both work fully thought through this solution? I’m not implying Mr. Bottorff doesn’t know Windows, not at all… but he indicated that he used this as part of a TEST procedure… I’m merely asking if anybody knows whether what’s being done is generally appropriate and supportable.

I can’t see anything wrong with it, but I don’t know ANYthing about DirectDraw, except that it’s supposed to provide a reasonably “direct” path to a graphics surface and it’s commonly used for games. So, I’m wondering what this use of DirectDraw might cause in terms of side-effects.

As I said, this is exactly the same solution SoftICE used to do their
one-system debugging (once DirectDraw became ubiquitous), so it does
have a legitimate pedigree. Many of us bitch about SoftICE, but their
user interface was pretty solid.

From a strictly theoretical standpoint, it’s not entirely safe. The
DirectDraw rules do not require that the frame buffer be freely
accessible at all times. The application is supposed to lock the
surface while it is writing, and unlock when it is done. The lock
process then allows the driver to wait for the graphics chip to complete
any outstanding drawing and make the surface accessible.
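The contract Tim describes (lock the surface before writing, unlock afterwards, with the lock allowed to wait for the graphics chip) has roughly this shape. The following is a schematic sketch in Python with invented names, not the real DirectDraw API (that would be IDirectDrawSurface7::Lock/Unlock in C++):

```python
# Schematic of the DirectDraw surface-locking contract (illustration
# only; these class and method names are invented for the sketch).
from contextlib import contextmanager

class Surface:
    def __init__(self):
        self.gpu_busy = True      # outstanding drawing still in flight
        self.locked = False
        self.pixels = bytearray(64)

    def _wait_for_gpu(self):
        self.gpu_busy = False     # driver waits for the chip to finish

    @contextmanager
    def lock(self):
        self._wait_for_gpu()      # Lock may block until the GPU is idle
        self.locked = True
        try:
            yield self.pixels     # CPU-visible frame-buffer pointer
        finally:
            self.locked = False   # pointer is invalid after Unlock

surf = Surface()
with surf.lock() as fb:
    fb[0] = 0xFF                  # safe: surface is locked while writing
print(surf.locked, surf.gpu_busy) # False False: unlocked, GPU drained
```

Writing to the frame buffer outside the lock, as the kernel-driver trick does, skips the wait step; as Tim notes, that mostly works in practice but is not guaranteed by the contract.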

In the real world, however, this is simply not an issue. In all my
years of display driver work, I only encountered one graphics chip that
actually had an issue with arbitrary asynchronous access, and it was
from a vendor who left this arena in the 20th Century.

If our Original Poster were worried about a peaceful coexistence with
Windows, I’d have worries about Aero and the DWM compositor, but that’s
simply not a concern in his world. I suspect he has reached the Best
Possible Solution for him.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

On 6/15/2010 7:02 AM, b@if0.com wrote:

It’s absolutely real time, and I’m VERY happy with it :-)
Problem solved.

And this is good news.

All those people who use this technical thread to moan and philosophize about Windows not being real time, or about their $0.05 experiences, clearly have too much time on their hands :-D

Well, in all fairness, you have a solution which is working for you in
one system, with one set of hardware, with one version of the operating
system. I personally have in my office a machine with an audio card
whose Vista driver routinely spins in a tight CPU loop with interrupts
blocked for 50ms to 100ms at a time, thereby screwing up any real-time
processes (like video capture). A real-time operating system would not
permit that, but Windows will.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

Thanks for your comments, Tim. But I’m not sure of the message to take away. On the one hand, I think you said it’s safe (aside from the theoretical concern about locking the surface, which doesn’t matter from a practical standpoint). OTOH, I heard you imply there could be problems in terms of Aero or Desktop Window Manager interoperability for general use.

So… is this solution OK or not OK? Being 100% ignorant of the graphics architecture, I still don’t understand. Or, is the answer not that clear-cut?

Sorry to be such a 'tard.

Now, as far as this:

Dude… SoftICE having done something is really no recommendation for correct operation to me. Heck, THOSE were the people who famously populated the OS with TONS of code hooks… to the point where there were so many hooks they couldn’t manage to keep up with OS code changes.

So, because SoftICE did something is, for me, pretty much no positive recommendation at all. In fact, probably quite the opposite. Seems to me those guys LIVED to deal with the Windows architecture in an irresponsible way. (Aside; In fact: Where is Alberto now that we need him to chime in here?)

Peter
OSR

On 6/15/2010 3:13 PM, xxxxx@osr.com wrote:

Thanks for your comments, Tim. But I’m not sure of the message to take away. On the one hand, I think you said it’s safe (aside from the theoretical concern about locking the surface, which doesn’t matter from a practical standpoint). OTOH, I heard you imply there could be problems in terms of Aero or Desktop Window Manager interoperability for general use.

So… is this solution OK or not OK? Being 100% ignorant of the graphics architecture, I still don’t understand. Or, is the answer not that clear-cut?

“OK” is such a subjective term…

The original poster, in this case, did not care about whether the
Windows graphics system survived his machinations or not. In his case,
he’s fine. I haven’t really thought about this solution much since WDDM
was created. But as I sit here thinking about it, I know that DWM
expects to be able to do its composition at arbitrary times – an
operation which quite cheerfully overwrites everything that’s in the
visible frame buffer. My guess is that a DirectDraw primary
surface-style solution would be unhappy bunkmates with DWM.

Would it crash? No. I think you’d just see the two applications
overwriting each other, each thinking it owned the frame buffer.

Dude… SoftICE having done something is really no recommendation for correct operation to me.

I’ll grant that. However, for the most part their GUI worked pretty
darned well, across a surprisingly wide variety of graphics hardware.
Am I recommending this solution? No. Do I believe it can be made to
work? Yes.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

I still want to hear why the OP thinks he needs RT priority for his processing threads, and why he needs them in the kernel. He’s not gaining anything in wall time, but he loses the GUI in the process.

On Wednesday 09 June 2010 21:43:06 Jan Bottorff wrote:

> So you’ve turned your windows system into a single-tasking DOS box.

Bravo.

> I’m going to stop asking why.

Since we now have 8 or more processor cores on many machines, I actually
think it would be reasonable to have a way to give a driver dedicated use
of some of the cores (like a registry setting to reserve some cores during
boot and an API or two to start/stop executing on those cores). It seems a
bit silly that a $2 microcontroller can respond to hardware events faster
than a modern $300 x86 CPU. There are interesting hardware designs you
could do if you knew there was a processor sitting there polling the
hardware. This way, you could pretty easily have the rich features of an
Internet-connected GUI operating system while also getting real-time
hardware access.
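Jan’s reserved-core idea would presumably be implemented on Windows with a boot-time core reservation plus SetThreadAffinityMask. As a rough illustration of the pinning step only, here is a sketch using Python’s os.sched_setaffinity (a Linux-only call; the names and the "reserved" core are my own invention, not a Windows API):

```python
# Pin the current process to a single "reserved" core so a polling loop
# can run there undisturbed (illustrative sketch; Linux-only os API).
import os

RESERVED_CORE = 0  # pretend this core was held back at boot for polling

def pin_to_reserved_core():
    # 0 means "the calling process"; afterwards the scheduler will only
    # run this process on RESERVED_CORE.
    os.sched_setaffinity(0, {RESERVED_CORE})
    return os.sched_getaffinity(0)

print(pin_to_reserved_core())  # {0}
```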

Jan



TenaSys have a product that does something like that: it runs its own kernel
pinned to specific cores, so you can run Windows on the same machine as an
RTOS; from what I remember communication is done via a virtual network
adapter: http://www.tenasys.com/technologies/dualcore.php


Bruce Cran

One of the original uses of overlay DirectX surfaces was for things like
video capture hardware or hardware video decompression to write to. There
are certainly production apps that did this; I personally have only used
this in a development lab for some DMA testing. Primary surfaces were for
full-screen video playback, or for games that wanted to render all the
pixels themselves.

It’s less clear what the visual effect of a DirectX primary surface
interacting with DWM is. Isn’t DWM supposed to give the illusion that every
window is a full screen, and then DWM dynamically scales them to the current
desktop view? In this case, a primary surface may be a virtual primary
surface, and DWM does something appropriate.

It does seem like poor GUI design to display a status bar for an hour, with
no way to accept input to cancel it, although I suppose it’s better than
what looks like a locked machine doing nothing for an hour. Even the
standalone memory test on Vista and later can be easily interrupted.

There is the risk that the app that created the DirectX surface is
terminated, and the locked surface pointer perhaps becomes invalid. If the
ioctl that passed the pointer blocked in the driver, and the driver stopped
writing through the pointer when the handle to the device was closed, this
seems pretty safe (as safe as video capture preview). A primary surface
does risk mangling the
GUI, although if all processors are running priority 31 threads the GUI is
locked up anyway. One thought, if you run the app over remote desktop, will
it really update the remote display? Apps that don’t work well on remote
desktop can be annoying. I currently do the bulk of my work over remote
desktop. My guess is that if you run priority 31 threads on all cores, remote
desktop is toast. Personally I’d be concerned about the effect on system
stability of dominating all the processors at high priority.

It also seems likely that an app/driver that takes over all the processors
at high priority for an hour is not the kind of product you will find on
the shelf at Best Buy. What you do on an embedded product, where you have
(some) control over the environment, is rather different from apps for the
general public.

Jan


I’d like to mention that I think it is an interesting application of the
technology to be able to switch from Windows to a realtime “all out”
processing task for a while, then revert to Windows. It doesn’t seem
unreasonable to repurpose the hardware for a short period. Of course the
user could buy dedicated hardware, but it may not be as cost-effective as
some of these arguments make it sound for a manufacturer to make that
hardware, and the cost of a niche-market hardware solution that provides
the processing power of a multicore Intel processor running flat out in a
dedicated box with oodles of RAM and some I/O is NOT going to be trivial
(forget the $3 folks). As a small manufacturer, even if I could make the
hardware cheaply, with the regulatory burden (EMC, UL, liability insurance,
etc.), a distributor network, and support, it is a major undertaking, and
frankly, who is going to buy it?

I’d say that an ideal solution would be to let Windows run a UI on at least
one core, while the other cores are dedicated for a period, but it sounds
like that is not an option. If the OP’s idea does not violate any Windows
rules, then why not do it? It might not suit big companies who would rather
sell you a box, but for a small operator, who perhaps has the time to
implement the Windows driver without having to pay someone the suggested
100,000 USD to do it, it seems quite exciting.

As someone said, what is great is that Windows lets you do it (one hopes),
where a more proscriptive OS may not.

That’s why I was interested in some good reasons why it might not, after
all, be possible to do this safely. It seems there may be some technical
reasons why it should not be done, and a suggestion that it might miss the
target of true RT operation (I’m not sure of the implications of the
ongoing interrupt handling, for example). However, if the majority of
objections are only philosophical, or show undue concern for the OP’s
business model and success in the marketplace, then although they make
interesting reading, they should not limit the technical choice to do it.

A key thing here is swapping in and out of the Windows UI, with no need to
install other OSes, partition disks for multiboot, reboot to get back to
normal work, etc, as such things are anathema to the user.

I have no plans for a solution like this at the moment, but the next time I
think of a processing task that I could put on someone’s desktop provided
they could spare a PC for the duration, I’d like to know if this really is a
solution that could be safely adopted…

Best, Mike

----- Original Message -----
From: Joseph M. Newcomer
To: Windows System Software Devs Interest List
Sent: Monday, June 14, 2010 5:35 PM
Subject: RE: [ntdev] Write to screen inside kernel driver

This appears to be yet another of the endless “I could do so much more if I
could just get rid of that pesky operating system” class of questions.

It is a common failure mode to think that if you have a bare machine over
which you have exclusive control of every byte, that every other machine you
encounter must allow you do have direct access to everything in the same
way. It is one of the hardest things to teach a hardware designer, who is
likely to say “Well, when I have an embedded system with an x86 chip, I can
do X, and therefore you driver writers are just a bunch of idiots because
you can’t also do X” and ignores the fact that in general, no
general-purpose operating system (Windows, Unix, Linux, Mac OS X, Solaris,
etc.) could EVER allow X without compromising reliability, safety, file
system integrity, overall performance, screen management, etc.

Before you decide you MUST bypass the OS to “write directly to the screen”
(and I’d suggest using some internal interface such as Direct2D or
Direct3D), can you PROVE that it won’t work in a supported Windows
environment? I’m doing realtime display of mass spectrometer data without
even stretching what has to be done, so I don’t really buy the usual
excuses.

If you are using Vista or later, look into the MMCSS (Multimedia Class
Scheduler Service) as a means of increasing an app’s responsiveness. But
please, please, stop saying “I can do X on a $3 dedicated chip that is
doing absolutely nothing else, and therefore I MUST be allowed to do X
under an actual operating system that has to support a huge number of
unknown and unknowable applications, safely”. I get tired of hearing this.
What is truly sad is the number of times I have to explain to hardware
designers “that cannot possibly work in a real operating system” and have
them tell me that it is “not their problem” that real operating systems
won’t support their personal agenda, and if they create hardware that would
work in a real operating system, it might cost an extra $1 or $1.50 per
board. Of course, it doesn’t matter that every driver is going to cost an
extra $100,000 to develop, or just possibly not even be possible. If it
runs on a bare machine, that’s validation of the design! (I first
encountered this attitude in a RISC design that was undebuggable, and the
engineers kept saying “the” software would track everything; so there was no
way to read out the stack pointer! I kept trying to explain that there is
no “the” software, and that a belief that “the” software existed was
ill-founded, and I gave counterexamples, which had no effect because the
design was already perfect; never mind that I could not imagine how to write
a debugger, either in-process or out-of-process. The design got no
acceptance in the marketplace, and disappeared by the late 1980s, because
there was no debugger delivered with the C compiler, and a promise that one
never would be.)
joe

-----Original Message-----
From: xxxxx@lists.osr.com
[mailto:xxxxx@lists.osr.com] On Behalf Of Pavel A.
Sent: Thursday, June 10, 2010 5:26 AM
To: Windows System Software Devs Interest List
Subject: Re:[ntdev] Write to screen inside kernel driver

“Jan Bottorff” wrote in message
news:xxxxx@ntdev…

> Since we now have 8 or more processor cores on many machines, I
> actually think it would be reasonable to have a way to give a driver
> dedicated use of some of the cores (like a registry setting to reserve
> some cores during boot and an API or two to start/stop executing on
> those cores). It seems a bit silly that a $2 microcontroller can
> respond to hardware events faster than a modern $300 x86 CPU. There
> are interesting hardware designs you could do if you knew there was a
> processor sitting there polling the hardware. This way, you could
> pretty easily have the rich features of an Internet-connected GUI
> operating system, while you also get real time hardware access.
>
> Jan

IMHO this is a good idea. QNX, or some other RT OS I’ve read about, seems to
have this ability.
–pa


NTDEV is sponsored by OSR

For our schedule of WDF, WDM, debugging and other seminars visit:
http://www.osr.com/seminars

To unsubscribe, visit the List Server section of OSR Online at
http://www.osronline.com/page.cfm?name=ListServer



> it is an interesting technology application to be able to switch from Windows to a
> realtime “all out” processing task for a while, then revert to Windows.

You need to provide your own HAL that deals with the logic of interrupt handling, spinlock acquisition and IRQL. Then you will be able to run Windows as a task in the context of your RTOS, which will let it run whenever it feels like. Although the task in itself is certainly exciting, there is nothing basically new here - it has been done more than once…

> I’d say that an ideal solution would be to let Windows run a UI in at least one core, while the other
> cores are dedicated for a period, but it sounds like that is not an option.

As long as you provide your own custom HAL you will be able to do it just fine without getting into any conflict with Windows…

Anton Bassov

On Thursday 10 June 2010 10:48:03 Hagen Patzke wrote:

Hmmm… what about running a Windows GUI in a VM on a system that can
be configured to dedicate one (or more) of its processors to an RTOS?

Not as “clean” as having a Windows box with e.g. a PCI card running
another processor/RTOS, or as having one box for the GUI and another one
for RTOS processing.

It’s been done - see http://www.tenasys.com/products/evm.php for one example:

“eVM for Windows embedded virtualization platform provides a bare metal
virtual machine environment that hosts an embedded or real-time operating
system alongside Windows on the same multi-core processor platform.”


Bruce Cran

On 6/16/2010 12:15 AM, Jan Bottorff wrote:

> It’s less clear what the visual effect of a DirectX primary surface
> interacting with DWM is. Isn’t DWM supposed to give the illusion that every
> window is a full screen, and then DWM dynamically scales them to the current
> desktop view? In this case, a primary surface may be a virtual primary
> surface, and DWM does something appropriate.

I suppose it would be theoretically possible to build such a device, but
that’s not how real graphics chips are implemented.

DWM is nothing more than a full-screen Direct3D application. GUI
applications draw their windows into textures. DWM then creates a
Direct3D playlist that maps each window texture to a rectangle on the
screen corresponding to its desktop location. With every
Direct3D-capable graphics card today, that operation is done by having
the graphics chip merge the background and all of the textures into the
final, visible frame buffer.

At the bottom end, your graphics chip has to refresh the screen by
sending all of the pixels out your monitor cable 75 times a second. The
only practical way to do that today is to have the visible screen in a
chunk of ordinary video RAM that the device can continuously read from.
Theoretically, you could have the screen refresher run through the D3D
playlist and reconstruct the composited image on the fly at every
refresh, but that’s not how it’s done today.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

On Wed, Jun 16, 2010 at 3:22 PM, wrote:
>
>> it is an interesting technology application to be able to switch from Windows to a
>> realtime “all out” processing task for a while, then revert to Windows.
>
> You need to provide your own HAL that deals with the logic of interrupt handling, spinlock acquisition and IRQL. Then you will be able to run Windows as a task in context of your RTOS that will let it run whenever it feels like. Although the task in itself is certainly exciting, there is nothing basically new here - it has been done more than once…

Do you have any link to something like this? How is that achieved,
given that the HAL interfaces are not public? How can one develop a HAL
without the NTOS source code? Or do the companies that provide custom HALs
have access to the source code?


Aram Hăvărneanu