I cannot subscribe to nttalk because I do not allow any form of scripting,
such as JavaVirus, to pass three levels of firewall. A friend subscribed me
to these newsgroups, and I don’t even know how to log into the OSR site.
Once you’ve been taken out for a week (and a year of cleanup) by a nasty
scripting virus, you get more than a little twitchy about allowing
client-side scripting (which I consider a hostile attack).
I can accept bugs in hardware. What I can’t accept are designs that are
completely unrealistic (recent discussion in one forum: “I have this device
which connects to the printer port, and I have to poke it every 100
microseconds. How can I do this from an application? I can’t seem to find
a timer with resolution of less than 1ms!” That was an obvious example of a
device specification that was totally unrealistic, and the notion that an
application can do something repeatedly every 100us without fail goes beyond
unrealistic into high fantasy). The sorts of drivers you are describing are
clearly buildable, even if we don’t have good specs.
A custom processor chip I consulted on some years ago did not allow the
stack pointer to be read; if you tried to read the stack pointer register,
you got some unrelated value (some CPU status register). I asked “How do I
read the stack pointer?” and was assured by the engineer, “Oh, the software
tracks it for you!” Since I was supposed to write “the” software [in fact,
a debugger], I asked how it was supposed to do that. “No problem, software
does that sort of thing all the time!” Ultimately, I had to turn the job
down because there was not enough support in the hardware to make it
feasible to write a debugger. The myth that “the” software (application?
OS? debugger?) magically exists is the major weakness of hardware engineers,
who somehow think we have magical powers to accomplish things that should be
supported by the hardware. Note: you never heard of this chip, and to me it
is obvious why. Nobody could actually write support software for the
developers. It disappeared without a trace in the mid-1990s.
As one of the “vendors” who is often requested to build a driver for a
device that cannot possibly work, I’ve had to tell several potential clients
that the design was unprogrammable, that anyone who claimed otherwise was
probably stealing their money, and that I wouldn’t touch the project. Others who
are more active in writing drivers have their own horror stories to tell.
The issue is not the drivers we see (although the old non-WDF smart card
driver was so full of design errors as to be a great example of how to make
every mistake in the book), but the hardware we don’t see, because nobody
could build a driver for it under any conditions imaginable (the
100us-from-application-space example should never have gotten beyond the paper-design
stage if anyone competent in software had been called in before it was
built).
I once spent three days reverse-engineering one peripheral because there was
no way to set it to its initial state, or query what state it was in; I had
to guess what state it was in by poking values at registers, reading
other values out of other registers, and developing a state-machine model of
which states produced which results. When I asked the designer why there was
no way to discover the state or reset it, he assured me it was not necessary
because “the” software would always track it. When I pointed out the
obvious, that I was writing “the” software and always needed to know what
state it was in because the user could do weird things in the application
and then unload the driver, I was assured that their one software expert
said that a state register was a waste of gates because he knew how to track
the state. I asked if I could talk to him, so he could explain to me how he
used psychic vibrations to infer the state. Note that a warm reboot could not
reset the card, so while debugging the driver I had to power the machine off
completely for each test. Ultimately, the power cycling destroyed my (in
those days monochrome) display, so I had to add three days of development
time so I would not lose the replacement display as well. Since the user
could hard-reboot at any time (this was not Windows!) and leave the card in
an indeterminate state (it did not reset on a bus-reset signal!), the driver
still had to work correctly in spite of this. So the problem was not
restricted to driver development.
Hardware design is too important to leave to hardware designers!
joe
-----Original Message-----
From: xxxxx@lists.osr.com
[mailto:xxxxx@lists.osr.com] On Behalf Of Prokash Sinha
Sent: Saturday, November 20, 2010 11:58 AM
To: Windows System Software Devs Interest List
Subject: Re: [ntdev] Memory allocation failure
Joe,
You bring up some interesting points, and I guess they come from a vast
amount of experience. With due respect, this should go to nttalk. I think
you can subscribe to that using the osronline site!
It is not that HW engineering itself is so much of a problem; it is the
specification and abstraction they often forget to provide. And of course
there are latent bugs in the chipset or in the firmware/boot code. I see
that almost all chip vendors who specialize in FC, NIC, and switch products
have some published source code in the Linux kernel tree, and they often
keep it updated, but the clarity of the code and its underlying mechanics
are not explained… But then again, some of the devices are so complex that
writing a good specification would fill three or four 2.5-inch binders, even
at two soft pages per side of a printed page: 5,000-plus pages in sheer count.
This becomes very hard when it comes to adapting those chipsets to new
technologies whose specifications are not yet complete in the standards
bodies, such as the PCI-SIG or the IEEE working groups…
For commodity-type peripherals, the vendors usually provide the drivers!
-pro
On Nov 19, 2010, at 9:58 PM, Joseph M. Newcomer wrote:
This is probably because physical memory has become fragmented. A
trick Ed Dekker did some years ago was to write two drivers, one of
which loaded early and reserved the space, and the other of which he
could later load and unload dynamically (while testing) and which
asked the first driver for a pointer to the block of memory it had
allocated. The trick was to force the first driver to load early (it had
to be a boot-time driver, for example).
The hazard of physical-memory fragmentation is that you have virtually (pun
intended) no control over when and how it happens once you start
letting other drivers load. I’m not sure how this would be pulled off
in a WDF context, but others should be able to answer that.
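In WDM terms, the early half of that trick looks roughly like this; a
minimal sketch only, not Ed’s actual code, with invented names
(RESV_BUFFER_BYTES, g_ReservedBuffer) and the interface by which the second
driver obtains the pointer elided:

#include <ntddk.h>

#define RESV_BUFFER_BYTES (2 * 1024 * 1024)   // size the DMA card needs

static PVOID g_ReservedBuffer = NULL;

NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
{
    PHYSICAL_ADDRESS lowest, highest, boundary;

    UNREFERENCED_PARAMETER(RegistryPath);

    lowest.QuadPart   = 0;                 // anywhere in physical memory
    highest.QuadPart  = (ULONGLONG)-1;
    boundary.QuadPart = 0;                 // no boundary restriction

    // Installed as a boot-start driver, so this runs before physical
    // memory has had a chance to fragment.
    g_ReservedBuffer = MmAllocateContiguousMemorySpecifyCache(
                           RESV_BUFFER_BYTES, lowest, highest,
                           boundary, MmCached);
    if (g_ReservedBuffer == NULL) {
        return STATUS_INSUFFICIENT_RESOURCES;
    }

    // Expose the buffer to the test driver, e.g. via a named device
    // object and an internal IOCTL (elided). The test driver can then
    // load and unload freely; the block is never released.
    DriverObject->DriverUnload = NULL;     // this driver never unloads

    return STATUS_SUCCESS;
}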
The problem he faced was that the DMA card didn’t have any
scatter/gather capability and required a massive contiguous buffer.
This is the inevitable consequence of letting hardware engineers make
design decisions; for example, this decision probably saved $2/card on
a product that might have had 1000 cards produced, so the engineer saved
$2,000 in production costs.
This increased the cost of writing the driver by substantially more
than $2,000. Hardware engineers need to understand that their
decisions have serious implications in the context of real operating
systems (for example, we’ve both seen devices for which it would not
be possible to write a device driver in any real operating system,
such as Windows, Unix, Linux, Solaris, Mac OS X, etc., but the
engineer who designed the card failed to see that there was any
problem because he could write a driver on his bare MS-DOS machine or
under the embedded RTOS on his desktop development system, by
hijacking interrupts, reprogramming the counter-timer chip, etc. The
number of failures I’ve seen like this in the last 20 years is astounding,
because hardware engineers come equipped with “hardware engineer blinders”
pre-installed, rendering them incapable of seeing firmware, software, etc.
as issues that need to be addressed in the design).
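When the card does support scatter/gather, by contrast, the driver side is
routine. A hedged sketch using the WDM DMA abstraction follows;
ProgramDeviceSgEntry and StartDeviceTransfer are invented stand-ins for the
card-specific register writes, and the DMA_ADAPTER is assumed to have been
obtained from IoGetDmaAdapter during start-device processing:

#include <ntddk.h>

// Invented hardware helpers; stand-ins for real descriptor/register writes:
VOID ProgramDeviceSgEntry(PVOID DevExt, PHYSICAL_ADDRESS Addr, ULONG Len);
VOID StartDeviceTransfer(PVOID DevExt);

// The system calls this back with the fragment list once map registers
// are available.
VOID MyAdapterListControl(PDEVICE_OBJECT DeviceObject, PIRP Irp,
                          PSCATTER_GATHER_LIST SgList, PVOID Context)
{
    ULONG i;

    UNREFERENCED_PARAMETER(DeviceObject);
    UNREFERENCED_PARAMETER(Irp);

    // Each element is one physically contiguous fragment; the card walks
    // the chain, so no single huge contiguous buffer is ever needed.
    for (i = 0; i < SgList->NumberOfElements; i++) {
        ProgramDeviceSgEntry(Context,
                             SgList->Elements[i].Address,
                             SgList->Elements[i].Length);
    }
    StartDeviceTransfer(Context);
}

NTSTATUS QueueSgTransfer(PDEVICE_OBJECT DeviceObject, PIRP Irp,
                         PDMA_ADAPTER DmaAdapter, PVOID DevExt)
{
    // Must be called at DISPATCH_LEVEL; the MDL describes the transfer buffer.
    return DmaAdapter->DmaOperations->GetScatterGatherList(
               DmaAdapter, DeviceObject, Irp->MdlAddress,
               MmGetMdlVirtualAddress(Irp->MdlAddress),
               MmGetMdlByteCount(Irp->MdlAddress),
               MyAdapterListControl, DevExt, TRUE /* write to device */);
}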
joe
-----Original Message-----
From: xxxxx@lists.osr.com
[mailto:xxxxx@lists.osr.com] On Behalf Of
xxxxx@gmail.com
Sent: Friday, November 19, 2010 2:10 PM
To: Windows System Software Devs Interest List
Subject: [ntdev] Memory allocation failure
Hi,
This memory allocation function fails all the time. Can someone
please help me understand why it is actually failing?
PHYSICAL_ADDRESS lowAddr;
PHYSICAL_ADDRESS highAddr;
PHYSICAL_ADDRESS skipBytes;
size_t allocated;
PMDL mdl;

// Set max/min physical memory addresses:
lowAddr.QuadPart = 0;
highAddr.QuadPart = (ULONGLONG)-1;
skipBytes.QuadPart = 0x200000;

// We're going to have to create an MDL for each frame!!!
// This is zero filled, non-paged main memory pages:
mdl = MmAllocatePagesForMdlEx(lowAddr, highAddr, skipBytes, 0x200000, 0,
          MM_ALLOCATE_PREFER_CONTIGUOUS | MM_ALLOCATE_REQUIRE_CONTIGUOUS_CHUNKS);
This always returns a NULL.
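(A quick check, sketched with the same variables as above: if the allocation
succeeds once the contiguity flags are dropped, the free pages exist but are
fragmented, which matches the diagnosis earlier in the thread.
MmFreePagesFromMdl followed by ExFreePool is the documented release path.)

// Sketch: same call, no contiguity flags. CacheType 0 is MmNonCached,
// exactly as in the failing call above.
mdl = MmAllocatePagesForMdlEx(lowAddr, highAddr, skipBytes, 0x200000, 0, 0);
if (mdl != NULL) {
    // The pages are usable through the MDL, just not physically contiguous.
    MmFreePagesFromMdl(mdl);   // release the physical pages
    ExFreePool(mdl);           // the MDL itself must be freed separately
}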