This seems an odd statement. The whole idea of writing a driver is to
support applications! A driver that cannot support its applications is a
piece of code of no value whatsoever!
It is a given that an app can terminate abnormally. This is one of the
standard scenarios that every driver must deal with! This is why we have
IRP_MJ_CLEANUP and IRP_MJ_CLOSE handlers. This is why we have to handle IRP
cancellation. What is new here that makes this case any different?
Driver design is a continuum; sometimes you push the line up so that there
is much more work in the driver to meet a spec; sometimes you push the line
down so the driver is lean and simple, and write a more complex app. For
example, consider the ability to do a WriteFile with a 1 GB buffer when the
target machine might have only 512 MB of memory. You can do a lot of work
using METHOD_NEITHER in the driver, with driver threads to lock down partial
MDLs; or you can write a library that does a WriteFile and, if it fails
because of insufficient kernel resources, splits it into two WriteFiles, and
so on, using a binary cut to split the request until it can be satisfied.
The result
is a simpler driver, and months shaved from the development schedule. For
high performance, you can use asynch I/O, do a bunch of ReadFile operations
all at once, and write a simple, classical driver that has no internal
buffering. Or you can build a driver that can handle long delays in the
application by having massive internal buffering. All of these solutions
represent different tradeoffs in driver and application complexity.
One of the serious defects in thinking about driver design is thinking that
ReadFile, WriteFile, and DeviceIoControl are the interface spec. They are
not. They are the *implementation* of the interface spec. Sometimes, the
interface spec is simple enough that we make these visible to the
application programmer. Sometimes we shouldn’t, and a persistent failure in
design is to think that writing a library to interface to a device is
someone else’s problem. How many programmers ever issue a DeviceIoControl
for a serial port? Or do they use Get/SetCommState, Get/SetCommTimeouts,
and so on? I wouldn’t have a clue as to how to create a DeviceIoControl for
a serial port, but a good percentage of my income over the last 20 years
came from writing applications that interfaced to devices that had serial
port interfaces. As a driver writer, the interface to the driver is *your*
responsibility. This means that what you present to your application
programmers is not necessarily ReadFile, WriteFile and/or DeviceIoControl.
It is what is needed to support the application.
One of the serious defects in thinking about applications is thinking that
they should do synchronous I/O from the main GUI thread. There are two
serious errors in thinking here: doing synchronous I/O as the only way of
life, and using only a single thread. Dedicating a high-priority thread to
handling I/O can often solve the problem entirely; using asynch I/O can
often solve the problem entirely. I have built solutions for
high-performance devices where I have used both techniques. Putting
internal buffering in the driver is another point in the solution space.
Ignoring the requirements of the application is simply stupid. The key here
is that the goal is to *solve a problem*, not *write a driver*. Writing the
driver is an implementation detail in achieving that goal. So it is
*impossible* to create a driver that does not accommodate the needs of the
application! (The alternative proposed here seems to say “The goals of the
project are irrelevant, as long as there is a software artifact that can
communicate with the device; never mind that it is unusable in the context
of the running application”)
Application architecture interacts with hardware design in subtle ways. All
too often we end up at the point where the designer has created something
that cannot be programmed in any real OS (Windows, Mac OS X, Linux, Unix,
Solaris…) because the engineer simply didn’t understand that real software
does not respond the same way that a standalone MS-DOS system could
(seriously, I’ve been told, “All you need to do is reprogram the
counter/timer chip, and hook the interrupt, and if you are careful to not
have to do a stack switch or save all the registers it is *easy* to make the
timing window. Look here…” and they show a piece of code written in their
MS-DOS testbed machine where the application is fielding the interrupts!)
But it is essential that the driver and application be co-developed so the
application needs are met. It is better if the hardware, driver, and
application are co-developed. (I once had to explain to a physicist that
sending a byte stream of floating-point values on the serial port from the
embedded controller was *not* an “interface”, and had to spec out a
packet-sending protocol with headers, byte counts, ack/nak, checksums, etc.,
before I even started coding the app.)
A driver that cannot accommodate an application has no reason to exist. In
software, we call this the “reality test”. Drivers are not abstract
entities that live in a vacuum; they live in an ecosystem which includes
applications, schedulers, threads, and potentially arbitrary delays. Each
development must take into account the hardware device, the interface, the
kernel features, and the user-mode features and create a solution that
end-to-end (hardware to application) meets the needs of the solution space.
To think that you can write a driver without understanding the hardware AND
the application strikes me as sheer folly.
joe
-----Original Message-----
From: xxxxx@lists.osr.com
[mailto:xxxxx@lists.osr.com] On Behalf Of
xxxxx@hotmail.com
Sent: Tuesday, May 26, 2009 12:21 AM
To: Windows System Software Devs Interest List
Subject: RE:[ntdev] Can I launch a DMA operation in DPC routine ?
They ask me to set up a buffer in my driver; the gathered data is stored in
the buffer first, and when the upper application needs it, it reads the
buffer.
I think it is not really a great idea to write your driver in a way that it
is suitable for the UM application - instead, it should be the other way
around. For example, consider the scenario when an app terminates abnormally
(or, even worse, just gets bogged down for some reason so that it has no
chance to retrieve data in time) - if you want to adjust your driver to the
app’s needs you will have to take all these “abnormalities” into account
when writing your code…
Anton Bassov