Can I launch a DMA operation in a DPC routine?

By the way, I use the WDM model.

Initiating a DMA from a DPC is OK.

Calvin

— On Mon, 5/25/09, xxxxx@gmail.com wrote:

> From: xxxxx@gmail.com
> Subject: [ntdev] Can I launch a DMA operation in DPC routine ?
> To: “Windows System Software Devs Interest List”
> Date: Monday, May 25, 2009, 7:39 PM
>
> I am writing a driver for a PCI A/D card. The
> design of the card is common. It is based on the PLX9054, and
> there is a FIFO on the local side. When the FIFO is
> half full it triggers the interrupt, and then the driver
> launches a DMA to get the data.
>
> At first, I followed the packet-based DMA method,
> and the upper application writer told me their software
> can’t respond quickly enough. They asked me to set up a buffer
> in my driver, so that the gathered data is stored in the
> buffer first; when the upper application needs it, it reads the
> buffer.
>
> If I adopt this method, it is the driver, not
> the application, that becomes the original DMA launcher. So the
> driver must perform a DMA operation automatically after each
> “half full” interrupt occurs. The only way and
> the only place I can imagine is the DPC, where I would begin a DMA
> read operation.
> Is this workable? Thanks.
>
>
>
>
> —
> NTDEV is sponsored by OSR
>
> For our schedule of WDF, WDM, debugging and other seminars
> visit:
> http://www.osr.com/seminars
>
> To unsubscribe, visit the List Server section of OSR Online
> at http://www.osronline.com/page.cfm?name=ListServer
>

> They asked me to set up a buffer in my driver, so that the gathered data is stored in the buffer first;
> when the upper application needs it, it reads the buffer.

I think it is not really a great idea to write your driver in a way that suits a particular UM application - instead, it should be the other way around. For example, consider the scenario where an app terminates abnormally (or, even worse, just gets bogged down for some reason so that it has no chance to retrieve data in time) - if you want to adjust your driver to the app’s needs you will have to take all these “abnormalities” into account when writing your code…

Anton Bassov

Thanks, Anton

What does “UM application” mean?

> What does “UM application” mean?

user-mode application, apparently - what else do you think it may mean???

Anton Bassov

thanks.

I plan to set up a dual buffer: after the DMA has filled one, the driver calls up to the UM app to get the data, then fills the other one, alternating like this. If both buffers have been filled and the UM app hasn’t taken the data, the driver stops doing DMA.
I think this method can reduce the user/kernel mode switch burden.

Or, can you give me more suggestion ?
Also, there would be no panic if I launch a DMA operation in a DPC routine, right?
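The dual-buffer bookkeeping described above can be sketched in plain C as a user-mode simulation. All names here are invented for illustration; a real WDM driver would make these state transitions under the interrupt spin lock (e.g. via KeSynchronizeExecution), not with plain flags:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define BUF_COUNT 2
#define BUF_SIZE  4096

/* Hypothetical ping-pong buffer pair: DMA fills one buffer while the
 * application drains the other. */
typedef struct {
    unsigned char data[BUF_COUNT][BUF_SIZE];
    bool full[BUF_COUNT];   /* filled by DMA, not yet read by the app */
    int  fill_index;        /* buffer the next DMA completion will fill */
    bool dma_stopped;       /* set when both buffers are full */
} PingPong;

void pp_init(PingPong *pp) { memset(pp, 0, sizeof *pp); }

/* Called from the (simulated) DPC when a DMA transfer completes. */
void pp_on_dma_complete(PingPong *pp)
{
    pp->full[pp->fill_index] = true;
    pp->fill_index = (pp->fill_index + 1) % BUF_COUNT;
    if (pp->full[pp->fill_index])    /* both full: app has fallen behind */
        pp->dma_stopped = true;      /* stop launching DMA, as proposed */
}

/* Called when the application reads a buffer; returns the index read,
 * or -1 if nothing is available. */
int pp_app_read(PingPong *pp)
{
    for (int i = 0; i < BUF_COUNT; i++) {
        if (pp->full[i]) {
            pp->full[i] = false;
            pp->dma_stopped = false; /* a buffer is free: DMA may resume */
            return i;
        }
    }
    return -1;
}
```

This only models the stop/resume logic; the interesting part in a real driver is restarting the DMA safely once the app frees a buffer.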

> Or, can you give me more suggestion ?

The suggestion is standard - use inverted call. In this particular case it seems to be the most convenient approach. Make your app submit a few requests to the driver in advance, and the driver will pend them. When data is available the driver simply completes an outstanding request, if any. If there are no requests to complete at the moment the driver does nothing…
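As a rough illustration of the pattern (plain C, user-mode toy model; all names are invented - a real driver would pend IRPs in a cancel-safe queue and complete one from the DPC when data arrives):

```c
#include <assert.h>
#include <stddef.h>

#define MAX_PENDING 4

/* Toy model of inverted call: the app posts requests in advance; the
 * "driver" pends them and completes one per data event. */
typedef struct {
    int pending[MAX_PENDING];  /* request ids pended by the driver */
    int count;
} PendingQueue;

int pq_submit(PendingQueue *q, int request_id)   /* app -> driver */
{
    if (q->count == MAX_PENDING) return -1;      /* too many outstanding */
    q->pending[q->count++] = request_id;
    return 0;                                    /* request is now pended */
}

/* Data arrived (e.g. from the DPC): complete the oldest pended request,
 * or do nothing if the app has none outstanding. */
int pq_data_ready(PendingQueue *q)
{
    if (q->count == 0) return -1;                /* no request: do nothing */
    int id = q->pending[0];
    for (int i = 1; i < q->count; i++)           /* shift the FIFO forward */
        q->pending[i - 1] = q->pending[i];
    q->count--;
    return id;                                   /* this request completes */
}
```

In the real thing, "submit" is an overlapped DeviceIoControl that the driver marks pending, and "data ready" is the DPC completing that IRP with the freshly DMA'd data.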

Anton Bassov

Have you had a chance to perform a reasonable quantitative study to see where the bottleneck is and how your solution is going to address the problem?

> Also, there would be no panic if I
> launch a DMA operation in a DPC routine, right?

Windows doesn’t panic. The closest behavior is a BSOD. Initiating a DMA from a DPC per se will not BSOD, and it’s perfectly all right.

Calvin Guan
Broadcom Corp.
Connecting Everything(r)


I think that’s just about the worst advice I’ve seen given on NTDEV in a long time, and that’s really saying something.

It is entirely appropriate, even praiseworthy, to design one’s driver for the convenience and ease of implementation of the user-mode program. I would suggest that to do otherwise is poor engineering practice.

The OP’s originally proposed design is a good, commonly used one, and should work fine. Launching a DMA request from within a DPC is not a problem.

Peter
OSR

This seems an odd statement. The whole idea of writing a driver is to
support applications! A driver that cannot support its applications is a
piece of code of no value whatsoever!

It is assumed that an app can terminate abnormally. This is one of the
standard scenarios that every driver must deal with! This is why we have
IRP_MJ_CLEANUP and IRP_MJ_CLOSE handlers. This is why we have to handle IRP
cancellation. What’s the news that makes this different?

Driver design is a continuum; sometimes you push the line up so that there
is much more work in the driver to meet a spec; sometimes you push the line
down so the driver is lean and simple, and write a more complex app. For
example, the ability to do a WriteFile with a 1GB buffer. Consider the
target machine might have 512MB of memory. You can do a lot of work using
Neither I/O in the driver, with driver threads to lock down partial MDLs,
or you can write a library that does a WriteFile, and if it fails because of
insufficient kernel resources, splits it into two WriteFiles, and so on
using binary cut to split the request until it can be satisfied. The result
is a simpler driver, and months shaved from the development schedule. For
high performance, you can use asynch I/O, do a bunch of ReadFile operations
all at once, and write a simple, classical driver that has no internal
buffering. Or you can build a driver that can handle long delays in the
application by having massive internal buffering. All of these solutions
represent different tradeoffs in driver and application complexity.
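The binary-cut idea above can be sketched in a few lines of plain C, with a fake lower layer standing in for WriteFile (all names are invented for illustration):

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the "binary cut" library: try the whole write; if the
 * lower layer rejects it for lack of resources, split the request in
 * half and recurse on each half.  write_fn stands in for WriteFile. */
typedef int (*write_fn)(const char *buf, size_t len); /* 0 = ok, -1 = too big */

int binary_cut_write(write_fn wr, const char *buf, size_t len)
{
    if (len == 0) return 0;
    if (wr(buf, len) == 0) return 0;          /* fit in one request */
    if (len == 1) return -1;                  /* cannot split any further */
    size_t half = len / 2;                    /* split and try each half */
    if (binary_cut_write(wr, buf, half) != 0) return -1;
    return binary_cut_write(wr, buf + half, len - half);
}

/* Fake lower layer: refuses anything larger than 8 "bytes" and counts
 * how many writes it accepted. */
static int accepted_writes = 0;
static int fake_write(const char *buf, size_t len)
{
    (void)buf;
    if (len > 8) return -1;
    accepted_writes++;
    return 0;
}
```

The point is that all the splitting complexity lives in a small user-mode library, and the driver only ever sees requests it can satisfy.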

One of the serious defects in thinking about driver design is thinking that
ReadFile, WriteFile, and DeviceIoControl are the interface spec. They are
not. They are the *implementation* of the interface spec. Sometimes, the
interface spec is simple enough that we make these visible to the
application programmer. Sometimes we shouldn’t, and a persistent failure in
design is to think that writing a library to interface to a device is
someone else’s problem. How many programmers ever issue a DeviceIoControl
for a serial port? Or do they use Get/SetCommState, Get/SetCommTimeouts,
and so on? I wouldn’t have a clue as to how to create a DeviceIoControl for
a serial port, but a good percentage of my income over the last 20 years
came from writing applications that interfaced to devices that had serial
port interfaces. As a driver writer, the interface to the driver is *your*
responsibility. This means that what you present to your application
programmers is not necessarily ReadFile, WriteFile and/or DeviceIoControl.
It is what is needed to support the application.

One of the serious defects in thinking about applications is thinking that
they should do synchronous I/O from the main GUI thread. There are two
serious errors in thinking here: doing synchronous I/O as the only way of
life, and using only a single thread. Dedicating a high-priority thread to
handling I/O can often solve the problem entirely; using asynch I/O can
often solve the problem entirely. I have built solutions for
high-performance devices where I have used both techniques. Putting
internal buffering in the driver is another point in the solution space.

Ignoring the requirements of the application is simply stupid. The key here
is that the goal is to *solve a problem*, not *write a driver*. Writing the
driver is an implementation detail in achieving that goal. So it is
*impossible* to create a driver that does not accommodate the needs of the
application! (The alternative proposed here seems to say “The goals of the
project are irrelevant, as long as there is a software artifact that can
communicate with the device; never mind that it is unusable in the context
of the running application”)

Application architecture interacts with hardware design in subtle ways. All
too often we end up at the point where the designer has created something
that cannot be programmed in any real OS (Windows, Mac OS X, linux, Unix,
Solaris…) because the engineer simply didn’t understand that real software
does not respond the same way that a standalone MS-DOS system could
(seriously, I’ve been told, “All you need to do is reprogram the
counter/timer chip, and hook the interrupt, and if you are careful to not
have to do a stack switch or save all the registers it is *easy* to make the
timing window. Look here…” and they show a piece of code written in their
MS-DOS testbed machine where the application is fielding the interrupts!)
But it is essential that the driver and application be co-developed so the
application needs are met. It is better if the hardware, driver, and
application are co-developed. I once had to explain to a physicist that
sending a byte stream of floating-point values on the serial port from the
embedded controller was *not* an “interface”, and had to spec out a
packet-sending protocol with headers, byte counts, ack/nak, checksums, etc.
before I even started coding the app)

A driver that cannot accommodate an application has no reason to exist. In
software, we call this the “reality test”. Drivers are not abstract
entities that live in a vacuum; they live in an ecosystem which includes
applications, schedulers, threads, and potentially arbitrary delays. Each
development must take into account the hardware device, the interface, the
kernel features, and the user-mode features and create a solution that
end-to-end (hardware to application) meets the needs of the solution space.
To think that you can write a driver without understanding the hardware AND
the application strikes me as sheer folly.
joe


This is a very wise statement. Seriously. And it would likely only come from somebody who’s spent some time living on the application side of life, yet understands kernel-mode as well.

I could hold forth on the whole issue here about how read, write, and IOCTL have outlived their usefulness as the proper interface for an I/O subsystem… but I’ll restrain myself.

Peter
OSR

Joseph M. Newcomer

excellent post!! I really learned a lot by reading it :-)

regards
deep

> It is entirely appropriate, even praiseworthy, to design one’s driver for the convenience
> and ease of implementation of the user-mode program. I would suggest that to do
> otherwise is poor engineering practice.

So you reckon it is a good idea to write a driver that suits a particular app’s needs, instead of relying upon well-known and well-defined protocols of KM/UM communication that are meant to work with any application that supports them???

Why do you think a driver should be bothered about an app’s inability to process all data in time and
adjust its internal operations to a particular app’s needs (for example, queue data even if there are no outstanding requests at the moment)???

Anton Bassov

xxxxx@hotmail.com wrote:

> It is entirely appropriate, even praiseworthy, to design one’s driver for the convenience
> and ease of implementation of the user-mode program. I would suggest that to do
> otherwise is poor engineering practice.
>

> So you reckon it is a good idea to write a driver that suits a particular app’s needs, instead of relying upon well-known and well-defined protocols of KM/UM communication that are meant to work with any application that supports them???

Once again, you have commandeered the train of reasonable discourse and
piloted it straight off of the rails and over the cliff into Loonyville.

Kernel drivers do not exist to play quietly by themselves, and users do
not buy machines in order to run kernel drivers… A driver’s JOB is to
service user-mode clients, and they absolutely need to be subservient to
those clients. There are a vast number of devices for whom there simply
ARE no “well-known and well-defined protocols”. In that case, of course
every precaution should be taken to make sure that the user-mode
implementation is easy and convenient. I agree with Peter – doing
otherwise is poor engineering practice.

Make the interface as simple as possible, but certainly no simpler.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

> A driver’s JOB is to service user-mode clients, and they absolutely need to be subservient
> to those clients.

This is what the “well-known and well-defined protocols” (namely, the ReadFile()/WriteFile()/DeviceIoControl() system calls) are for…

> There are a vast number of devices for whom there simply ARE no “well-known
> and well-defined protocols”.

Well, if a driver does not support these “well-known and well-defined protocols” it is simply inaccessible by the clients - as simple as that…

> Make the interface as simple as possible, but certainly no simpler.

Look - I am not speaking about interfaces, am I…

What I am speaking about is adjusting driver operations to a particular client’s needs (for example, sharing a buffer; making a driver queue data because an app is unable to process it in time, etc.)…

In the context of this problem, would it not be better to make the app create a dedicated thread that deals solely with data retrieval from the driver??? If the app is unable to process data in time, don’t you think it is better to make the app deal with this problem on its own??? After all, the app should know better how to deal with it (queue data, or stop submitting requests to the driver for a while, or whatever it finds appropriate)…

Anton Bassov

> > There are a vast number of devices for whom there simply ARE no “well-known
> > and well-defined protocols”.
>
> Well, if a driver does not support these “well-known and well-defined protocols” it
> is simply inaccessible by the clients - as simple as that…

Tim said ‘device’; you said ‘driver.’ Big difference.

mm

> Once again, you have commandeered the train of reasonable discourse and piloted it straight
> off of the rails and over the cliff into Loonyville. Kernel drivers do not exist to play quietly
> by themselves, and users do not buy machines in order to run kernel drivers…

Actually, I am not sure which of us “piloted it straight off of the rails and over the cliff into Loonyville”…

Let’s take a look at a practical example. All the electric cables in your building are of no use on their own - they are meant to provide power to your electrical appliances. However, these appliances are supposed to support “well-known and well-defined protocols” (in this context, be designed for a particular voltage). Let’s say you have bought some appliance that expects a voltage of 127 V, but the standard voltage in your building is 220 V. What are you going to do??? You have two options:

  1. Buy a transformer/adapter and plug your non-standard device into it

  2. Call your electricity provider and ask them to change everything that is related to electricity in your building so that you will be able to use your non-standard device

Judging from what you and Peter are saying, you would rather go for the latter option…

Anton Bassov

You annoy me when you talk like this, Anton. It shows how little real-world experience you actually have.

So, we should NEVER buffer data in the driver then. Because, every operation by the user should be paired with one by the device? Wow, that’d make the keyboard driver pretty hard to use.

In fact, by your logic, we shouldn’t ever put a FIFO in a device. There’s a register that returns the data. If your driver can’t get it in time, too bad?

The OP has control of both the driver-side and the app-side. There’s absolutely no reason why the driver shouldn’t be changed to accommodate what’s pleasant, easy, and reasonable from the application.

It doesn’t hurt the design, and it doesn’t reduce the performance (much).

There’s nothing wrong in that, and all kinds of right as far as I see.

Peter
OSR

> You annoy me when you talk like this, Anton. It shows how little real-world experience
> you actually have.

Well, I DO know that you get annoyed with any opinion on any subject that you don’t share, so that you immediately start throwing in “ad hominem” arguments. However, it does not necessarily imply that the opinion is wrong, does it…

> So, we should NEVER buffer data in the driver then. Because, every operation by the user
> should be paired with one by the device?

Oops, as I can see, the “ad hominem” argument is just a part of your arsenal of logical fallacies that
you rely upon in this discussion - this one is a classical example of yet another logical fallacy known as the “straw man” argument…

Certainly as long as buffering data fits into driver’s logic, it is perfectly fine - who would even argue about that…

> Wow, that’d make the keyboard driver pretty hard to use.

And not only a kbd…

A NIC is yet another example of a device where buffering is perfectly reasonable…

> The OP has control of both the driver-side and the app-side. There’s absolutely no reason why
> the driver shouldn’t be changed to accommodate what’s pleasant, easy, and reasonable
> from the application.

You seem to be quite impressed by Joe’s post. Funny enough, you somehow overlooked his statement about multithreading and asynch I/O - indeed, if an app makes proper use of these techniques
the problem that the OP speaks about just would not arise, and this is why I was speaking about a dedicated app thread in my reply to Tim.

What you propose is to change the driver design in order to meet the needs of an app which is, apparently, just unreasonably designed in itself. Apparently, it just processes the whole thing synchronously in the context of a single thread, so it is no wonder it is unable to catch up with the data production rate…

This is the only reason why I came up with the example of buying an adapter vs. changing the whole electrical installation - instead of fixing the app you propose to change the driver’s code in order to make it suitable for an unreasonably designed application…

Anton Bassov


Have we reached the point where this particular branch of this
discussion belongs on the other list? If not, I’m sure it will be
reached soon :-)

James