> We really need to understand DPC latency in depth, since hardware
> engineers usually don’t account for the worst-case scenario, and we are
> getting a lot of hardware design requests lately that would previously
> have gone to a real-time OS. I hope Microsoft will let you publish the
> article in The NT Insider.
I think you’ll have to REALLY consider what you mean by “DPC latency” in
this case. The WORST case with a selected set of drivers can be empirically
tested, but if anyone can install any driver after your hardware has been
installed, you’re looking at an “infinite” worst case (OK, if someone
starts doing 2000-millisecond delay loops in a DPC, some users may start
thinking about getting another driver or another computer, but it’s not
impossible for someone to accomplish this inside a DPC, and there certainly
ain’t anything inside Windows that prevents it from happening). Naturally,
your system cannot be expected to work with “infinite” DPC latency, so you
must somewhat limit the scope within which your hardware can be expected to
work with “any system”.
I think there are several ways to solve this, but ultimately you will have
to rely on other drivers being “reasonable” in their use of DPCs. Otherwise,
you will have to have something like 400 KB/s * 8 channels * 20 seconds =>
64000 KB of memory on your device (and maybe even that won’t be enough).
This obviously starts becoming rather silly (unless you’re building a
graphics card, in which case I’d say that 64 MB is the very minimum you
should consider).
The approach of having large host memory buffers (i.e. using system RAM) is
a good one. It shouldn’t be too difficult to design a chip that uses PCI
(or PCI Express) to access host memory and store/load data over that path.
I’m pretty sure there is plenty of documentation available on how to do
that, although I only have access to our internal documentation, which
would get me into serious trouble if I handed it out. Using host memory
rather than adding memory to the board itself gives you the freedom to
expand the memory size at will, without any big effort at all: just change
some #define in the driver code, and you suddenly have 2, 4, 8, 13 or 42
times more memory available… Even better, use multiple buffers, so you can
fill one buffer while another one is being used. If each channel has 3 or
more buffers, you can have one that is currently in use, one “in queue” and
one being filled/emptied. This isn’t TRIVIAL, but it’s not rocket science
either. All graphics, Ethernet, USB, IEEE-1394, (modern) IDE and SCSI
controllers already implement some form of this method, with slight
variations depending on exactly what that type of card requires.
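The three-buffers-per-channel rotation described above can be sketched in
plain C. This is a user-mode illustration, not driver code; the struct and
function names are made up for the example:

```c
#include <stddef.h>

/* Per-channel ring of buffers. At any moment one buffer is owned by the
 * hardware ("in use"), one is queued behind it, and one is being
 * filled/emptied by the driver. */
#define BUFS_PER_CHANNEL 3

struct channel {
    unsigned char *buf[BUFS_PER_CHANNEL];
    size_t in_use;  /* index currently owned by the device  */
    size_t queued;  /* next buffer the device will take     */
    size_t filling; /* buffer the driver is writing/reading */
};

/* Called when the device completes the "in use" buffer (in a real driver,
 * typically from the DPC): hand the queued buffer to the hardware, queue
 * the freshly filled one, and recycle the finished slot for filling. */
void channel_advance(struct channel *ch)
{
    ch->in_use  = ch->queued;
    ch->queued  = ch->filling;
    ch->filling = (ch->filling + 1) % BUFS_PER_CHANNEL;
}
```

After three advances each buffer has cycled through all three roles, which
is what lets the device keep streaming while the driver absorbs DPC delays.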
Doing 3200 KB/s should be absolutely no problem. A PCI bus handles
something like 20-30 MB/s without taking any drastic measures…
–
Mats
Hakim
“Peter Viscarola (OSR)” wrote in message news:xxxxx@ntdev…
> > Maxim S. Shatskih wrote:
> >>
> >> BTW - DPCs are not fired exactly on IRQL drop to < DISPATCH - this is
> >> true only on an idle CPU. Otherwise, they are sometimes delayed till
> >> the next timer tick, and the kernel parameter IdealDpcRate in the
> >> registry governs this. Try playing with it.
> >>
> >>
> >
> > It’s even more complicated than that, right? Consider the issue of
> > whether or not the DPC is to a local or remote processor, the DPC
> > Importance, etc., etc. It’d take a long article to describe how DPCs
> > really work.
> >
> > Hmmmm… That’s a good idea for an article in The NT Insider. Assuming we
> > can get somebody from Microsoft to agree the information isn’t
> > confidential.
> >
> > Now, having said all that, it’s been my experience that all the
> > esoteric DPC tuning parameters are FAR overshadowed by the behavior of
> > the drivers in the system. One misbehaving driver can screw up ANY
> > assumptions that you want to make.
> >
> > Remember, the problem you’re trying to quantify here isn’t average
> > ISR-to-DPC latency, it’s the WORST CASE ISR-to-DPC latency. As Don Burn
> > said, there are drivers that spend 1000ms in their DPCs. I’ve
> > personally written a driver that spent 300ms in its DPCs (don’t try
> > this at home, but this was a special case… the device in question was
> > being benchmarked by a publication – the actual SYSTEM it ran on didn’t
> > have to be usable).
> >
> > I’d suggest that unless you put some specific bounds on the problem,
> > there’s no reasonable answer that’ll be correct. Mark Roddy’s
> > suggestions are a good starting point for specifying these bounds. If
> > you can make some reasonable assumptions, allowing some number of tens
> > of milliseconds will typically make your device happy.
> >
> > Of course, you also must consider the consequence of a failure, right?
> > If you’ve gotta be right or the nuclear reactor melts down, you’re
> > probably going to be more conservative in your specifications.
> >
> > Peter
> > OSR
> >
>
>
>
> —
> Questions? First check the Kernel Driver FAQ at
> http://www.osronline.com/article.cfm?id=256
>