> My rule is make it two levels deep at most. If for any weird reason it
> has to go 3-4 levels deep, it MUST fit in one screen. I guess I’m not a
> very contextual person.
Calvin
Note to readers: much of this is written for the OP, who is a student. So
don’t complain that I’m covering the obvious. Experienced driver writers
can save time by skipping to the next message…now!
Very few people are good at dealing with nesting. There are three
things that cognitive psychology research has proven humans are not very
good at: managing large amounts of state; handling nested contexts; and
reasoning about concurrency. What do programmers do? Yep. We have
trained ourselves to deal with all three things that cognitive psychology
tells us are hard problems!
Mostly, we handle it by avoidance. Can’t reason well about large amounts
of state? Put the required state in a struct and manage it with a small
number of functions. This leads to languages like C++ where the compiler
rigorously enforces the “rules in our head”. Can’t manage dynamic
nesting? Put the necessary state in a struct and pass it by pointer;
this leads to avoiding global variables, and again to languages like C++.
Can’t handle static nesting? Don’t nest. For example, I do a lot of
“state management” by using decision tables. So my comments might be
something like
Name   A   B   C   Action
a.     T   T   T   paint bits purple
b.     T   T   F   paint bits green
…
Now, a newbie might be tempted to write this as:
if(A)
    if(B)
        if(C)
            purple();
        else
            green();
(note the lack of {}, which can also lead to problems)
For all eight cases, the decoding ends up “optimized” into a set of
nested statements.
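To make that concrete, here is a self-contained sketch of what the fully
nested decode of all eight rows might look like in C. The actions for the
rows the table elides with “…” are invented purely for illustration:

#include <stdbool.h>
#include <stdio.h>

/* Stub actions, invented for illustration; the table above only shows
   rows a. and b. (and row e. later), so the rest are made up here. */
static void paint_purple(void)      { puts("purple"); }
static void paint_green(void)       { puts("green"); }
static void report_impossible(void) { puts("impossible condition"); }

/* The fully nested, "optimized" decode of all eight decision-table rows. */
static void decide_nested(bool A, bool B, bool C)
{
    if (A) {
        if (B) {
            if (C) paint_purple();          /* a. */
            else   paint_green();           /* b. */
        } else {
            if (C) paint_green();           /* c. */
            else   paint_purple();          /* d. */
        }
    } else {
        if (B) {
            if (C) report_impossible();     /* e. */
            else   paint_green();           /* f. */
        } else {
            if (C) paint_purple();          /* g. */
            else   report_impossible();     /* h. */
        }
    }
}

int main(void)
{
    decide_nested(true, true, true);    /* row a.: purple */
    decide_nested(false, true, true);   /* row e.: impossible */
    return 0;
}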
The problem is that the code will be maintained by unskilled labor. This
means either the new hire or the original author six months later.
Maintenance involves taking the complex nested contexts and
reverse-engineering the decision table from them, which may or may not be
done correctly. Then, a new set of constraints may require massive
refactoring of the code.
So instead, I write
if(A && B && C)
    { /* a. */
    purple();
    } /* a. */
else
if(A && B && !C)
    { /* b. */
    green();
    } /* b. */
else
…
if(!A && B && C)
    { /* e. */
    … report impossible condition
    } /* e. */
…
That is, I make no attempt to “optimize” the decoding. And the code to
take an action is typically a single function call, so I don’t have to
copy-and-paste code if two states require the same response; I just call
the function again. Note that although I show () in the call, in C I’m
usually passing down some kind of context.
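For comparison with the nested sketch above, here is a self-contained
sketch of the same eight rows decoded flat, one test per table row, with
an invented context struct passed down to each action the way a real
context would be passed in C (again, the names and the actions for the
elided rows are made up for illustration):

#include <stdbool.h>
#include <stdio.h>

/* Invented context struct; in a real driver this would carry whatever
   state the actions need. */
typedef struct context {
    const char *name;
} CONTEXT;

/* Stub actions; each row's response is a single call, so two rows that
   need the same response simply call the same function. */
static void paint_purple(CONTEXT *ctx)      { printf("%s: purple\n", ctx->name); }
static void paint_green(CONTEXT *ctx)       { printf("%s: green\n", ctx->name); }
static void report_impossible(CONTEXT *ctx) { printf("%s: impossible condition\n", ctx->name); }

/* One test per decision-table row, in table order; no attempt to
   "optimize" the decoding. */
static void decide_flat(bool A, bool B, bool C, CONTEXT *ctx)
{
    if      ( A &&  B &&  C) { paint_purple(ctx);      } /* a. */
    else if ( A &&  B && !C) { paint_green(ctx);       } /* b. */
    else if ( A && !B &&  C) { paint_green(ctx);       } /* c. */
    else if ( A && !B && !C) { paint_purple(ctx);      } /* d. */
    else if (!A &&  B &&  C) { report_impossible(ctx); } /* e. */
    else if (!A &&  B && !C) { paint_green(ctx);       } /* f. */
    else if (!A && !B &&  C) { paint_purple(ctx);      } /* g. */
    else                     { report_impossible(ctx); } /* h. */
}

int main(void)
{
    CONTEXT ctx = { "demo" };
    decide_flat(true, true, true, &ctx);   /* row a.: purple */
    decide_flat(true, true, false, &ctx);  /* row b.: green */
    return 0;
}

The point of the flat form is that a maintainer can read each row of the
decision table straight off a single line, instead of reverse-engineering
it from nested contexts.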
Now, an amateur instructor might point out that my code is larger than
required, and “less efficient” because it involves evaluating A, B, and C
multiple times (they might not be simple variables, but more like “p !=
NULL” or something like that). I have several responses to that, which
include, but are not limited to:
- function calls are free
- optimizing compilers can detect common subexpressions and collapse
  similar code sequences
- evaluating simple predicates on a modern architecture with caches,
  pipelines, and all the other bells & whistles is essentially free
- code size is irrelevant unless you can demonstrate that the
  “optimization” affects the entire driver size to a significant extent
- delivering a buggy product is extremely expensive
- maintenance (including feature enhancement) is somewhere between
  expensive and extremely expensive
- introducing bugs via maintenance which are not caught until
  post-deployment is extremely expensive
- finding and fixing pre-deployment bugs is expensive
So “optimizing” code “by hand” often wastes a resource that is both
valuable and expensive (programmer time), and pushes these costs
“downstream” to later debugging and enhancement
costs. Such pre-optimizations can significantly add to the development
time, which can affect time-to-market and consequently
return-on-investment or even market share.
Optimizing code in this fashion typically saves a small percentage of time
and, in the big picture of overall driver size, unmeasurably small amounts
of space, but has high costs where it critically matters: development
costs and post-deployment maintenance/enhancement costs. Not worth it.
Finally, concurrency. Even most experts suck at reasoning about
concurrency. The simplest solution is to avoid it entirely when possible.
For example, if two threads act upon disjoint data, there is no need to
reason about their concurrency. Synchronization is where two threads rub,
and like most mechanical systems, all this does is generate heat and waste
energy. It is easier to avoid concurrency in user space; down in the
kernel, it is an always-in-your-face reality. So great discipline is
involved in dealing with concurrency, and the simpler you can keep it the
happier you will be. The
acquire-spinlock-make-a-small-number-of-changes-release-spinlock model
generally works pretty well, but if the spinlock becomes highly contended
in a multicore environment, a queued spinlock may be a better choice. The rule
“lock the smallest amount of data for the shortest possible time” is a
good rule to use as much as possible, along with “never set more than one
lock at a time” as a good rule for deadlock avoidance. The rule “when
using more than one lock at a time, lock them in a canonical order” is
mandatory, but often hard to do and very, very difficult to support in
maintenance. Avoidance is the best strategy here: don’t do a design that
requires nested locking (though avoiding it is not always possible in the
kernel).
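When nested locking truly cannot be avoided, a sketch of what canonical
ordering might look like follows; the ACCOUNT structure and the choice of
lock address as the canonical order are purely illustrative:

#include <ntddk.h>

/* Invented structure for illustration; each instance has its own lock,
   assumed to have been initialized with KeInitializeSpinLock. */
typedef struct _ACCOUNT {
    KSPIN_LOCK Lock;
    LONG       Balance;
} ACCOUNT, *PACCOUNT;

/* Canonical order: always acquire the lock at the lower address first,
   so every path takes the two locks in the same order and two concurrent
   transfers cannot deadlock against each other.  Assumes From != To. */
VOID TransferFunds(PACCOUNT From, PACCOUNT To, LONG Amount)
{
    KIRQL    oldIrql;
    PACCOUNT first  = ((ULONG_PTR)From < (ULONG_PTR)To) ? From : To;
    PACCOUNT second = ((ULONG_PTR)From < (ULONG_PTR)To) ? To   : From;

    KeAcquireSpinLock(&first->Lock, &oldIrql);      /* raises to DISPATCH_LEVEL */
    KeAcquireSpinLockAtDpcLevel(&second->Lock);     /* already at DISPATCH_LEVEL */

    /* Lock the smallest amount of data for the shortest possible time. */
    From->Balance -= Amount;
    To->Balance   += Amount;

    KeReleaseSpinLockFromDpcLevel(&second->Lock);
    KeReleaseSpinLock(&first->Lock, oldIrql);
}

The address comparison is not the point; any total order will do, but
every code path has to honor it, which is exactly why this rule is so
hard to support in maintenance.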
Also note that, in general, single aligned scalar variables are read and
written atomically by the hardware. Thus the following code is just silly:
BOOLEAN busy;  // in a THING struct, for example, and guaranteed to be DWORD-aligned

KeAcquireSpinLock(&thing->busylock, &oldirql);
thing->busy = FALSE;
KeReleaseSpinLock(&thing->busylock, oldirql);
Also, “monotonically mutable” variables, those that start out in one state
and can be set to only one other state, and once changed are not set back
to the initial state, require no synchronization. A ref-count, for
example, is not monotonically mutable, because it can take on many
states. A BOOLEAN can be monotonically mutable, or you may rely on
hardware atomicity. In fact, the real rule is this: if, for a given set
of values, it is impossible to violate the invariants of that set no
matter how many non-atomic actions are performed on it, and in what
order, then no synchronization is required. The number of times this is
actually true is small enough to be indistinguishable from zero. (There
is an unproven conjecture that if you believe you have such a set of
values, you are mistaken.)
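As an illustration of the distinction (the names here are invented, not
taken from any real driver): a shutdown flag that only ever goes from
FALSE to TRUE needs no lock, while the ref-count next to it needs
interlocked operations:

#include <ntddk.h>

/* Hypothetical device extension, for illustration only. */
typedef struct _DEVICE_EXTENSION {
    volatile BOOLEAN ShuttingDown;  /* aligned scalar; only ever set FALSE -> TRUE */
    volatile LONG    RefCount;      /* takes many values; NOT monotonically mutable */
} DEVICE_EXTENSION, *PDEVICE_EXTENSION;

/* Monotonically mutable: once TRUE it never goes back, so a plain
   aligned store is enough and no lock is needed. */
VOID BeginShutdown(PDEVICE_EXTENSION ext)
{
    ext->ShuttingDown = TRUE;
}

/* Readers may observe the change a little late, but they can never see
   a torn or invalid value. */
BOOLEAN IsShuttingDown(PDEVICE_EXTENSION ext)
{
    return ext->ShuttingDown;
}

/* The ref-count, by contrast, must use interlocked operations, because
   increments and decrements are read-modify-write, not single stores. */
VOID AddRef(PDEVICE_EXTENSION ext)
{
    InterlockedIncrement(&ext->RefCount);
}

LONG Release(PDEVICE_EXTENSION ext)
{
    return InterlockedDecrement(&ext->RefCount);
}

Even here, any decision a reader makes after seeing the flag as FALSE can
be stale a moment later, which is part of why believing you have such a
set of values usually means you are mistaken.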
joe
>
> As a style issue, my own taste is to never nest conditionals more than
> one deep, which is often impractical, so I am willing to go to two
> levels.
>