This is what I would call a shortcut for error handling. The long-hand
version would be something like
#ifdef _DEBUG_OR_SOME_OTHER_DEF
    if(condition)
    {
        RaiseErrorInSomeWayThatMakesSenseForThisEnvironment(message);
    }
#endif
or
    if(condition)
    {
#ifdef _DEBUG_OR_SOME_OTHER_DEF
        RaiseErrorInSomeWayThatMakesSenseForThisEnvironment(message);
#else
        GracefulFailureOfSomeKind();
#endif
    }
And, as you say, they are ideal for verifying assumptions during testing.
They are not an error-handling method and, if used as such, can be deadly;
especially in environments where exceptions are deadly.
In my opinion, error handling is one of the most difficult aspects of
programming for learners and an area where there is wide disagreement on
what constitutes good practice. Personally, for C code, I prefer the goto
‘ladder’ approach for control flow through functions with multiple failure
points, as it provides a clear egress path with no duplication of code (see
example below), but others will prefer nested if statements, stack frames or
one of several other styles. The important point is that, as Joe says, they
can fail gracefully with rollback after detecting a problem. What I object
to is the absolute terms in which he advocates this without considering what
should be done in a case like the second example
bool InitSomeStruct(SOME_STRUCT* pSS)
{
    pSS->pBuffer1 = AllocateSomeBuffer();
    if(pSS->pBuffer1 == NULL)
    {
        goto abort1;
    }
    pSS->pBuffer2 = AllocateSomeBuffer();
    if(pSS->pBuffer2 == NULL)
    {
        goto abort2;
    }
    return true;
//abort3:
//    FreeSomeBuffer(pSS->pBuffer2);
abort2:
    FreeSomeBuffer(pSS->pBuffer1);
abort1:
    return false;
}
void ProcessSomeRequest(REQUEST_DESCRIPTOR* pRequest)
{
    // a request has been received from a user and is described by pRequest
    if(!ImpersonateRequestUser(pRequest))
    {
        // send back some error
        return;
    }
    TRY
    {
        DoSomethingToProcessTheRequest(pRequest);
    }
    FINALLY
    {
        if(!RevertToSelf())
        {
            // Now what? This call should never fail, but since it has, we
            // can conclude that either:
            // 1) there is memory corruption
            // 2) some code inside DoSomethingToProcessTheRequest has
            //    allowed a hacker to call RevertToSelf already
            // 3) the host OS is broken in some other way
            // It is not safe to continue executing in this unknown security
            // context because we can't know what the thread will do next,
            // and it is not safe in general to simply exit the thread
            // because we know that the memory space has been compromised in
            // some way.
            // In UM, attempt ExitProcess; in KM, KeBugCheck; in an embedded
            // system, raise whatever panic signal the environment defines
            // and let the hardware reboot. This is an unrecoverable error
            // and an appropriate time to ABEND.
        }
    }
}
“Scott Noone” wrote in message news:xxxxx@ntdev…
“m” wrote in message news:xxxxx@ntdev…
While I agree that ASSERT is at best a shortcut for proper error-handling
code and at worst something quite deadly…
Sorry, don’t agree with this at all (nor do I agree with Max’s, er,
assertion that “ASSERT is evil”). For me, ASSERTs are great provided that
they’re used as they are intended to be used. They can make the assumed
environment of a particular routine explicit to future maintainers of the
code (including the original author), which makes things that much more
future proof.
For example, say I have an I/O event processing callback that does some
validation of the incoming buffer then calls a helper routine:
{
    if (BufferLen < MIN_BUFFER_LEN) {
        // fail request
        return;
    }
    DoStuff(Buffer, BufferLen);
}
Then, in DoStuff I ASSERT that the buffer passed in meets the minimum
requirements:
DoStuff
{
    ASSERT(BufferLenParam >= MIN_BUFFER_LEN);
}
This helps me in two ways. First, if DoStuff grows to DoMoreStuff and gets
called from multiple places, it’s clear that the code was originally written
with a restriction on the incoming buffer size. Second, if I’m doing a code
review I get some quick insight into the runtime environment of this
function without having to track down every reference to it (which brings up
the issue of incorrect ASSERTs, but that’s a different problem).
Or how about a helper function written with the assumption that a lock is
held when it’s called? Why would ASSERTing that the appropriate lock is held
be evil? Or ASSERTing the IRQL restrictions on a particular routine?
Admittedly, more and more of this can be done with the SAL notations, though
I find ASSERTs to be clear, easy, and useful. Of course if you have someone
using ASSERTs as their only method of error handling then you’re doomed,
though that should be dealt with through your coding guidelines and not pass
any reasonable code review.
-scott
–
Scott Noone
Consulting Associate and Chief System Problem Analyst
OSR Open Systems Resources, Inc.
http://www.osronline.com
“m” wrote in message news:xxxxx@ntdev…
While I agree that ASSERT is at best a shortcut for proper error-handling
code and at worst something quite deadly, the statement
‘The only possible way a program is allowed to exit is if the user requests
it to terminate; no mechanism that terminates execution is permissible.’
I can’t quite agree with. Certainly, this is quite correct when building a
single threaded DOS application, but many systems and programming paradigms
have the concept of panic or master alarm and in some cases ABEND is the
only sane course of action. We see posts from many who want to continue
after memory corruption, or unhandled KM exceptions and we try our best to
dissuade them because their task is impossible. Sometimes, the same is true
in a UM app too and there is no possible way to continue safely. Failure of
RevertToSelf or HeapUnlock is one example, and another would be corrupted
state in some multi-threaded designs. You could argue that designs and APIs
that can result in unrecoverable failures ought not to be used, and
certainly minimizing the number of unrecoverable failure paths in a design
is a worthwhile objective, but some activities simply require designs that
can have unrecoverable failures.
wrote in message news:xxxxx@ntdev…
I just checked the documentation. In kernel mode, if a debugger is
attached, a breakpoint is taken, but there is no suggestion that any
exception is taken.
I consider calls like this which terminate execution to be designed by
irresponsible children. ASSERT is used only during development and is
never part of a deliverable product. And the presumption that my program
can terminate execution safely at random points in time has no foundation;
in a well-designed world, control returns to me and I can either continue
and get a termination, or continue with recovery. If I have coded recovery,
an ASSERT macro that terminates execution is a complete disaster.
There are few things more amateurish than seeing code of the form
    ASSERT(p != NULL);
    int n = *p;
I always write
    ASSERT(p != NULL);
    if(p == NULL)
        recover
“recover” might be
    return FALSE;
    throw new CInternalError(__FILE__, __LINE__);
as typical examples. The exception is caught, the error is logged, the
transaction is aborted, modified state is rendered consistent (rollback)
and the program continues to run. The only possible way a program is
allowed to exit is if the user requests it to terminate; no mechanism that
terminates execution is permissible.
I remember the amateurish code of cdb, the Berkeley Unix debugger. At the
slightest error, it did exit(1). So I’m an hour into the debug session,
I’ve finally seen the conditions that trigger the bug, I’m trying to
figure out how the values got that way, I ask for a stack backtrace.
Boom! I’m looking at a nearly blank screen which is showing the shell
prompt. The debugger exited, terminating my debug session!
We had our local expert work on fixing this. He fixed over 200 bugs that
led to these conditions, and turned the exit() calls into longjmps (OK,
this was C in 1982) so the debugger would not exit. The next day we
received a new cdb distribution tape (FTP? Tapes were faster!) which
claimed to have fixed over 200 bugs. So our programmer, in great dismay
that he’d wasted a week, diffed the sources. He announced the next day
that the overlap (intersection) of the bug fixes was [drum roll] 3! And
they still exited the debugger if anything seemed wrong.
I remember arguing with one programmer about putting a BugCheckEx call in
a driver. He thought that if the user app sent down a bad IOCTL code,
this was a valid response. So I asked him, “Suppose you have a guest in
your house. He discovers that there is an insufficiency of toilet paper.
What do YOU think the correct recovery should be: (a) look for a new roll
on the back of the toilet (b) burn down your house?” I pointed out to him
that his driver was a guest in the OS, and since, in fact, it was not a
file system driver, any errors represented (a) hardware failures, in which
case he should recover gracefully (b) driver coding errors, in which case
he should recover gracefully or (c) user errors, in which case he should
recover gracefully. The only time you are permitted to burn the house
down is if you find your host is a mad scientist who will, very shortly,
release a highly-contagious variant of pneumonic plague on the world, that
he created in his basement lab. Or your file system driver detects some
impossible state which can only mean that further attempts to use it would
cause even more damage. But crashing the system because the app sent a
bad IOCTL to a DAC device was not an appropriate response.
He appeared to be unconvinced.
joe
xxxxx@flounder.com wrote:
> Asserts do not stop execution, even if they fail; they only print out
> messages. And that only in debug builds.
I would have argued the reverse. Asserts (in both user and kernel code)
trigger a debug breakpoint. If a debugger is attached, execution is
stopped because the debugger fires up. If a debugger is not attached,
execution is stopped by an uncaught exception (in user mode) or a
bugcheck (in kernel mode).
–
Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.
NTDEV is sponsored by OSR
For our schedule of WDF, WDM, debugging and other seminars visit:
http://www.osr.com/seminars
To unsubscribe, visit the List Server section of OSR Online at
http://www.osronline.com/page.cfm?name=ListServer