A basic question regarding IoCompletion

> Scott, I absolutely agree with you. How many times has someone called a
> function with a NULL pointer, even though NULL is not a valid value?

For me - never.

Before passing a NULL pointer, I will check the docs/source code to investigate whether it is valid.

By default, pointers cannot be NULL. Just plain and simple.

More so - ASSERT is only for the checked build. Lots of people almost never use checked builds :)

> ASSERTs really do help the documentation process

Comment before the function declaration is by far more helpful. The use of IN, OUT, OPTIONAL keywords is also a good idea.


Maxim S. Shatskih
Windows DDK MVP
xxxxx@storagecraft.com
http://www.storagecraft.com

Scott Noone wrote:

> For me, ASSERTs are great provided that they’re used as they are
> intended to be used. They can make the assumed environment of a
> particular routine explicit to future maintainers of the code
> (including the original author), which makes things that much more
> future proof.

I completely agree and I will go a little further. An ASSERT does two
things:

  1. As Scott stated, an ASSERT helps document (in code) invariants
    for a function (or macro).

  2. In addition, an ASSERT can prevent further execution (by exiting or
    bugchecking) when the invariant is violated. This is important
    because, if the invariant is violated, you may have no guarantees
    how any further code execution will behave (both inside and after
    you exit the function in question). Thus exiting/bugchecking in
    an ASSERT macro can be very important to prevent data loss/corruption.

You can also have a couple of different ASSERT macros so that you can
control whether to exit/bugcheck depending on how bad the invariant
violation is. For example, you might have an ASSERT that only
exits/bugchecks on a debug/checked build and a RETAIL_ASSERT that always
exits/bugchecks because further execution could lead to data loss/corruption
and you want to prevent that even on a retail build (in case someone made a
mistake).

Now, this all being said, ASSERTs should never be used to do validation of
inputs that are not invariant checks. For instance, if I have a top-level
API that I expose externally, I should return STATUS_INVALID_PARAMETER (or
equivalent in another error space) on bad input. However, if I have an
internal function where I am in complete control of how it gets called, an
ASSERT on bad internal input is appropriate as it would be a bug in *my*
code.

IIRC, this is generally how I code ASSERT/RETAIL_ASSERT macros:

#if … // debug or checked build
#define ASSERT(Expression) \
    ((Expression) ? 1 : \
     AssertFailed(#Expression, __FUNCTION__, __FILE__, __LINE__))
#elif … // retail build w/logging for failed asserts
#define ASSERT(Expression) \
    ((Expression) ? 1 : \
     LogAssertFailed(#Expression, __FUNCTION__, __FILE__, __LINE__))
#else // this is a retail build where I do not want any overhead
#define ASSERT(Expression) 1
#endif

// RETAIL_ASSERT is always on.
#define RETAIL_ASSERT(Expression) \
    ((Expression) ? 1 : \
     AssertFailed(#Expression, __FUNCTION__, __FILE__, __LINE__))

where:
AssertFailed logs the failure, stops execution, and returns 0
LogAssertFailed logs the failure and returns 0

Then you can do stuff like:

if (!ASSERT(Expression))
{
    // We use the ASSERT macro here because this invariant violation
    // is not too bad (will not corrupt data) and we can return
    // a reasonable error from this function (due to how this
    // function interface is specified). In a retail build w/o
    // ASSERT logging, the ASSERT macro is a no-op and this code
    // block is never executed. The code block can only be executed
    // in debug/checked builds or in retail builds with ASSERT
    // logging enabled.
    …
}

  • Danilo

How does one write code to gracefully fail in a known corrupt environment?
Raising an exception in the second case is a possible security hole if
nothing worse; if there is no other information about what in the
environment might still be good, then I fail to see how graceful recovery is
either desirable or correct. But that may be due to my limited intelligence,
and I wait with bated breath to hear the oracle’s pronouncement.

PS I was foolish enough at one time to think that this was a forum for
discussion based on technical merit, but now I have learned better

wrote in message news:xxxxx@ntdev…

Not quite. The correct solution in the second case is to do the graceful
failure, no matter what, and then, if there is an error, raise the
condition. C exceptions are not as elegant or friendly as C++ exceptions,
but they are not an unreasonable way to handle this situation, providing
you make sure that all callers have try/except frames that will get laid
down (it might be the caller of the caller of your caller, but each stage
that has to rollback state needs one).

I did a kernel component in 1975 that went from an MTBF of 45 minutes
(doing the equivalent of BugCheckEx) to an MTBF which was indefinite (we
ran for six solid weeks, 24/7, zero downtime, until we had a campus-wide
power failure). About once a day, the exception condition for total
failure was raised, and the recovery code came into play. The recovery
code was almost as complex as the kernel component itself.
joe

This is what I would call a shortcut for error handling. The long hand
would be something like

#ifdef _DEBUG_OR_SOME_OTHER_DEF
if (condition)
{
    RaiseErrorInSomeWayThatMakesSenseForThisEnvironment(message);
}
#endif

or

if (condition)
{
#ifdef _DEBUG_OR_SOME_OTHER_DEF
    RaiseErrorInSomeWayThatMakesSenseForThisEnvironment(message);
#else
    GracefulFailureOfSomeKind();
#endif
}

And as you say, they are ideal for verifying assumptions during testing. They
are not an error handling method, and if used as such they can be deadly,
especially in environments where exceptions are deadly.

In my opinion, error handling is one of the most difficult aspects of
programming for learners and an area where there is wide disagreement on
what constitutes good practice. Personally, for C code, I prefer the goto
‘ladder’ approach for control flow through functions with multiple failure
points, as it provides a clear egress path with no duplication of code (see
example below), but others will prefer nested if statements, stack frames or
one of several other styles. The important point is that, as Joe says, they
can fail gracefully with rollback after detecting a problem. What I object
to is the absolute terms in which he advocates this without considering what
should be done in a case like the second example.

bool InitSomeStruct(SOME_STRUCT* pSS)
{
    pSS->pBuffer1 = AllocateSomeBuffer();
    if (pSS->pBuffer1 == NULL)
    {
        goto abort1;
    }

    pSS->pBuffer2 = AllocateSomeBuffer();
    if (pSS->pBuffer2 == NULL)
    {
        goto abort2;
    }
    return true;

//abort3:
//    FreeSomeBuffer(pSS->pBuffer2);
abort2:
    FreeSomeBuffer(pSS->pBuffer1);
abort1:
    return false;
}

void ProcessSomeRequest(REQUEST_DESCRIPTOR* pRequest)
{
    // a request has been received from a user and is described by pRequest
    if (!ImpersonateRequestUser(pRequest))
    {
        // send back some error
        return;
    }
    TRY
    {
        DoSomethingToProcessTheRequest(pRequest);
    }
    FINALLY
    {
        if (!RevertToSelf())
        {
            // Now what? This call should never fail, but since it has, we can
            // conclude that either:
            // 1) there is memory corruption
            // 2) some code inside DoSomethingToProcessTheRequest has allowed
            //    a hacker to call RevertToSelf already
            // 3) the host OS is broken in some other way
            // It is not safe to continue executing in this unknown security
            // context because we can't know what the thread will do next, and
            // it is not safe in general to simply exit the thread because we
            // know that the memory space has been compromised in some way.
            // In UM, attempt ExitProcess; in KM, KeBugCheck; in an embedded
            // system, raise whatever panic signal the environment defines and
            // let the hardware reboot. This is an unrecoverable error and an
            // appropriate time to ABEND.
        }
    }
}

“Scott Noone” wrote in message news:xxxxx@ntdev…

“m” wrote in message news:xxxxx@ntdev…
>While I agree that ASSERT is at best a shortcut for proper error handling
>code and at worst something quite deadly…

Sorry, don’t agree with this at all (nor do I agree with Max’s, er,
assertion, that “ASSERT is evil”). For me, ASSERTs are great provided that
they’re used as they are intended to be used. They can make the assumed
environment of a particular routine explicit to future maintainers of the
code (including the original author), which makes things that much more
future proof.

For example, say I have an I/O event processing callback that does some
validation of the incoming buffer then calls a helper routine:

{
    if (BufferLen < MIN_BUFFER_LEN) {
        // fail request
        return;
    }

    DoStuff(Buffer, BufferLen);
}

Then, in DoStuff I ASSERT that the buffer passed in meets the minimum
requirements:

DoStuff
{
    ASSERT(BufferLenParam >= MIN_BUFFER_LEN);
}
}

This helps me in two ways. First, if DoStuff grows to DoMoreStuff and gets
called from multiple places, it’s clear that the code was originally written
with a restriction on the incoming buffer size. Second, if I’m doing a code
review I get some quick insight into the runtime environment of this
function without having to track down every reference to it (which brings up
the issue of incorrect ASSERTs, but that’s a different problem).

Or how about a helper function written with the assumption that a lock is
held when it’s called? Why would ASSERTing that the appropriate lock is
held
be evil? Or ASSERTing the IRQL restrictions on a particular routine?

Admittedly, more and more of this can be done with the SAL notations, though
I find ASSERTs to be clear, easy, and useful. Of course if you have someone
using ASSERTs as their only method of error handling then you’re doomed,
though that should be dealt with through your coding guidelines and not pass
any reasonable code review.

-scott


Scott Noone
Consulting Associate and Chief System Problem Analyst
OSR Open Systems Resources, Inc.
http://www.osronline.com

“m” wrote in message news:xxxxx@ntdev…

While I agree that ASSERT is at best a shortcut for proper error handling
code and at worst something quite deadly, the statement

‘The only possible way a program is allowed to exit is if the user
requests
it to terminate; no mechanism that terminates execution is permissible.’

I can’t quite agree with. Certainly, this is quite correct when building a
single-threaded DOS application, but many systems and programming paradigms
have the concept of panic or master alarm, and in some cases ABEND is the
only sane course of action. We see posts from many who want to continue
after memory corruption, or unhandled KM exceptions, and we try our best to
dissuade them because their task is impossible. Sometimes, the same is true
in a UM app too and there is no possible way to continue safely. Failure of
RevertToSelf or HeapUnlock is one example, and another would be corrupted
state in some multi-threaded designs. You could argue that designs and APIs
that can result in unrecoverable failures ought not be used, and certainly
to minimize the number of unrecoverable failure paths in a design is a
worthwhile objective, but some activities simply require designs that can
have unrecoverable failures.

wrote in message news:xxxxx@ntdev…

I just checked the documentation. In kernel mode, if a debugger is
attached, a breakpoint is taken, but there is no suggestion that any
exception is taken.

I consider calls like this which terminate execution to be designed by
irresponsible children. ASSERT is used only during development and is
never part of a deliverable product. And the presumption that my program
can terminate execution safely at random points in time has no foundation;
in a well-designed world, control returns to me and I either continue and
get a termination or continue with recovery. If I code ‘continue with
recovery’, an ASSERT macro that terminates execution is a complete
disaster.

There are few things more amateurish than seeing code of the form

ASSERT(p != NULL);
int n = *p;

I always write

ASSERT(p != NULL);
if(p == NULL)
    recover

“recover” might be
    return FALSE;
    throw new CInternalError(__FILE__, __LINE__);

as typical examples. The exception is caught, the error is logged, the
transaction is aborted, modified state is rendered consistent (rollback)
and the program continues to run. The only possible way a program is
allowed to exit is if the user requests it to terminate; no mechanism that
terminates execution is permissible.

I remember the amateurish code of cdb, the Berkeley Unix debugger. At the
slightest error, it did exit(1). So I’m an hour into the debug session,
I’ve finally seen the conditions that trigger the bug, I’m trying to
figure out how the values got that way, I ask for a stack backtrace.
Boom! I’m looking at a nearly blank screen which is showing the shell
prompt. The debugger exited, terminating my debug session!

We had our local expert work on fixing this. He fixed over 200 bugs that
led to these conditions, and turned the exit() calls into longjmps (OK,
this was C in 1982) so the debugger would not exit. The next day we
received a new cdb distribution tape (FTP? Tapes were faster!) which
claimed to have fixed over 200 bugs. So our programmer, in great dismay
that he’d wasted a week, diffed the sources. He announced the next day
that the overlap (intersection) of the bug fixes was [drum roll] 3! And
they still exited the debugger if anything seemed wrong.

I remember arguing with one programmer about putting a BugCheckEx call in
a driver. He thought that if the user app sent down a bad IOCTL code,
this was a valid response. So I asked him, “Suppose you have a guest in
your house. He discovers that there is an insufficiency of toilet paper.
What do YOU think the correct recovery should be: (a) look for a new roll
on the back of the toilet (b) burn down your house?” I pointed out to him
that his driver was a guest in the OS, and since, in fact, it was not a
file system driver, any errors represented (a) hardware failures, in which
case he should recover gracefully (b) driver coding errors, in which case
he should recover gracefully or (c) user errors, in which case he should
recover gracefully. The only time you are permitted to burn the house
down is if you find your host is a mad scientist who will, very shortly,
release a highly-contagious variant of pneumonic plague on the world, that
he created in his basement lab. Or your file system driver detects some
impossible state which can only mean that further attempts to use it would
cause even more damage. But crashing the system because the app sent a
bad IOCTL to a DAC device was not an appropriate response.

He appeared to be unconvinced.
joe

> xxxxx@flounder.com wrote:
>> Asserts do not stop execution, even if they fail; they only print out
>> messages. And that only in debug builds.
>
> I would have argued the reverse. Asserts (in both user and kernel code)
> trigger a debug breakpoint. If a debugger is attached, execution is
> stopped because the debugger fires up. If a debugger is not attached,
> execution is stopped by an uncaught exception (in user mode) or a
> bugcheck (in kernel mode).
>
> –
> Tim Roberts, xxxxx@probo.com
> Providenza & Boekelheide, Inc.



The “blinders” approach is unacceptable. This is the belief that because
YOUR code is confused, ALL code is confused, and therefore it is
legitimate to exit the application. I have found this to be, without
exception, a colossal blunder. So the database index is screwed up. Stop
operations on that database. But the realtime data acquisition thread is
still happy, and if you exit the app, you lose live data. So it makes
ZERO sense to terminate the application. Even if attempts to use the
database could corrupt the actual database, then stop using the database.
But the realtime data collection thread, which is NOT using the database,
is supercritical, and if someone writing the database library thinks that
exiting is correct, they are so completely out of touch with reality that
they might as well be on drugs.

And talking about the difference between external APIs and internal
functions is meaningless. If my API calls a complex set of internal
functions, any failure, for whatever reason, must reflect out to the API.
Even if the error code is “fatal data consistency error”. But exiting is
NOT an option. I’ve fought these amateur libraries for years, and I have
yet to find a place where simply reflecting the error condition back to
the caller would have been the wrong decision.

So, if I were working in C, I’d implement FailedAssertion to include
RaiseException, or in C++ I’d use a throw. Note that it becomes the
responsibility of intermediate layers to guarantee that in the presence of
exceptions that their invariants are preserved, making it hard to
mix-and-match C and C++ universes.

But having spent years fighting these poor decisions, and an entire year
implementing an exception-based completely bulletproof software component,
I simply cannot imagine any situation in which termination makes sense for
an app. OS inconsistencies which are indicative of OS data structure
corruption is a different situation, but I’d rather have “file system is
wonky” come back to my app as “unrecoverable internal error”, because the
file system in which I am logging realtime data might not be the same file
system, or device, on which the user is doing something else, such as data
mining of previously-existing data.
joe


> So, if I were working in C, I’d implement FailedAssertion to include
> RaiseException, or in C++ I’d use a throw. Note that it becomes the
> responsibility of intermediate layers to guarantee that in the presence of
> exceptions that their invariants are preserved, making it hard to
> mix-and-match C and C++ universes.

The solution is trivial - do not use C++ throw (unless you have full RAII) and especially do not use RaiseException.

MS’s C SEH is just another way of nesting “goto” operators. Nothing else. It does not improve anything, just replaces one ugly coding style with another :)

With error code returns, you have the danger that, if you overlook the error return, the code will continue execution with invariants violated. A crash.

With SEH, you have the danger that, if you overlook the handling code, the code will continue execution after the throw with invariants violated (since no proper cleanup is done). A crash.

> I simply cannot imagine any situation in which termination makes sense for
> an app.

A command line batch-style app of the “do the operation and exit” kind.


Maxim S. Shatskih
Windows DDK MVP
xxxxx@storagecraft.com
http://www.storagecraft.com

On 08-Aug-2012 00:42, Danilo Almeida wrote:

> Then you can do stuff like:
>
> if (!ASSERT(Expression))
> {
> // We use the ASSERT macro here because this invariant violation
> // is not too bad (will not corrupt data) and we can return
> // a reasonable error from this function (due to how this
> // function interface is specified). In a retail build w/o
> // ASSERT logging, the ASSERT macro is a no-op and this code
> // block is never executed. The code block can only be executed
> // in debug/checked builds or in retail builds with ASSERT
> // logging enabled.
> …
> }

Please be careful with this. Once I was almost fired on the spot for
showing an assert macro like this at a DR.
My opponent said that “assert” has well-known semantics: it should not
return any value or have side effects.
In short, this is another religious thing best avoided.
Clever debug macros like these should have clear names containing
“DEBUG” or “DBG” etc.
In several places you can see a macro named VERIFY
that stays in the retail build and also can be used with if’s:

if ( !VERIFY(cond) ) {
    recover;
    return false;
}
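A possible definition, assuming a hypothetical LogVerifyFailed helper that
logs the failure and returns 0 (note this differs from MFC's VERIFY, which
evaluates but discards the result in release builds):

#define VERIFY(cond) \
    ((cond) ? 1 : LogVerifyFailed(#cond, __FILE__, __LINE__))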

Regards,
- pa

On 07-Aug-2012 11:43, Maxim S. Shatskih wrote:

> The use of IN, OUT, OPTIONAL keywords is also a good idea.

Really? MS does not seem to use these macros as wrappers for real SAL
annotations understood by the analyzer.
Unfortunately, SAL annotations are very ugly, to the degree of rendering
code unreadable.
But the shortest ones may be fine, such as __in, __out, __out_opt.

The OPTIONAL macro seems to be for optional C++ parameters (please correct
me if not) like this:

func(IN int p1, OUT char *p2, OUT char *p3 OPTIONAL)

where OPTIONAL *could be* defined as “=NULL” in C++ or as nothing for plain
C - but it is actually always defined as nothing.
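For reference, the classic Windows/DDK headers define these keywords as
empty macros, approximately:

// approximately as in the classic headers; they expand to nothing and
// exist purely as documentation for the human reader
#define IN
#define OUT
#define OPTIONAL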

Regards,
– pa

On 08-Aug-2012 10:07, Maxim S. Shatskih wrote:

> MS’s C SEH is just another way of nesting “goto” operators. Nothing else. It does not improve anything, just replaces one ugly coding style with another :)

It is understandable that standard C++ libraries cannot rely on a
proprietary MS technology on non-Windows platforms.
Thus native C++ exceptions must use a technique independent of SEH, even
with the Microsoft compilers that support it (even on Windows).
Modern MS C++ compilers even forbid mixing SEH and C++ exceptions in one
function.

So, does anyone know a good research, article, blog post that explains
in sufficient depth how code written for C++ exceptions can be combined
with code that uses SEH, and what are recommended usage patterns?

– pa

In my experience, the recommendation would be: don’t mix exception schemes in
a single module. That way, each module looks after itself and the potential
for bad interactions is minimized. MSDN has a fair description of the
exception handling changes for x64, which implies the kind of information
that you want, but doesn’t spell it out. I don’t know of anywhere else
specifically.


I am not sure which post this is exactly in response to, but I continue to
object to your use of absolute terms here.

Did you read my example? It is not a situation where a specific function,
library, or module has failed because of a data inconsistency, but a
situation where the operating environment of the whole program has been
corrupted. Depending on how bad the damage is, it may not be possible to
detect this at all, but if it is detected, what should a responsible
programmer do? Surely the answer isn’t “let’s ignore the problem and
continue execution so we can corrupt yet more stuff”. The environment may be
so corrupted that even trying to terminate may fail and corrupt stuff (i.e.
execute garbage instead of the expected terminate code), but there isn’t
much that can be done about that except hope that the next fault will be
detected by a lower level that can terminate.

I am not suggesting that this should be the normal method of handling all
errors, indeed far from it, but I am challenging your absolute statement
that there is never a situation where termination is correct. I have given a
few examples of situations in Windows software where I believe that
termination is the correct response, and unless you can provide specific
alternatives for those situations, or explain how they are impossible in
correctly designed applications, I can’t see that an absolute prohibition is
reasonable. Again, some examples from UM programming on Windows:

  • Failure of RevertToSelf
  • Failure of HeapUnlock on the process heap
  • Failure of HeapFree *
  • Stack overflow exception in exception handler

The only causes for RevertToSelf failure are memory corruption or mismatched
calls. In either case the thread may be executing as a security principal
other than the main one for the process, so any further execution, even
raising an exception or returning failure, is a security risk.

The only causes for HeapUnlock failure are memory corruption or mismatched
calls. The default process heap is used by many components in Windows
applications. Even when one’s code takes special care to avoid using the
default process heap, threads injected into the process by Windows or other
libraries still use that heap. If the application performs some memory
analysis, for the purpose of updating a performance counter let’s say, that
requires locking the process heaps, and the call to HeapUnlock fails, then
the only courses for execution to take are deadlock or arbitrary memory
corruption, because either the lock itself is hosed or the data it guards is.

HeapFree has an asterisk because this one can fail for more reasons, and
often it doesn’t fail but internally calls LogHeapFailure ->
RtlReportCriticalFailure before any failure code would be executed.

Stack overflow exceptions are hard to handle under the best of
circumstances. It is not too bad if one can assume that the exception will
only be raised from a function prologue, but _alloca, assembly functions and
optimized code can result in stack expansion at arbitrary points in the
instruction stream, and writing an effective handler is only made possible
by the API SetThreadStackGuarantee (see the sketch after this paragraph).
But if the size is miscalculated and a stack overflow exception is raised
while handling an exception, then the thread is in a non-continuable state
and it is impossible to execute any unwind handlers or other failure code.
And because the thread was executing arbitrary code before the initial
exception and within the exception handler, it is impossible to determine if
it is safe to simply spin or kill this thread, because there is no
information on what locks may be held or what state any data structures may
be in.

Again, the purpose of pointing out these abstruse cases is to make you
modify your statement from an absolute prohibition into the qualified ‘don’t
do that because it is a really bad idea except in rare and obscure cases’.
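For what it’s worth, a minimal sketch of that guarantee setup;
SetThreadStackGuarantee and _resetstkoflw are the real Win32/CRT calls, but
the sizes and the recovery policy here are illustrative assumptions only:

#include <windows.h>
#include <malloc.h>   // _resetstkoflw

static int StackOverflowFilter(DWORD code)
{
    return (code == EXCEPTION_STACK_OVERFLOW)
        ? EXCEPTION_EXECUTE_HANDLER : EXCEPTION_CONTINUE_SEARCH;
}

void RunWithStackGuarantee(void (*worker)(void))
{
    ULONG guarantee = 64 * 1024; // illustrative: reserve 64 KB beyond the guard page
    SetThreadStackGuarantee(&guarantee);

    __try {
        worker();
    }
    __except (StackOverflowFilter(GetExceptionCode())) {
        // The guarantee gives this handler room to run; the guard page must
        // be re-armed before the thread can survive another overflow.
        _resetstkoflw();
    }
}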


> • Failure of RevertToSelf
> • Failure of HeapUnlock on the process heap
> • Failure of HeapFree *
> • Stack overflow exception in exception handler

This is another song.

You’re speaking about handling particular kinds of errors.

The initial talk was about the ways of error handling, of which one suggestion was:

  • declare lots of variables from the beginning
  • NULL them all
  • in the very end, create an __except() block like this:
    if( var1 != NULL )
    Destroy(var1);

Why this way is better than return code checks - I don’t know. The chances of forgetting something are the same actually. (A sketch of the pattern follows.)
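As I read that suggestion, the shape is roughly the following sketch
(rendered with __try/__finally, since the cleanup has to run on the failure
path; Allocate, UseBuffers and Destroy are placeholders):

void DoWork(void)
{
    PVOID var1 = NULL;  // declare everything up front...
    PVOID var2 = NULL;  // ...and NULL it all

    __try {
        var1 = Allocate();          // placeholder allocator
        if (var1 == NULL) __leave;
        var2 = Allocate();
        if (var2 == NULL) __leave;
        UseBuffers(var1, var2);     // placeholder work
    }
    __finally {
        // the single cleanup block at the very end
        if (var2 != NULL) Destroy(var2);
        if (var1 != NULL) Destroy(var1);
    }
}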

Also, I cannot understand what the need is to have MS’s C SEH combined with the much better (but still inadequate) C++ SEH.

Also note that, in Windows, C++ is usually used:

a) for COM/OA/DCOM
b) to employ some libraries/frameworks like MFC.

Neither COM nor MFC relies on C++ exceptions for error handling (COM relies on HRESULT error code returns), so using C++ exceptions in such code will introduce a paradigm mix and thus make the code less readable.

As for checking HeapFree for errors - it seems much better to use free(), which cannot fail :) If you need to check for leaks/invalid accesses (like the Verifier does), then the usual way is to write wrappers around VirtualAlloc/Free/Protect which will do the thing (see the sketch below).
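The wrapper idea is roughly page-heap style: give each allocation its own
pages with a PAGE_NOACCESS page after it, so an overrun faults immediately.
A minimal sketch under that assumption (GuardedAlloc is a hypothetical name;
alignment is ignored for brevity):

#include <windows.h>

void *GuardedAlloc(SIZE_T cb)
{
    SIZE_T pageSize = 4096;
    SIZE_T pages = (cb + pageSize - 1) / pageSize + 1; // +1 for the guard page
    BYTE *base = (BYTE *)VirtualAlloc(NULL, pages * pageSize,
                                      MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (base == NULL)
        return NULL;

    DWORD oldProtect;
    // Make the last page inaccessible so a buffer overrun traps at once.
    VirtualProtect(base + (pages - 1) * pageSize, pageSize,
                   PAGE_NOACCESS, &oldProtect);

    // Place the block flush against the guard page (ignores alignment).
    return base + (pages - 1) * pageSize - cb;
}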

Destruction calls cannot fail, this is a law. If they can - then they are misdesigned.


Maxim S. Shatskih
Windows DDK MVP
xxxxx@storagecraft.com
http://www.storagecraft.com

It is far easier to adopt the absolute, and then discover where it doesn’t
apply, than to adopt the “It’s OK, sometimes” and leave it up to the
programmer to determine when the “sometimes” is valid, because under that
scenario, “sometimes” is “all the time”. Instead, if recovery is the only
acceptable solution, then you can find the rare exceptions more readily.

By the way, I was able to recover from heap corruption in my app. I did
the heap code, so the simple solution was to reset the heap to “fully
available” and then reconstruct the runtime structures from first
principles. This involved having a way to save the state so recovery was
possible. We never did find the source of the heap corruption, but the
belief was that we were having a double-bit-error in some early dynamic
memory which had only parity checking. So to me, I did not consider “heap
corruption” as a valid shutdown criterion.

It is essential to adopt the most robust possible solution, not the
easiest solution. And every case I’ve seen of exit(1) and all its guises
was always “it’s too hard to do recovery” when, in fact, it wasn’t all
that hard to get it right. I know this, because I fixed it in a small
number of hours’ effort.

So yes, I think in absolutes; I want the programmers to THINK before they
cause the program to exit, and realize that robustness is an important
parameter in many pieces of software, such as device drivers, realtime
data acquisition systems, editing tools, and the like.

The outcome was that I got really robust software from my team, because
after the first few times I proved to them that recovery was possible, and
wasn’t all that hard to get right (and we didn’t even have C++
exceptions!) then they really tried to make the software work. And in my
consulting, I regularly fixed robustness problems simply by removing all
exit() calls (and their various aliases) and putting proper recovery code
in.

So you have picked the extreme cases, where exiting just might be
justifiable, but the critical thing to remember is that termination is the
ABSOLUTE LAST RESORT, and everything possible must be done to avoid it.

I’m certainly not willing to accept heap damage in an app as being
sufficient criterion for termination; the first thing I’d do in an app
that had “mission critical” paths is to make sure that the heap would not
be a problem. And yes, this means a lot of std:: won’t work, and that’s
why it requires actual WORK to make the code robust. If you’ve ever lost
three hours’ work because the app or system crashes, you have a metric of
the pain that has to be dealt with (and yes, if you have termination, it
is critical that it not lose data, and techniques such as checkpointing
become important). And the users aren’t going to ask “how much would it
cost to improve the robustness of this app”; they are going to say “Fix
the damned thing or I look at the competition”. Or say “Forcing me to pay
for an upgrade so I get bug fixes for all the bugs you left in the first
version is extortion” and your product’s reputation suffers.
joe


The problem of mix-and-match of exception handlers becomes important for
library writers; for example, should I do RaiseException or throw in my
library? Well, since I’m a C++ user and don’t care about C very much, I’d
do a throw, but as a general library writer, trying to support both
languages, I’d be in trouble.

Now, suppose I simply give a pointer to an error handler routine, which is
called with a few parameters to identify the error (think KeBugCheckEx).
Then, C programmers would write code that did RaiseException, and C++
programmers would do throw. This works only if the library is stateless
in the sense that it does not need to roll back a partially-computed
state. There, I need to have either __try/__except or try/catch, and now
I’ve lost portability again.

By the way, what is “inadequate” about C++ exception handling? Or what is
your list? Every solution I’ve seen to “fix” the “inadequacies” of some
exception handler has resulted in a design so complex and baroque that it
doesn’t work at all. Or cannot be made to work reliably. Doing
exceptions right is hard; Java is one of the few languages that actually
got it right. Or at least righter than C++.

Note: COM interfaces in C++ throw exceptions, which results in some really
messy code to handle it. I discovered this when I built the interface to
PowerPoint. And the fact that an exception is thrown is not documented!
You can see what I had to do if you download the source for my PowerPoint
Indexing program from my Web site. Part of the challenge is to produce an
error message that correlates to a source line that did the call that
threw the exception, because this tells how much unwinding of partial
computations may need to be done (which is in MY list of problems with C++
exceptions).

Actually, free() CAN fail, but it does so catastrophically, making the
handling of such an error next to impossible in C.

It is not clear why destruction calls should not be allowed to fail. Any
robust allocator is going to be checking for heap integrity on release
(I’ve built several allocators, and the assertion issues that identify the
integrity of the heap are important. How can you free something that is
not a valid address?)
joe

> - Failure of RevertToSelf
> - Failure of HeapUnlock on the process heap
> - Failure of HeapFree *
> - Stack overflow exception in exception handler

This is another song.

You’re speaking about handling the particular kinds of errors.

The initial talk was about the ways of error handling, from which one of
the suggested was:

  • declare lots of variables from the beginning
  • NULL them all
  • in the very end, create an __except() block of such:
    if( var1 != NULL )
    Destroy(var1);

Why this way is better then return code checks - I don’t know. The chances
of forgetting something are the same actually.

Also, I cannot understand what is the need to have MS’s C SEH combined
with much better (but still inadequate) C++ SEH.

Also note that, in Windows, C++ is usually used for:

a) COM/OA/DCOM
b) to employ some libraries/frameworks like MFC.

Neither COM nor MFC rely on C++ exceptions on error handling (COM relies
on HRESULT error code returns), so, using C++ SEH in such a code will
introduce a paradigm mix and thus make the code lesser readable.

As about checking HeapFree for errors - it seems to be much better to use
free() which cannot fail :slight_smile: If you need to check for leaks/invalid
accesses (like the Verifier does) - then the usual way is to write
wrappers around VirtualAlloc/Free/Protect which will do the thing.

Destruction calls cannot fail, this is a law. If they can - then they are
misdesigned.


Maxim S. Shatskih
Windows DDK MVP
xxxxx@storagecraft.com
http://www.storagecraft.com


NTDEV is sponsored by OSR

For our schedule of WDF, WDM, debugging and other seminars visit:
http://www.osr.com/seminars

To unsubscribe, visit the List Server section of OSR Online at
http://www.osronline.com/page.cfm?name=ListServer

> library writers; for example, should I do RaiseException or throw in my
> library?

No libraries in unmanaged languages (except those like STL or ATL which are a set of sources rebuilt each time) should ever raise C++ exceptions.

Reason: C++ exception implementation depends not only on the compiler’s brand, but on its version number and command-line parameters.

Actually, the good taste of programming in Windows is to only use a) C exports b) COM as the library APIs. Both are well-defined and language-independent.

> do a throw, but as a general library writer, trying to support both
> languages, I’d be in trouble.

No troubles at all. Support C only; it will be automatically C++-compatible.

> exceptions right is hard; Java is one of the few languages that actually
> got it right. Or at least righter than C++.

Correct.

Some additions:

  1. C# is also such, not only Java

  2. C++ exceptions are just plain inadequate without RAII. And, usually, wrapping everything in RAII (including stuff like tmp memory buffers for IOCTLs and such, which are major fun for RAII) is considered to be a much larger effort than just fixing bugs in the code which does “return NtStatus;” or “return hr;”.

  3. MS’s C exceptions are a toy. SEH frames in a language which has no destructors are just punk-style fun. With C SEH, the amount of error recovery code to be typed is the same as with “goto HandleError1; goto HandleError2;” code (compare the sketch below with the earlier goto ladder).
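For comparison, here is a __try/__finally rendering of the earlier
InitSomeStruct goto ladder; as the sketch suggests, the amount of cleanup
typing is indeed comparable:

bool InitSomeStruct(SOME_STRUCT* pSS)
{
    bool ok = false;
    pSS->pBuffer1 = NULL;
    pSS->pBuffer2 = NULL;

    __try {
        pSS->pBuffer1 = AllocateSomeBuffer();
        if (pSS->pBuffer1 == NULL) __leave;
        pSS->pBuffer2 = AllocateSomeBuffer();
        if (pSS->pBuffer2 == NULL) __leave;
        ok = true;
    }
    __finally {
        if (!ok) {   // on failure, unwind whatever was built
            if (pSS->pBuffer2 != NULL) FreeSomeBuffer(pSS->pBuffer2);
            if (pSS->pBuffer1 != NULL) FreeSomeBuffer(pSS->pBuffer1);
        }
    }
    return ok;
}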

Actually, it looks like MS’s SEH was developed with only one real practical purpose in mind: to allow touching ->UserBuffer in drivers without making a copy of it.

Yes, you need SEH-style stuff for METHOD_BUFFERED too, but it can be hidden in the kernel and not published to the outside world to avoid confusion :) This is what Linux does with __copy_to_user() (Linux does not support METHOD_NEITHER, so it is safe).

Also, one other drawback of SEH of any kind: in the usual style, if I see a call without the if( !NT_SUCCESS(…) ) wrapper, then I know that this call has a no-fail guarantee.

With SEH, it is not so. The “throws” spec helps only a bit: first, its implementation in C++ is pathetic. Second, you need to look at the function declaration to see whether it has a no-fail guarantee.

The presence of try/catch and try/except operators just plain does not reduce your effort of proper error handling.

> Note: COM interfaces in C++ throw exceptions

No COM interface can throw C++ exceptions. To begin with, COM’s remoting/marshalling layer cannot support them.

If the COM component throws across the interface boundary, then it is broken.

There are ISupportErrorInfo/DISP_E_EXCEPTION OA exceptions, but they are not C++ ones and require a translation layer (there are macros in ATL for this).

> Actually, free() CAN fail, but it does so catastrophically, making the
> handling of such an error next to impossible in C.

free() cannot fail. It just has no failure modes (a C function, returns “void”, has no callbacks) except crashing everything, with no possible recovery.

Also note: recovering from heap corruption is useless. The proper thing is to fix the bugs which lead to the heap corruption.

> It is not clear why destruction calls should not be allowed to fail.

How will you recover from a destructor failure? For instance, in C++ SEH, if a RAII destructor itself raises during unwinding, then std::terminate() is called. The RAII/SEH machinery just cannot continue from such a failure.

And this is correct. Destructors must not fail; this is the obvious law.
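
This is why a well-behaved RAII destructor reports or swallows errors
rather than throwing. A minimal sketch (the class is made up):

#include <cstdio>

class File {
    std::FILE* f_;
public:
    explicit File(const char* path) : f_(std::fopen(path, "wb")) {}
    File(const File&) = delete;
    File& operator=(const File&) = delete;

    // Callers who care about close/flush failures call this explicitly
    // *before* destruction and check the result.
    bool close() noexcept {
        if (f_ == nullptr) return true;
        bool ok = (std::fclose(f_) == 0);
        f_ = nullptr;
        return ok;
    }

    ~File() noexcept {   // never throws: log and move on
        if (!close())
            std::fprintf(stderr, "warning: close failed, data may be lost\n");
    }
};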

> Any robust allocator is going to be checking for heap integrity on release

…if a special debug mode is turned on (like what Verifier does), then yes.

Production allocators should never do this, due to the very major perf hit.
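
For completeness, a minimal sketch of such a debug-only check (all names
hypothetical): a canary in front of each block is verified on free, and the
check compiles away in a release build:

#include <cstddef>
#include <cstdlib>

#define GUARD_MAGIC 0xFDFDFDFDu

struct BlockHeader {
    size_t       size;
    unsigned int guard;     // canary checked on free
};

void* DbgAlloc(size_t size)
{
    BlockHeader* h = (BlockHeader*)std::malloc(sizeof(BlockHeader) + size);
    if (h == nullptr)
        return nullptr;
    h->size  = size;
    h->guard = GUARD_MAGIC;
    return h + 1;
}

void DbgFree(void* p)
{
    if (p == nullptr)
        return;
    BlockHeader* h = (BlockHeader*)p - 1;
#ifdef _DEBUG
    // A trashed canary means the heap is already corrupt; per the
    // discussion above, the only sane response is to stop the app.
    if (h->guard != GUARD_MAGIC)
        std::abort();
#endif
    std::free(h);
}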

> integrity of the heap are important. How can you free something that is
> not a valid address?)

Crash the whole app and let the developer fix this bug.


Maxim S. Shatskih
Windows DDK MVP
xxxxx@storagecraft.com
http://www.storagecraft.com

"On 09-Aug-2012 08:59, Maxim S. Shatskih wrote:

Why this way is better then return code checks - I don’t know. The chances of forgetting something are the same actually.

+1. But at least this is a visual “pattern” for a reviewer that helps
understand the intent. Or call it an anti-pattern :slight_smile:

> Also, I cannot understand what the need is to have MS’s C SEH combined with the much better (but still inadequate) C++ SEH.

IMHO this is simple: MS’s SEH was baked into Windows long ago,
when C++ wasn’t what it is now.
SEH was designed to support other languages, Delphi for one.
So they built it… but C++ did not come.
Why? Not because SEH was bad, but just because it is proprietary.
So today we have the (almost) brand-new C++11 and the good old SEH.
And then people ask why Visual Studio treats C++ as a 2nd-class citizen.
That’s why…

> Destruction calls cannot fail; this is a law. If they can, they are misdesigned.

“Destructor cannot fail” means that it is not expected to return a
result that the caller is obliged to check. But things can still go
wrong, and then you have a choice: ignore it, or reboot the matrix :wink:

– pa

There are all kinds of problems even with C SEH, such as responding to
hardware errors like access fault, divide by zero, etc. The major problem
is that SEH will not do proper stack cleanup like C++ exceptions, so

__try … C++ fn A -> C++ fn B -> C++ fn C; RaiseException; __except(…)

will not call destructors for local variables declared in A, B and C, but
will immediately transfer control to the matching __except. This means
that most common constructs in C++, such as std::vector (to name the
simplest) will not be cleaned up, and you will leak storage on every
RaiseException. In addition, if the destructors are involved in more
serious state maintenance, you now have problems with the integrity of the
entire app.
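
A minimal user-mode sketch of that leak (with the default /EHs or /EHsc,
where destructors are not guaranteed to run when a structured exception
unwinds the stack; /EHa changes this, at an optimization cost):

#include <windows.h>
#include <cstdio>
#include <vector>

static void Inner()
{
    std::vector<int> v(1000);               // destructor expected on unwind
    RaiseException(0xE0000001, 0, 0, NULL);
}   // under /EHs, ~vector may never run here

int main()
{
    // main() itself holds no objects with destructors, so __try is legal.
    __try {
        Inner();
    }
    __except (EXCEPTION_EXECUTE_HANDLER) {
        std::puts("caught SE; the vector's storage was likely leaked");
    }
    return 0;
}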

In the past, this was avoided in the kernel because there was no C++ code.
The fact that exceptions are (at least the last I heard) not supported in
kernel C++ code is likely a consequence of this mix-and-match problem.

Code optimization is another nightmare, and the notion of synchronous vs.
asynchronous exception handling comes into play here. Doug Harrison wrote
a detailed article on C/C++ exception handling, and I’d suggest googling
for it. Doug was for many years (and may still be, for all I know) a C++
MVP.
joe

In my experience, the recommendation would be: don’t mix exception schemes
in a single module. That way, each module looks after itself and the
potential for bad interactions is minimized. MSDN has a fair description
of the exception handling changes for x64, which implies the kind of
information that you want, but doesn’t spell it out. I don’t know of
anywhere else, specifically.

“Pavel A” wrote in message news:xxxxx@ntdev…

On 08-Aug-2012 10:07, Maxim S. Shatskih wrote:

>> MS’s C SEH is just another way of nesting “goto” operators. Nothing
>> else.
>> They do not improve anything, just replacing one ugly coding style with
>> another :slight_smile:
>
> It is understandable that standard C++ libraries cannot rely on a
> proprietary MS technology on non-Windows platforms.
> Thus native C++ exceptions must use a technique independent of SEH and
> the Microsoft compilers that support it (even on Windows).
> Modern MS C++ compilers even forbid mixing SEH and C++ exceptions in one
> function.
>
> So, does anyone know a good research, article, blog post that explains
> in sufficient depth how code written for C++ exceptions can be combined
> with code that uses SEH, and what are recommended usage patterns?
>
> – pa

SEH and code optimization problems have been known for a long time. Actually,
there is a classic paper on code optimization in the face of exceptions
which points out a lot of problems. It was presented in the ’70s at one of
the first Principles of Programming Languages conferences. Of course,
Microsoft C and C++ ignore the paper, so if you ever use SEH, do not rely,
in the exception handler, on the values of variables that could be touched
in the protected code, or vice versa.
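
Concretely, a variable that is written in the protected block and read in
the handler should be declared volatile, as in this minimal sketch:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    volatile int progress = 0;   // volatile: the handler reads it
    __try {
        progress = 1;
        *(volatile int*)0 = 42;  // access violation
        progress = 2;            // never reached
    }
    __except (EXCEPTION_EXECUTE_HANDLER) {
        // Without 'volatile', the optimizer could keep 'progress' in a
        // register whose value at the time of the fault is unpredictable.
        printf("faulted after step %d\n", progress);
    }
    return 0;
}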

Don Burn
Windows Filesystem and Driver Consulting
Website: http://www.windrvr.com
Blog: http://msmvps.com/blogs/WinDrvr

“xxxxx@flounder.com” wrote in message
news:xxxxx@ntdev:

> There are all kinds of problems even with C SEH, such as responding to
> hardware errors like access fault, divide by zero, etc. The major problem
> is that SEH will not do proper stack cleanup like C++ exceptions, so
> [...]

> IMHO this is simple: MS’s SEH was baked into Windows long ago,
> when C++ wasn’t what it is now.

C++ SEH was already there in MSVC 4, in 1995.


Maxim S. Shatskih
Windows DDK MVP
xxxxx@storagecraft.com
http://www.storagecraft.com

> __try … C++ fn A -> C++ fn B -> C++ fn C; RaiseException; __except(…)
>
> will not call destructors for local variables declared in A, B and C

_set_se_translator() solves this.
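
More precisely, _set_se_translator() lets you rethrow a structured
exception as a C++ exception, so normal C++ unwinding (and thus the
destructors) runs. A minimal sketch; note that it only takes effect in
modules compiled with /EHa:

#include <windows.h>
#include <eh.h>
#include <cstdio>

// Hypothetical exception type used to carry the SE code.
struct se_exception {
    unsigned int code;
};

// Invoked by the CRT for every structured exception; rethrows it as a
// C++ exception so that destructors run during unwinding.
void __cdecl se_translator(unsigned int code, EXCEPTION_POINTERS*)
{
    throw se_exception{ code };
}

int main()
{
    _set_se_translator(se_translator);   // effective only under /EHa
    try {
        volatile int* p = nullptr;
        *p = 42;                         // AV becomes se_exception
    }
    catch (const se_exception& e) {
        std::printf("SE 0x%08X translated to a C++ exception\n", e.code);
    }
    return 0;
}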


Maxim S. Shatskih
Windows DDK MVP
xxxxx@storagecraft.com
http://www.storagecraft.com

> which points out a lot of problems. It was presented in the ’70s at one of
> the first Principles of Programming Languages conferences.

Stroustrup/Ellis wrote that C++ SEH was designed based on some work from the 1980s.

So, is SEH as old as the ’70s?


Maxim S. Shatskih
Windows DDK MVP
xxxxx@storagecraft.com
http://www.storagecraft.com