The real usage of macros such as _In_, _Out_, _IRQL_requires_max_, etc. in driver development?

I have seen some very professional-looking open source Windows driver projects that use macros such as _In_ and _Out_, and IRQL macros such as _IRQL_requires_max_, etc.

My questions are:

  1. Do you guys also use the IRQL macros such as _IRQL_requires_max_ for all of your functions? If so, do you manually inspect every API call in your function, and in the functions that it calls, and so on, to find the maximum possible IRQL, or…?
  2. What tools can use these macros to analyze Windows driver source code and find bugs in it? Is being consumed by source code analysis tools the real purpose of these macros?
  3. Other than parameter direction macros (such as _In_) and IRQL macros (such as _IRQL_requires_max_), are there any other macros that help code analysis tools and also make the code more professional and easier to understand and debug?

Those are SAL, the “source-code annotation language”. I DO actually try to use IN and OUT, just because it’s good documentation. I have not become religious about the other annotations, although I sometimes feel guilty about that. Since the macros generally compile to nothing, yes, they are intended for consumption by source code analysis tools.

https://learn.microsoft.com/en-us/cpp/code-quality/understanding-sal?view=msvc-170
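
For anyone landing here fresh, a minimal sketch of what the basic annotations look like on a driver routine (the function name and the PDEVICE_CONTEXT type are invented purely for illustration):

// _In_/_Out_ document parameter direction; _Must_inspect_result_ flags
// callers that ignore the returned NTSTATUS.
_Must_inspect_result_
NTSTATUS
ReadDeviceRegister(
    _In_ PDEVICE_CONTEXT DeviceContext,   // read-only input (made-up context type)
    _In_ ULONG RegisterOffset,            // read-only input
    _Out_ PULONG Value                    // written by the routine on success
    );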

IMHO SAL is one of the best things a programmer can do to avoid errors. I spend most of my time reviewing code written by others, and I insist that my staff use the annotations. _In_, _Out_, _In_reads_, _Out_writes_to_ and their various variants are very useful in pinning down the relationships between parameters and the array/buffer bounding conditions. And _Check_return_ and _Success_ are important in clarifying function usage. One other side effect of rigorous use of SAL is that any function contract that is hard to annotate is probably a bad design.
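
To make the buffer/length relationship concrete, here is a sketch of the sort of contract those annotations express (all of the names are invented):

// Reads exactly InputCount bytes from Input and writes at most
// OutputCapacity bytes to Output, reporting the count actually written
// through BytesWritten. _Success_ tells the analyzer that the _Out_
// guarantees only hold when the routine returns a success status.
_Check_return_
_Success_(return >= 0)
NTSTATUS
CopyCounted(
    _In_reads_bytes_(InputCount) const UCHAR* Input,
    _In_ ULONG InputCount,
    _Out_writes_bytes_to_(OutputCapacity, *BytesWritten) UCHAR* Output,
    _In_ ULONG OutputCapacity,
    _Out_ PULONG BytesWritten
    );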

Unfortunately, SAL is not widely adopted, and Microsoft’s compiler modernization project significantly reduced the effectiveness of the checks it can perform. Even with reduced effectiveness in the automated tools, the annotations still represent important documentation that stays right in the code.

The locking, interlocked, and IRQL macros hold another level of promise, but the tools are so bad that their usefulness is severely undermined.
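
Even so, they read well as documentation. A sketch of how the IRQL annotations look on hypothetical routines (the names and the PDEVICE_CONTEXT type are made up):

// Touches paged memory (registry access), so it must be called at or
// below APC_LEVEL; the annotation documents that and lets the analyzer
// flag a DISPATCH_LEVEL caller.
_IRQL_requires_max_(APC_LEVEL)
NTSTATUS
ReadConfigFromRegistry(
    _In_ PUNICODE_STRING RegistryPath
    );

// DPC-side work: runs at exactly DISPATCH_LEVEL.
_IRQL_requires_(DISPATCH_LEVEL)
VOID
ProcessCompletedDescriptors(
    _Inout_ PDEVICE_CONTEXT DeviceContext
    );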


Classic blog post on this topic here: https://www.osr.com/blog/2015/02/23/sal-annotations-dont-hate-im-beautiful/

Peter

It should also be noted that most function parameters should be declared const.

To do what exactly, in C? It only makes debugging harder because of how storage is used.

I don’t know how it makes debugging harder, but if you have a function like

void Func1(_In_ SOME_STRUCT* pSomething)

it is legal to write something like

pSomething++;
or
pSomething = pSomethingElse;

Sometimes this makes sense. If pSomething is really an array, especially a string, there is lots of code that does this; that’s where the _In_reads_ annotations should come in. But when pSomething is really a pointer to a specific structure in memory, writing the function signature like this

void Func1(_In_ SOME_STRUCT* const pSomething)

generates a compile error if the code reassigns its value. You can still do it in the debugger of course (it is still a local variable on the stack), but it makes it harder to do accidentally.

With this definition, if Func1 needs to modify memory through pSomething it still can. If it does not, then the function can be written

void Func1(_In_ const SOME_STRUCT* const pSomething)

And then Func1 can’t modify memory through that pointer. Callers can still pass non-const pointers to Func1; the const qualifier is added implicitly.
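
In other words, something like this (assuming SOME_STRUCT is defined elsewhere; Member is just an invented field name for illustration):

void Func1(_In_ const SOME_STRUCT* const pSomething)
{
    // pSomething++;             // compile error: the pointer itself is const
    // pSomething->Member = 0;   // compile error: the pointee is const
    // Reading through pSomething is still allowed.
}

void Caller(void)
{
    SOME_STRUCT s = { 0 };
    Func1(&s);   // passing a non-const pointer is fine; const is added implicitly
}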

Many struct definitions include MACRO versions that wrap the const qualifiers. I prefer to write them directly, but that’s a matter of taste.

There is also an effect on compiler optimization. Const arguments allow various compiler optimizations, especially elimination of aliasing. Modern compilers are much better at finding most of these anyway, but it is worth mentioning.

Preventing assignment of pointed-to values is OK in my book. It is the variable itself being const that gives me pain in debugging: sometimes a local copy has to be used (you can’t ++ a const variable), the compiler optimizes the original away (if it is not used later in the code), but WinDbg still shows the original and not the local copy…

It makes a mess. IMO, drivers are not so large that a developer can’t know the entire code base (maybe not completely, but the flow at least), and so any const qualifier is, again IMO, a hindrance.

Dejan.

You obviously have a specific situation in mind that I don’t fully understand. Debugging optimized code is always a challenge.

My work is mainly in reviewing the work of others, and I find that forcing the use of SAL and const qualifiers dramatically reduces the bug / error rate in the code before it comes to me. It also makes it easier to integrate the work of several developers in a consistent way. I grant that the value of these things probably increases with the size and complexity of the software.

I find that forcing the use of SAL and const qualifiers dramatically reduces the bug / error rate in the code before it comes to me
Yes, absolutely makes the code more consistent and improves the quality.

My biggest complaint is that every time I’ve tried to annotate my own locking functions, it was absurdly complicated to get right. That is typically where the real value of static analysis lies; the other, lower-hanging fruit can generally be picked with -W4 -WX and compiling in C++ mode.
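
For the record, the simple end of the locking annotations looks something like this (every type and function name below is invented; the real annotations on the spin lock APIs in wdm.h are considerably more involved):

typedef struct _QUEUE_CONTEXT {
    KSPIN_LOCK Lock;
    _Guarded_by_(Lock) LIST_ENTRY PendingList;   // only touch while Lock is held
} QUEUE_CONTEXT, *PQUEUE_CONTEXT;

// Must be called with the lock already held; the analyzer can (in theory)
// flag callers that forget to take it.
_Requires_lock_held_(Context->Lock)
VOID
RemoveFirstPending(
    _Inout_ PQUEUE_CONTEXT Context
    );

// Wrappers that change the lock state; _Acquires_lock_ and _Releases_lock_
// describe that state change to the analyzer.
_Acquires_lock_(Context->Lock)
_IRQL_raises_(DISPATCH_LEVEL)
VOID
LockQueue(
    _Inout_ PQUEUE_CONTEXT Context,
    _Out_ PKIRQL OldIrql
    );

_Releases_lock_(Context->Lock)
VOID
UnlockQueue(
    _Inout_ PQUEUE_CONTEXT Context,
    _In_ KIRQL OldIrql
    );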