Checking the user-mode app's digital signature for secure communication with the driver?

@brad_H

In this scenario, if I am checking the digital signature of the process that is sending me the IOCTL, you CANNOT bypass it, literally.

This is incorrect. First of all, an admin can install any driver and circumvent any protection you will come up with, and this is pretty easy when there are many third party signed drivers that simply provide an IOCTL interface to read/write memory in kernel mode - you don’t even have to sign your own driver. This allows the attacker to install any driver and “remove” your ObRegisterCallbacks protection for example or do whatever he wants.

Moreover, you will soon realize that the suggested “ObRegisterCallbacks protection” is never 100% protection, since there are legitimate Windows services that need to open a handle to your process for all sorts of purposes, so you will have to allow them to open a handle if you don’t want to cause serious system instability - meaning that if someone injects into these processes (such as a critical svchost) he will be able to inject into your “signed” IOCTL caller from there.

What you are doing here is security by obscurity, which probably isn’t worth it (depending on who you ask and on your specific requirements; different people have different opinions about this subject). You need to understand that if someone can write code to invoke an IOCTL in your driver, he will probably also be able to inject code into your “signed” process and run it from there if he wants - it’s not hard.

For example, just to show you these things are pretty easy and well known, look at this project here: https://github.com/notscimmy/libelevate

Just to summarize: If I were you, I would use an ACL to protect the device, and that’s it.
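For reference, a minimal sketch of that approach (the device name, class GUID, and SDDL choice below are placeholders, not something from this thread): create the control device with the documented WdmlibIoCreateDeviceSecure routine and an SDDL string, and the object manager will reject unauthorized opens before your dispatch code ever runs.

#include <ntddk.h>
#include <wdmsec.h>   // WdmlibIoCreateDeviceSecure and the SDDL_DEVOBJ_* strings; link against wdmsec.lib

// Placeholder class GUID - generate your own. It exists so an administrator can
// override the default security for this device class via the registry.
static const GUID EXAMPLE_DEVICE_CLASS_GUID =
    { 0x6fab0c3a, 0x0a11, 0x4c2d, { 0x9e, 0x10, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77 } };

static NTSTATUS CreateSecureControlDevice(PDRIVER_OBJECT DriverObject, PDEVICE_OBJECT *DeviceObject)
{
    UNICODE_STRING deviceName = RTL_CONSTANT_STRING(L"\\Device\\ExampleDrv");   // placeholder name

    // SDDL_DEVOBJ_SYS_ALL_ADM_ALL: full access for SYSTEM and Administrators,
    // nothing for anyone else. The check happens at handle-open time.
    return WdmlibIoCreateDeviceSecure(DriverObject,
                                      0,                        // no device extension
                                      &deviceName,
                                      FILE_DEVICE_UNKNOWN,
                                      FILE_DEVICE_SECURE_OPEN,  // apply the ACL even to opens with a trailing name
                                      FALSE,                    // not exclusive
                                      &SDDL_DEVOBJ_SYS_ALL_ADM_ALL,
                                      &EXAMPLE_DEVICE_CLASS_GUID,
                                      DeviceObject);
}

FILE_DEVICE_SECURE_OPEN matters here; without it, opens that append a name after the device path can sidestep the device object’s security descriptor.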

Also, injection can be done without opening a handle to your process - I guess you have dependencies on DLLs in your “signed app”, right? Remember, an admin attacker can simply replace any DLL on disk with his own DLL…

@0xrepnz said:
@brad_H

In this scenario, if I am checking the digital signature of the process that is sending me the IOCTL, you CANNOT bypass it, literally.

This is incorrect. First of all, an admin can install any driver and circumvent any protection you will come up with, and this is pretty easy when there are many third party signed drivers that simply provide an IOCTL interface to read/write memory in kernel mode - you don’t even have to sign your own driver. This allows the attacker to install any driver and “remove” your ObRegisterCallbacks protection for example or do whatever he wants.

Although I agree that if a user is already an admin he can always bypass protections at the end of the day, you have to realize my goal here is not to secure a system from malicious users, but only to do my job as a driver developer of securing my communications. Just don’t want my driver to end up as one of those poorly written drivers that gets used by rootkits as a way to bypass driver signature enforcement. If I make it extremely difficult for them to use my driver for malicious activities, then they will basically switch to another driver.

my goal here is not to secure a system from malicious users, but only to do my job as a driver developer

Then your efforts are being badly misdirected.

Put an ACL on your device object, as has been suggested here multiple times, and you’re done. That’s the standard best practice. Anything beyond that is just window dressing, unless you want to go with a full-on threat analysis and a set of countermeasures focused on your identified threats.

Peter

Remember too that an admin can arbitrarily affect the way that signatures get checked and which ones are acceptable. Exceptions exist, but administrators routinely take advantage of this to trust their internal private certificates in addition to those that the public CAs provide.

An ACL check is no more or less secure than a signature check, but they check different things. A signature check is like a more sophisticated CRC - excellent for detecting corruption or unauthorized modification, but almost irrelevant for security. ACLs don’t even attempt to detect corruption, but they do enforce security across security boundaries. The assumption has to be made that the code enforcing the security boundary has not been compromised. Think about it and you’ll understand why.

Just don’t want my driver to end up as one of those poorly written drivers that gets used by rootkits as a way to bypass driver signature enforcement

The main issue with “security” is that it’s almost always a trade-off with something else, such as performance, reliability, stability, usability, etc. There is no supported way to perform “WinVerifyTrust” from kernel mode, so implementing this will require a complex piece of code to develop, maintain, and support. I think that in this case, the quality of your driver is much more important than stopping some rootkit developer from abusing your driver. As you said, they will simply move on to the next driver they have, so what’s the point? I think it’s not worth the potential issues you will encounter in this complex piece of code. Moreover, as we already agreed, this is not really any kind of security boundary you’re trying to protect here; it’s just a trick you want to do in order to “warn off” rootkit developers…

@brad_H said:
Although I agree that if a user is already an admin he can always bypass protections at the end of the day, you have to realize my goal here is not to secure a system from malicious users, but only to do my job as a driver developer of securing my communications. Just don’t want my driver to end up as one of those poorly written drivers that gets used by rootkits as a way to bypass driver signature enforcement. If I make it extremely difficult for them to use my driver for malicious activities, then they will basically switch to another driver.

As long as you don’t add IOCTLs that allow reading/writing all of physical memory like some of these poorly written drivers, then you should be fine.

If you want to know what you should not do, you can check out the drivers in this repo https://github.com/hfiref0x/KDU

@brad_H said:
Well, as I said, in this scenario let’s assume that I can protect my files and processes from modification (since I’m already in kernel and we assume the attacker doesn’t have a driver; if he did, then it’s obviously game over).

You can protect files, but you can’t actually protect your running process nearly as well as you think you can.

That said, it appears that nothing anyone here says about that is going to change your mind, so I’m out.

I am really not going to comment on stuff here, because there would be too much to say.

The least I will say, @brad_H, is that you are very wrong about many things, especially ObCallbacks and virtualization like VMProtect.

To answer your question:

  • You can check for signed binaries in Kernel, Windows does it, and always has since Vista. Look at the ci.dll exported function CiCheckSignedFile; it will do everything you want.
  • You cannot “secure” IOCTLs with this; find another method of communication, like a shared mapping. Or, even simpler, encrypt your data both ways.

To give you a hand, do note that ci.dll has changed a lot up through Windows 10 and there is no documentation for this. You have to manually reverse the structs and function parameters, generate the .lib file, and link against it from your driver.

dumpbin /EXPORTS c:\windows\system32\ci.dll
lib /def:ci.def /machine:x64 /out:ci.lib
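
If you haven’t done this before, the ci.def fed to that lib command is just a plain module-definition file listing the exports you saw in the dumpbin output - roughly like this (trimmed to the one function mentioned above; the prototype itself still has to be reversed per build):

; ci.def - minimal module-definition file used to generate ci.lib
; (export names come from dumpbin; the function signatures remain undocumented)
LIBRARY ci.dll
EXPORTS
    CiCheckSignedFile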

Enjoy.

You can check for signed binaries in Kernel, Windows does it

The discussion here is not about whether it’s possible or not; in kernel mode almost everything is possible. But is it the right design choice?

Everyone here simply mentioned it’s not supported, and not worth the effort required to maintain this piece of code… Supporting code that uses undocumented structures and functions that often change is not as simple as “making it work in a test environment”. The signature of this function (CiCheckSignedFile) has changed multiple times, so if you use it you have to adjust your code to the OS version and keep testing your product against insider builds to make sure the function has not changed, so you don’t cause BSODs…

This.

@Mecanik said:
I am really not going to comment on stuff here, because there would be too much to say.

The least I will say, @brad_H, is that you are very wrong about many things, especially ObCallbacks and virtualization like VMProtect.

To answer your question:

  • You can check for signed binaries in Kernel, Windows does it, and always has since Vista. Look at the ci.dll exported function CiCheckSignedFile; it will do everything you want.
  • You cannot “secure” IOCTLs with this; find another method of communication, like a shared mapping. Or, even simpler, encrypt your data both ways.

To give you a hand, do note that ci.dll has changed a lot up through Windows 10 and there is no documentation for this. You have to manually reverse the structs and function parameters, generate the .lib file, and link against it from your driver.

dumpbin /EXPORTS c:\windows\system32\ci.dll
lib /def:ci.def /machine:x64 /out:ci.lib

Enjoy.

Doing this using ci.dll is very risky, considering most of it is undocumented and the structures change vastly from time to time; it’s basically not a viable solution for a product.

@0xrepnz said:

You can check for signed binaries in Kernel, Windows does it

The discussion here is not about whether it’s possible or not; in kernel mode almost everything is possible. But is it the right design choice?

Everyone here simply mentioned it’s not supported, and not worth the effort required to maintain this piece of code… Supporting code that uses undocumented structures and functions that often change is not as simple as “making it work in a test environment”. The signature of this function (CiCheckSignedFile) has changed multiple times, so if you use it you have to adjust your code to the OS version and keep testing your product against insider builds to make sure the function has not changed, so you don’t cause BSODs…

I disagree. By “supported” I’m sure you mean that if Microsoft creates documentation and says “we support this”, then it’s “supported”? That’s not how I see it, at all.

There are literally a dozen functions inside NT that haven’t changed (much) since XP, and people still use them. In fact, it is often better to use “undocumented” functions/functionality, for several reasons.

Example: https://social.msdn.microsoft.com/Forums/vstudio/en-US/a084abc6-60c7-476e-92c6-5930856ca70d/win32-getversionex-returns-wrong-informatiion?forum=vcgeneral

Instead of using the “documented” GetVersion(), it is better to use RtlGetVersion() + RtlGetNtVersionNumbers(). No “manifest” needed, no bonanza.
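
For what it’s worth, here is a minimal user-mode sketch of that idea (the helper name is mine, not from any SDK): resolve RtlGetVersion out of ntdll.dll at run time and call it to get the real version, manifest or not.

#include <windows.h>

// RtlGetVersion reports the true OS version; unlike GetVersion/GetVersionEx it
// is not affected by the application-compatibility manifest.
typedef LONG (WINAPI *RtlGetVersionFn)(PRTL_OSVERSIONINFOW);

static BOOL GetRealOsVersion(RTL_OSVERSIONINFOW *info)
{
    RtlGetVersionFn pRtlGetVersion = (RtlGetVersionFn)
        GetProcAddress(GetModuleHandleW(L"ntdll.dll"), "RtlGetVersion");

    if (pRtlGetVersion == NULL)
        return FALSE;

    RtlZeroMemory(info, sizeof(*info));
    info->dwOSVersionInfoSize = sizeof(*info);
    return pRtlGetVersion(info) >= 0;   // NTSTATUS >= 0 means success
}

(In a driver you can skip the GetProcAddress dance entirely; RtlGetVersion is exported by ntoskrnl and is documented for kernel-mode callers in wdm.h.)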

Another example: I use only undocumented functions and system calls in my real-life applications, and they are used by hundreds of users. I simply hate the Win32 APIs; they are slow, can be intercepted easily, etc.

So I really disagree with this “not supported” bonanza. Functions have been listed as “deprecated” on the Microsoft website for two decades, but they are still there, they still work, and that’s that.

As for ci.dll, the only issue is that Microsoft did not post any documentation about it, which is why people say “not supported”. A developer can simply open ci.dll in IDA, and the .pdb is available from Microsoft, making reversing/exporting these functions quite easy.

Once you have created your import .libs, they won’t change. There are many Windows editions released that rely heavily on these functions, and Microsoft will 100% not reinvent the wheel and change them.

You can reliably use “undocumented” functions (up to a point), but of course you need to understand what you are doing.

@brad_H said:

@Mecanik said:
I am really not going to comment on stuff here, because there would be too much to say.

The least I will say, @brad_H, is that you are very wrong about many things, especially ObCallbacks and virtualization like VMProtect.

To answer your question:

  • You can check for signed binaries in Kernel, Windows does it, and always has since Vista. Look at the ci.dll exported function CiCheckSignedFile; it will do everything you want.
  • You cannot “secure” IOCTLs with this; find another method of communication, like a shared mapping. Or, even simpler, encrypt your data both ways.

To give you a hand, do note that ci.dll has changed a lot up through Windows 10 and there is no documentation for this. You have to manually reverse the structs and function parameters, generate the .lib file, and link against it from your driver.

dumpbin /EXPORTS c:\windows\system32\ci.dll
lib /def:ci.def /machine:x64 /out:ci.lib

Enjoy.

Doing this using ci.dll is very risky, considering most of it is undocumented and the structures change vastly from time to time; it’s basically not a viable solution for a product.

Everything is “risky”, even “documented” functionality. That’s not the point, and that doesn’t stop you from using something that’s already there.

In this case, ci.dll has not changed that much besides new functions being added; the structures seem the same from Win 7 (tested) to Win 10 (tested).

Spoken like an experienced student, or a researcher. Who writes software used by hundreds of users. Not to be confused with “hundreds of thousands” or “millions” :slight_smile:

I don’t have time to write a diatribe on this… But the point of something being “supported” is that the functionality is documented and the dev owner has agreed they won’t change the contract inherent in the function. This isn’t Linux, where the parameters of kernel functions change between releases. Even more importantly, when a function is “supported” it means you can expect it to properly achieve its documented goal. It acquires the necessary locks. It works at the documented IRQLs. The dependencies of undocumented functions are… undocumented.

This is not to say that you can never use ANY undocumented function. There are, indeed, some functions that have been around “a long time”… have not changed… are well known… and provide utility that’s not otherwise available. It’s a matter of risk, as you recognize. What type of software you write, what that software is used for, the environment in which it’s used, the consequences of failure, and the ultimate users of the software all impact what level of risk is acceptable. If you write some gritty utility that “hundreds of people” can download from GitHub… that’s one level of risk. If you write software that’s run on millions of desktops and servers worldwide, that’s a different level of risk. If your software controls some flying contraption, with or without people on board, that’s yet another level of risk.

We all get to choose, evaluate, and determine the level of risk with which we are comfortable. That’s called “engineering.”

Peter


@Mecanik:

There are literally a dozen functions inside NT that haven’t changed (much) since XP, and people still use them. In fact, it is often better to use “undocumented” functions/functionality, for several reasons.

I did not say that using any undocumented API is wrong… It’s always a trade-off that you need to think about. If there’s a benefit to using the undocumented API, you have to “evaluate the risk” and decide if it’s worth it, as Peter said. Evaluating the risk can be very hard at times and requires YEARS of experience with the Windows kernel and knowledge about potential issues and edge cases. We just suggested that in this specific case it’s not worth it. You can read my previous comments to understand why it’s not worth it… in my opinion.

I simply hate the Win32 APIs; they are slow, can be intercepted easily, etc.

1 - The “can be intercepted easily” argument is just plain incorrect. Moreover, even if it were correct, why do you care?
2 - “Hate the Win32 APIs” - well, nobody “loves” them… but is that a reason to take the risk?
3 - “They are slow” - design decisions that are based on “runtime performance” reasons should be backed by tests. If you perform a test and observe a performance overhead that is caused by a Win32 API (and the issue is solved by switching to an undocumented API), that may be a good reason to use the undocumented API, after evaluating the risk. Simply rejecting ALL of the Win32 API because it’s “slow” is not a good reason, in my opinion - this is called “premature optimization”.

As for ci.dll, the only issue is that Microsoft did not post any documentation about it, which is why people say “not supported”. A developer can simply open ci.dll in IDA, and the .pdb is available from Microsoft, making reversing/exporting these functions quite easy.

This was already reverse engineered: https://github.com/Ido-Moshe-Github/CiDllDemo
The time spent on reverse engineering these functions is not a consideration… Reverse engineering is something you do once; compare that to supporting millions of customer machines for years and having complicated code in your product.

Once you have created your import .libs, they won’t change. There are many Windows editions released that rely heavily on these functions, and Microsoft will 100% not reinvent the wheel and change them.

Why are you so sure about it? MSFT already changed this specific function in the past, so how can you be sure?

Also, it’s not only whether “it’s going to change or not” - there are caveats to using undocumented APIs… The simplest example is using APC - How are you handling the unload of your driver safely? Are you aware of the synchronization and locking issues?


simplest example is using APC

LOL… that is a GREAT example. There was a client who wanted us to help them with a driver… this was YEARS ago. They had some bugs. It used kernel APCs all over the place. We politely declined.

Peter

@0xrepnz said:
I did not say that using any undocumented API is wrong… It’s always a trade-off that you need to think about. If there’s a benefit to using the undocumented API, you have to “evaluate the risk” and decide if it’s worth it, as Peter said. Evaluating the risk can be very hard at times and requires YEARS of experience with the Windows kernel and knowledge about potential issues and edge cases. We just suggested that in this specific case it’s not worth it. You can read my previous comments to understand why it’s not worth it… in my opinion.

OK; we are jumping from one thing to another. Of course you have to evaluate risk, and of course you need years of experience, but it is well worth it for many reasons. The thread asked for signature checking in kernel… so I have a potential solution. How this solution is taken is up to the OP.

@0xrepnz said:
1 - The “can be intercepted easily” argument is just plain incorrect. Moreover, even if it were correct, why do you care?
2 - “Hate the Win32 APIs” - well, nobody “loves” them… but is that a reason to take the risk?
3 - “They are slow” - design decisions that are based on “runtime performance” reasons should be backed by tests. If you perform a test and observe a performance overhead that is caused by a Win32 API (and the issue is solved by switching to an undocumented API), that may be a good reason to use the undocumented API, after evaluating the risk. Simply rejecting ALL of the Win32 API because it’s “slow” is not a good reason, in my opinion - this is called “premature optimization”.

1 - you must be joking? “can be intercepted easily” = just put a detour on CreateFile in your current process when you read your license file, LOL.
2 - Yes. Many reasons, too many actually. I’m not going to start listing them; you need to do your own research and see why.
3 - All Win32 APIs are slow; tests have been done, and this is a fact. It is much faster to call Nt* functions directly from ntdll.dll than to use, for example, kernel32.dll functions. I won’t go into direct system calls, which are just amazing in performance and security.

The reality is that Microsoft had to create another layer of functions that is easy for newbie developers to use. I just do not see another reason, as these are not cross-platform, and they have called the same functions from ntdll.dll for decades, and still will.

Example:

HANDLE hFile = CreateFileW(L"banana.txt", GENERIC_WRITE, 0, NULL, CREATE_NEW, FILE_ATTRIBUTE_NORMAL, NULL);

Much easier than:

HANDLE hFile;
// NtCreateFile needs an NT-namespace path; a bare relative name won't open.
static WCHAR const String [] = L"\\??\\C:\\banana.txt";
static UNICODE_STRING UnicodeString = RTL_CONSTANT_STRING (String);

OBJECT_ATTRIBUTES obj{};
InitializeObjectAttributes(&obj, &UnicodeString, OBJ_CASE_INSENSITIVE, NULL, NULL);

IO_STATUS_BLOCK isb{};

NTSTATUS status = NtCreateFile(&hFile, FILE_GENERIC_WRITE, &obj, &isb, NULL, FILE_ATTRIBUTE_NORMAL, FILE_SHARE_WRITE, FILE_OPEN_IF, FILE_RANDOM_ACCESS|FILE_NON_DIRECTORY_FILE|FILE_SYNCHRONOUS_IO_NONALERT, NULL, 0);

Ya dig?

@0xrepnz said:
This was already reverse engineered: https://github.com/Ido-Moshe-Github/CiDllDemo
The time spent on reverse engineering these functions is not a consideration… Reverse engineering is something you do once; compare that to supporting millions of customer machines for years and having complicated code in your product.

Glad you Googled and started to understand “hidden” functions used by Microsoft in your Windows for years. However, regarding reversing: it is well worth it if you cannot achieve your goal another way. Also, you know that you can create a pattern and find the same function in later releases, right?

@0xrepnz said:
Why are you so sure about it? MSFT already changed this specific function in the past, so how can you be sure?

Because I tested? I’m not running my mouth for no reason; I made tests from Win 7 to Win 10. For me it worked completely fine, and I have only been using official images.

@0xrepnz said:
Also, it’s not only whether “it’s going to change or not” - there are caveats to using undocumented APIs… The simplest example is using APC - How are you handling the unload of your driver safely? Are you aware of the synchronization and locking issues?

OK, APC. Perhaps open IDA and see how the function works before making assumptions? From what I saw, you just Googled about this and found a repo about it, so how can you be so sure about APC? Regardless, if it doesn’t fit in your “bucket”, just use it in a different manner? Be creative? I mean.

Anyway, I came here with good intentions to help the OP; I’m not going to argue about what’s wrong and what’s good; I will leave that for someone else :slight_smile:

first, RtlGetVersion is documented
https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/wdm/nf-wdm-rtlgetversion

second, deciding that all win32 APIs are slow is ridiculous. Yes Microsoft has added compatibility shims innumerable, but what would you suggest instead? the C# managed wrapper classes around the win32 APIs?

and then comes judgement. Not just any judgement, but engineering judgement. Can Microsoft change it? If something is implemented as a macro in a header file the answer is, well no - existing binaries would then fail the ABI. If it is implemented as a function call, then is there a possibility of a different implementation?

certificate checking is a relatively young species of technology and at least two generations of internal functions that deal with these crypto algorithms have been obsoleted already. And we are constantly hearing about quantum algorithms that will outmode the current ones completely, so my confidence that this is part of a stable API surface is low. I strongly suspect that this will change again. I have no problem with the use of OVERLAPPED.Internal as the NTSTATUS, as that has been stable for around 25 years and there is no real alternative possible
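
Concretely, a hedged sketch of that OVERLAPPED.Internal usage (the handle must have been opened with FILE_FLAG_OVERLAPPED; the helper name and IOCTL code are placeholders):

#include <windows.h>
#include <winternl.h>   // NTSTATUS

// After an overlapped DeviceIoControl completes, OVERLAPPED.Internal holds the
// final NTSTATUS of the underlying request.
static NTSTATUS SendIoctlAndGetStatus(HANDLE device, DWORD ioctlCode)
{
    OVERLAPPED ov = { 0 };
    ov.hEvent = CreateEventW(NULL, TRUE, FALSE, NULL);
    if (ov.hEvent == NULL)
        return (NTSTATUS)0xC0000001;   // STATUS_UNSUCCESSFUL

    if (!DeviceIoControl(device, ioctlCode, NULL, 0, NULL, 0, NULL, &ov) &&
        GetLastError() == ERROR_IO_PENDING)
    {
        WaitForSingleObject(ov.hEvent, INFINITE);
    }

    NTSTATUS status = (NTSTATUS)ov.Internal;   // the request's completion status
    CloseHandle(ov.hEvent);
    return status;
}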

if you are writing test code or learning, sure. if you are writing code that is just for yourself or your own company, sure. if you are writing code that has broader implications, think and make ethical judgements

@MBond2 said:
first, RtlGetVersion is documented
https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/wdm/nf-wdm-rtlgetversion

second, deciding that all win32 APIs are slow is ridiculous. Yes Microsoft has added compatibility shims innumerable, but what would you suggest instead? the C# managed wrapper classes around the win32 APIs?

and then comes judgement. Not just any judgement, but engineering judgement. Can Microsoft change it? If something is implemented as a macro in a header file the answer is, well no - existing binaries would then fail the ABI. If it is implemented as a function call, then is there a possibility of a different implementation?

certificate checking is a relatively young species of technology and at least two generations of internal functions that deal with these crypto algorithms have been obsoleted already. And we are constantly hearing about quantum algorithms that will outmode the current ones completely, so my confidence that this is part of a stable API surface is low. I strongly suspect that this will change again. I have no problem with the use of OVERLAPPED.Internal as the NTSTATUS, as that has been stable for around 25 years and there is no real alternative possible

if you are writing test code or learning, sure. if you are writing code that is just for yourself or your own company, sure. if you are writing code that has broader implications, think and make ethical judgements

Well, if that’s what you understood from my comment, I’ll just shut up and mind my own business.