This question consists of two parts:
1. How to technically restrict access to the driver to certain processes. I've read that SeAccessCheck is a good way to start.
2. How to verify the calling process.
Here I'm assuming that randomizing the CTL_CODE function argument is not the way to go.
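(For context on why that buys little: a control code is just four fields packed into 32 bits, and anyone with the driver binary can unpack them. A minimal Python sketch mirroring the DDK's CTL_CODE macro; the device type and function values below are made-up examples.)

```python
# CTL_CODE packs four fields into a 32-bit IOCTL code; anyone with the
# driver binary can unpack them, so "randomizing" codes is only obscurity.
METHOD_BUFFERED = 0
FILE_READ_ACCESS = 1

def ctl_code(device_type: int, function: int, method: int, access: int) -> int:
    """Mirror of the CTL_CODE macro from the Windows DDK headers."""
    return (device_type << 16) | (access << 14) | (function << 2) | method

# A typical vendor-defined IOCTL: custom device type 0x8000, function 0x800.
code = ctl_code(0x8000, 0x800, METHOD_BUFFERED, FILE_READ_ACCESS)
print(hex(code))  # 0x80006000
```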
Assuming both the driver and the client are signed, is it a good idea to verify the signature of the calling process from the driver and rely entirely on that? Or are there other solutions?
And if neither is signed, does that make verification impossible?
I guess this question is not necessarily about IOCTL or socket communication, but more generally about how to verify that a process is the one trusted by the driver, i.e. the one implemented by the driver's developers that is meant to communicate with it.
Security in Windows is based on 'security principals', a.k.a. SIDs, and you should work within that framework. Certificate validation is the wrong approach. You lock down access to your device object by restricting it to specific SIDs using access control lists. Start here: https://learn.microsoft.com/en-us/windows-hardware/drivers/driversecurity/windows-security-model
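(For the curious: in practice the lockdown is usually expressed as an SDDL security descriptor applied when the device object is created, for example via IoCreateDeviceSecure from wdmsec.h. A common descriptor, SDDL_DEVOBJ_SYS_ALL_ADM_ALL, grants access only to SYSTEM and built-in administrators:)

```
D:P(A;;GA;;;SY)(A;;GA;;;BA)
```

Reading left to right: D:P is a protected DACL (no inherited ACEs), and each (A;;GA;;;...) ACE allows generic-all access to one SID: SY (SYSTEM) and BA (built-in Administrators). A process whose token lacks those SIDs fails CreateFile on the device before the driver ever sees an IRP.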
Now I understand a bit more about security policies in Windows but I still do not understand how this prevents other programs from communicating with the driver.
This specifies exactly who can communicate with the driver, right?
Does this mean any process that has this SID can communicate with the driver? But can't anyone find the string inside the driver and make a program with the same SID to communicate with the driver?
I also still don't understand what the GUID is or where to get it from.
There are very few examples online on securing the communication between a driver and a client, unfortunately.
In order to get that SID, the process has to log in. You can't just give yourself any SID you want.
Tim Roberts, [email protected]
Providenza & Boekelheide, Inc.
Does that mean I need to make my client process "log in" (as a unique user X, with login credentials?) in order for it to communicate with the driver?
I don't quite understand what login means in this context; could you provide a link?
The part I also still don't understand is how this stops reverse-engineers from emulating exactly what the client does in order to also communicate with the driver.
Is the topic of securing client/driver communication not that important, hence the lack of resources?
Typically the restriction is to a group SID rather than a user SID, and you manage membership in that group using various AD tools. A much simpler approach is to (over)use the Admin group: restrict access to anyone in the admin group and set your program to run as admin; the user then has to have rights to run as admin and has to acknowledge the elevation at runtime. If for some reason you don't want the admin approach, AD groups are the way to go.
At the risk of being disrespectful, you also need to stand back and ask yourself honestly how much protection is really warranted. Security is a sliding scale -- you can spend as much money as you have. It doesn't matter what you come up with, a dedicated hacker can get through it. You have to weigh the cost of your protection solution against the likelihood and cost of a violation. Is anyone REALLY going to try to use your driver directly? The typical user won't even try, of course. And if there was a violation, what would be the cost?
Tim Roberts, [email protected]
Providenza & Boekelheide, Inc.
Or, you restrict access to your device/driver to a given SERVICE and run that service with a unique SERVICE SID.
You should really think carefully before you put a whole bunch of policy in kernel-mode. In general, architecturally, this is not where policy like this belongs.
I understand that completely, but this question actually has nothing to do with a real-life project. The question is just for me to learn how to do it, simply because I found it interesting. I thought there was a standard way to allow only the program that was created by the driver's developer to communicate with it. This does not seem to be the case, however.
Now I have my Services window open and am wondering how many of these drivers were written with an attack like this in mind. If every process could (theoretically) communicate with every driver, that would be a problem. We have PatchGuard along with Driver Signature Enforcement; I thought this would provide some level of integrity to make a standard like this possible.
I may have failed to make it clear, but in the question I meant to assume 100% integrity of kernel space. Under that assumption, I'm sure this must be possible.
I will look into this.
But every process can't, because the Windows security model is based on users and groups, and devices are generally restricted to the admin group. Step one is to lock down admin access.
I think I have a better understanding now. Or maybe not.
So the solutions here will make it difficult for a random process on a random machine to communicate with a random driver, or even with a driver that the process brings along (as some malware does). But they will not prevent someone from willingly giving their own program the privileges (or whatever it takes) to communicate with a driver. That is the part that is impossible to 100% prevent, even assuming 100% kernel integrity, because the user could simply emulate/duplicate whatever the legitimate process does to communicate with the driver, right? If so, then why not use certificates? You can't fake them, so if the driver's code is assumed to be 100% secure, problem solved.
If you insist on giving users on a system admin rights, then sure, there is no way to secure that system. So don't do that. You've already decided that you are going to verify your process by using a cert, so there doesn't seem to be much point in trying to persuade you otherwise. However, since you have given your users admin rights, I'm going to exploit that by using Detours-style interception to insert my malware into your running, special cert-protected process and wreck your driver from there.
But you can certainly copy them. What does the phrase "use certificates" mean to you? Are you going to pass a certificate through an ioctl? That's easily copied. Are you going to require the executable to be digitally signed, and check the checksum at runtime? Could be done, but it's a pain, since you have to update the kernel driver every time the app changes.
It's not a trivial problem, by any means.
Tim Roberts, [email protected]
Providenza & Boekelheide, Inc.
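(To make the "check the checksum at runtime" option concrete, here is a toy user-mode sketch of the idea in Python. A real driver would have to hash the client image from kernel mode; the file and digest here are made-up stand-ins, and this is exactly the scheme that forces a driver update whenever the app changes.)

```python
import hashlib
import os
import tempfile

def image_digest(path: str) -> str:
    """SHA-256 of a file on disk, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-in for the client executable on disk.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"pretend this is client.exe")
    path = f.name

# The driver would embed this digest at build time; the runtime check
# then only passes for the exact binary that was shipped.
expected = image_digest(path)
assert image_digest(path) == expected
os.unlink(path)
```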
I'm absolutely not going to do that. I'm arguing for it merely because I couldn't understand how security policies could prevent replication attacks by the user/admin themselves. Now I understand it is not as bad if only someone specifically targeting my driver could do it: if someone finds a way to communicate with the driver on their own machine and turn it into a harmful driver, that is fine as long as they can't do it on anyone else's machine.
This is exactly what I meant. Isn't that how driver signature enforcement does it? This was my first thought.
You're completely right, I overlooked that.
This also sounds interesting. I've never written a service for Windows; I'll definitely try that as well. There is never harm in learning more.
Ok, so you validated that your process is correctly signed. However I've put a filter driver in your device stack and I'm going to hijack your process context to send my malware into your driver.
I was trying to understand whether, if someone was able to communicate with my driver on their own machine (given they can do whatever it takes on that machine, i.e. disable DSE, enable debug mode, attach kernel debuggers, byte-patch the kernel, etc.), they would then be able to do the same on random people's machines with all the aforementioned default Windows restrictions and limitations in place (forget anti-viruses).
If the only way for you to attack my driver is to violate the Windows kernel's integrity, then I'm completely in the clear: you may easily do it on your own machine, but it is not an easy task on the PCs of random people who (in this hypothetical scenario) use my driver. It's not my duty to prevent kernel attacks anyway, unless I were working on an AV, which is probably not the case.
But it isn't. As I noted earlier. I can hijack your process by code injection in user mode.
Not if the process is protected by the kernel driver on initialization.
The "if the process is protected" is the tricky part. How do you know which process to protect?
Process hollowing/doppelgänging/herpaderping/ghosting are hard to deal with (see https://www.microsoft.com/en-us/security/blog/2022/06/30/using-process-creation-properties-to-catch-evasion-techniques/). And, even better, if I fake you out into thinking that my process is your "special process", then you'll even protect me from being tampered with by anything else 😂
Note that the Windows solution for this is a PPL (Protected Process Light). It attacks the problem a little differently, by ensuring you can create a service that only runs code signed by you or Microsoft. Combine that with a Service SID and you're pretty hardened.
Even assuming that your special process is perfectly protected during initialization, its user-mode address space includes data that does not exactly match the disk image, and it is infeasible to 'protect' all of the loaded DLLs and relocation information. So yes, I can hijack your special process in UM after you have verified the image. And I can do it not just on my machine with a debugger hooked up, but on any other machine that has my software installed.
The short version is: use the security model implemented by the OS.
I'm still arguing for a solution that I will not end up using just to learn what is wrong with it.
Good luck finding a preimage for my signed sections.
This doesn't apply to all sections. Specifically, the .text section is mapped as-is. I know you could probably find a way around it by adding sections etc., but I wonder whether the Windows security model (without PPL, which costs money) prevents all the mentioned attacks!
That's the approach I'm going to take. I just still don't understand why the service SID is tied to its name, according to this: https://pcsxcetrasupport3.wordpress.com/2013/09/08/how-do-you-get-a-service-sid-from-a-service-name/
Can't any service start with any name it wants? How does the driver distinguish?
Services can't start with arbitrary names. Services can only start with names that have been preconfigured in the registry, and the parameters passed to StartServiceCtrlDispatcher must match those the SCM expects when it launches the process or sends a control to a shared-process service. The SIDs derive from the names because they must be unique and must be installed by an admin.
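(The derivation the linked post describes can be sketched in a few lines of Python. This assumes the algorithm is SHA-1 over the upper-cased service name in UTF-16-LE, split into five 32-bit subauthorities after the S-1-5-80 prefix:)

```python
import hashlib
import struct

def service_sid(service_name: str) -> str:
    """Derive the virtual service-account SID (S-1-5-80-...) from a service name.

    The SID is deterministic: SHA-1 of the upper-cased service name encoded
    as UTF-16-LE, with the 20-byte digest split into five little-endian
    32-bit subauthorities.
    """
    digest = hashlib.sha1(service_name.upper().encode("utf-16-le")).digest()
    subauths = struct.unpack("<5I", digest)
    return "S-1-5-80-" + "-".join(str(s) for s in subauths)

print(service_sid("TrustedInstaller"))
# S-1-5-80-956008885-3418522649-1831038044-1853292631-2271478464
```

Because the SID is a pure function of the name, and service names are registry-unique and installable only by an admin, no two services can end up with the same SID and no separate account database is needed.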
If I have access to your software, and a debugger, I will surely be able to find a match for your 'super special' section in minutes, a few hours at the most. At worst, I just use your whole image but start execution somewhere else that never calls your code. You can't protect against that kind of attack without also making your software unusable after a Windows update.