My software installs a little minifilter driver, which works fine. However, I'm worried that a hacker might:
Hack my driver.
Load the hacked driver.
While the hacked driver is loaded, delete the hacked driver file and replace it with my original file.
Then start my application.
In this scenario, is there any way for my user mode application to detect whether or not the loaded/running driver is the original unhacked driver?
One simple solution I could think of would be to call some Microsoft API like "GetLoadedDriverSignatureInfo(LPSTR driverName)", which would return the serial number of the code signing certificate the loaded driver was signed with. If that serial number matches my certificate, that would indicate the driver is likely the original, untouched driver and was not hacked. But I assume no such API exists, right?
(Important: Of course I can verify the signature of the driver file on the SSD, but that doesn't really help, because the hacker might have restored my original driver file after loading his hacked version of the driver.)
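For what the on-disk baseline check involves: the Authenticode signature lives in the PE file's security directory (data directory index 4), which a user-mode process can locate with nothing but the PE/COFF layout. A minimal sketch (in practice you would hand the file to WinVerifyTrust and then inspect the signer certificate; this just shows where the certificate blob sits, and the caveat above about the file being swapped back still applies):

```python
import struct

IMAGE_DIRECTORY_ENTRY_SECURITY = 4  # Authenticode certificate table

def security_directory(pe: bytes):
    """Return (file_offset, size) of the certificate table, or None if unsigned.

    The blob at that offset is a WIN_CERTIFICATE wrapping a PKCS#7
    SignedData structure; the signer certificate's serial number can be
    pulled out of it with any ASN.1 parser.
    """
    e_lfanew = struct.unpack_from("<I", pe, 0x3C)[0]
    if pe[e_lfanew:e_lfanew + 4] != b"PE\x00\x00":
        raise ValueError("not a PE file")
    opt = e_lfanew + 4 + 20                       # optional header start
    magic = struct.unpack_from("<H", pe, opt)[0]  # 0x10B = PE32, 0x20B = PE32+
    dirs = opt + (96 if magic == 0x10B else 112)  # data directory array
    entry = dirs + IMAGE_DIRECTORY_ENTRY_SECURITY * 8
    off, size = struct.unpack_from("<II", pe, entry)
    return (off, size) if size else None
```

Note that for the security directory the 32-bit "virtual address" field is actually a raw file offset, because the certificate data is never mapped into memory.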
For context: I'm trying to improve my software's license/copy protection. So when I say "hacker", I mean someone trying to break my software's licensing system. I'm not talking about "hacking" in the sense of somebody trying to break into my PC.
Basically, I want all the parts of my software (driver, service and application) to be able to verify if the other parts were hacked/modified or not. So for example, if my driver can detect that my service was hacked, my driver would refuse to even start. Or if my application detects that my driver was hacked, my application would refuse to even start.
If I manage to implement it like that, all 3 parts of my software would (sort of) protect each other, which means the hacker would have to hack all 3 parts at the same time in order to remove the license protection code, which should make things harder for him.
Of course I'm aware that a perfect protection is impossible, I just want to make it as hard as I can.
To load the hacked driver, it would have to be properly signed, or the user would have to turn on test signing using bcdedit. Both seem unlikely. Also, once the driver is loaded, the file is in use and Windows will make it very difficult to overwrite it.
Well, if it's a sophisticated hacker he might have a code signing certificate of his own. Which is what I'm kinda most scared of. That's why I'd love to be able to somehow verify which certificate ID the loaded/running driver was signed with.
From what I remember, once a driver is loaded, the file can be renamed or even deleted. Or am I misremembering that?
FWIW, I do plan to check if test signing is activated and my software will not run in that mode, just to be safe.
Enforcing licenses is a broad and complex topic. Reliable solutions use 'phone home' checks or hardware dongles, but even those don't protect against a determined attack - even one using just standard debuggers etc. And there are specialized tools, too.
There are a lot of other 'good enough' solutions that protect against different levels of sophistication. The best place to start is to think about what exactly you want to defend against, instead of just the vague 'hacking'. From that, it will likely become more obvious what kinds of controls are really appropriate.
Maybe he'd have a stolen one, which is difficult with EV certificates. Also, it will get revoked once he uses it for something illegal and gets caught. If he has a legit one of his own, it will point right back to him. It's not easy to get a code signing certificate.
@MBond2, I have a whole sophisticated protection scheme planned out, which includes some of my own ideas (including a minifilter driver) as well as some 3rd party anti-hacking products. Of course it's all not perfect, but I'm trying to make it as good as I possibly can.
@GrimBeaver, you make some good points there. Probably just checking whether test signing is enabled is already a good solution and will make life hard for the average hacker.
I've been told, though, that there is a professional group in China which is hacking my product and I fear they may have a legit EV certificate. They don't spread their hacks around, but keep it private with their customers, so I don't have access to the hack, and probably won't ever know which EV certificate they might use. So I'd still love to find a way to verify the certificate ID of the loaded driver to confirm it's mine. But I fear it might not be technically possible...
Code signing certs are no good for drivers anymore. They have to be signed by MSFT.
The major problem currently is legitimate drivers that have available exploits. The code itself is not altered, it doesn't have to be. See 'bring your own vulnerable driver'.
If your product can function without the need for specific hardware or submitting requests over the network, then you are basically SOL when it comes to a sophisticated and determined attack.
Anything that you can do can be undermined or just undone by the hackers. And it can be as simple as modifying your binary to elide the license check, or more subtle like distorting certificate checks (the OS can be modified too). You can make it harder. Maybe hard enough that it isn't worth the effort, but you can't make it impossible.
In this case the use of KM components makes only a small difference to overall security. Code obfuscation is probably more effective. But if they already have a version of your software that they can run illegally, all your effort now may be for naught.
Mostly agreed. But instead of just giving up, I'd still like to make it as hard as possible to crack it, in the hope that the hacker will give up because it's too much effort to crack. And yes, part of my new protection solution involves code obfuscation and virtualization, among many other things.
And IMHO one really nice extra protection layer would be if all parts (drivers and exes) of my software can cross check each other to verify if any modifications were done to any part, because then the hacker can only make progress if he cracks all parts of my software at the same time.
I'm actually running multiple exe files now, each protected with a different protection product (e.g. Themida), and all exe files work together to protect each other. So the hacker doesn't just have to break one protection product, but has to crack them all. And I want to prevent him from being able to crack one at a time. So they all check each other for modifications.
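The cross-checking described above can be as simple as each component carrying the expected SHA-256 digests of its siblings and refusing to run on any mismatch. A toy sketch (the file name and digest table are made up; in the real thing the table would be baked into the obfuscated/protected binary at build time, so patching any one file breaks the startup check in all of its siblings):

```python
import hashlib

# Hypothetical digest table: sibling file -> expected SHA-256 (hex),
# embedded into each protected binary at build time.
EXPECTED = {
    "service.exe": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def file_sha256(path: str) -> str:
    """Hash a file in chunks so large binaries don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def siblings_intact(expected=EXPECTED) -> bool:
    """Run at startup in every component; a missing or altered sibling fails."""
    try:
        return all(file_sha256(path) == digest for path, digest in expected.items())
    except OSError:
        return False
```

As noted elsewhere in the thread, this only checks the files on disk, not the loaded images, and the check itself is what a cracker will go after first - hence wrapping each binary in a different protector.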
The one missing piece of the puzzle is that I want to figure out how to check if the loaded driver was modified or not. Hence this thread.
Think about what you are trying to do. How does any software determine if some piece of data (another executable, or anything else) is 'correct'?
The code itself cannot know what the truly correct bytes in that data should be. All that any computer program can do is compare values that exist to some sort of reference source. A direct comparison, or some sort of indirect comparison.
Direct comparisons are sort of counterproductive: if you have the actual values from another source, why bother to validate the copy that you have at all?
The first sort of indirect comparison is a hash, usually a CRC or similar. These are comparisons of the data against itself, primarily designed to check for errors caused by bad hardware - bit rot etc. And while effective for that purpose, because hardware errors are presumed random, they don't provide security, because a human can plan to make two or more changes at once.
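The underlying weakness is that CRC is linear over XOR: for equal-length buffers, crc32(a ^ b ^ c) == crc32(a) ^ crc32(b) ^ crc32(c). That structure lets an attacker compute exactly which extra bits to flip so a deliberate patch leaves the checksum unchanged. A quick demonstration of the identity:

```python
import os
import zlib

def xor(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length buffers."""
    return bytes(x ^ y for x, y in zip(a, b))

a, b, c = (os.urandom(64) for _ in range(3))

# CRC-32 is affine: the init/final-XOR constants cancel when three
# equal-length inputs are combined, leaving a purely linear relation.
lhs = zlib.crc32(xor(xor(a, b), c))
rhs = zlib.crc32(a) ^ zlib.crc32(b) ^ zlib.crc32(c)
assert lhs == rhs
```

A cryptographic hash like SHA-256 has no such exploitable structure, which is why it (or a real signature over it) is the right tool for tamper detection.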
The next sort of indirect comparison is a 'digital signature'. This is also a way of comparing the data to itself, but in a more complex way and with an external reference (PKI). While it is not impossible for a binary to be modified in a way that does not invalidate its digital signature, the difficulty is much greater than with a CRC-style hash.
But then the question is how are any of these things checked? You can neither write your own code to do the validation, nor rely on OS-provided code to do it, without leaving yourself open to the exact same problem - the validation code is easier to attack than the hash/signature itself. Especially if the attacker is running the OS under a debugger. WinDbg interacts with the OS, but hardware-level debuggers don't - and that's much easier these days with the proliferation of virtual machines and open source hypervisors.
A determined attacker will have both given themselves authority (admin access) and bypassed security elements (certificate checks) in the OS they load your software into.
And then the question is: why would they bother to hack your new version if they already have a hacked version?
I mostly agree with you. I know I will not achieve 100% protection. The plan is simply to make it as time consuming and annoying as possible for the hacker to crack the new version. Maybe if I make it complicated enough, they might give up (I can dream).
So I'm conjuring up a large number of independent protection layers. One of them is that each exe/driver cross checks all the others to see if they were modified. Yes, in theory it's easy enough to remove these checks, but each exe/driver is also protected with (different) professional protection software. So they'd need to break the professional protection first, for each file. And they have to break every one of them, because if they only modify one of my files, the other files will notice the modification and refuse to run. So the hacker can't debug anything if he modifies only one of the files.
Another layer of protection is that the new version of my software will need the driver, and the OS usually requires it to be EV signed. So any crack may require putting the OS into test signing mode, if the hacker doesn't have a legit EV certificate. That's no problem for the hacker himself, but at the very least it's an inconvenience to his users.
Another protection layer is that my software will produce bad results if the hardware properties of the computer don't match the license key. This is a type of protection that is harder to figure out because the hacker really has to understand how the software works in order to find the code location which is responsible for producing bad vs good results.
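One way to make that layer hard to spot is to avoid an explicit pass/fail branch entirely: derive a constant from the license key and the machine fingerprint, and fold it into real calculations, so a mismatch silently skews the results. A toy sketch (the vendor secret, fingerprint source, and arithmetic are all made up for illustration; here the license key is issued as an HMAC of the hardware fingerprint):

```python
import hashlib
import hmac

def correction_factor(license_key: bytes, hw_fingerprint: bytes) -> int:
    """Derive a value that is only 'right' when key and hardware match.

    The license key is issued as HMAC(vendor_secret, fingerprint); we
    recompute that HMAC and take the difference, so there is no
    if-statement for a cracker to patch out.
    """
    VENDOR_SECRET = b"hypothetical-build-time-secret"
    expected = hmac.new(VENDOR_SECRET, hw_fingerprint, hashlib.sha256).digest()
    # XOR of the two digests is zero only for a matching license/machine pair.
    return int.from_bytes(expected, "big") ^ int.from_bytes(license_key, "big")

def compute(value: float, factor: int) -> float:
    # Correct result only when factor == 0; subtly wrong everywhere else.
    return value * 1.5 + factor * 0.0137
```

The idea is simply that the "license check" has no single observable decision point: the wrong hardware yields plausible-looking but corrupted output.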
So I hope if I just add enough additional layers of protection, I might be successful in making the hacker give up, because it's not worth the amount of time needed to crack all the various protection layers.
What's the alternative, anyway? Just giving up and doing no protection at all? That doesn't really make sense to me.
The reason they'd bother to hack the new version is that I'm constantly adding new features to my software. So if they don't crack new versions, their users will be unhappy, because they're missing out on the various new features that are regularly added.
As counterintuitive as it may seem, the 'no protection' option is where all of the major vendors have landed after a couple of decades of trying.
Unless your software is tied to specific hardware, or it needs network services that you control, the best that you can do is to make it hard. And code obfuscation is probably more effective than anything else. Everything else that you do, including the 'professional' tools, is mostly a waste of time: it makes it harder for legitimate users to comply without seriously deterring anyone but the most casual attacker.
Don't get me started on the futility of those 'professional' tools. The techniques to defeat them are not appropriate subject matter for this forum, but even a Wikipedia-level search will find them.
Yeah. People who buy software and people who use cracks are two different groups of people. There is some overlap, but it's not worth the effort. Some basic protection is enough.
Thank you all for contributing to this topic, but could we please move on from philosophical discussions to the original technical question I asked?
Considering the lack of technical answers, I suppose there's no way whatsoever to check which certificate a loaded/running driver was signed with?
E.g. one idea would be for my user mode service (which has admin rights) to read out the memory pages belonging to the loaded driver, so I could double check the PE image of the loaded/running driver in RAM. Any chance that is possible somehow?
Anybody have any added insight into or ideas how to solve my original technical question?
Not with your concerns, no. You can go look at the file, but the certificate section of the binary is in a discardable section that does not remain in memory after it is loaded.
And the philosophical discussions are way more valuable than you seem to think. A lot of companies have spent a lot of dollars trying to protect their IP, and the people here have legitimate, real world experience with that. You can literally spend as much money as you want to, but the principle of diminishing returns genuinely applies. The cost/benefit graph has a very high slope. The basic checking prevents 99.99% of hackery, but you may never be able to spend enough money to foil that one Chinese hacker.
" I suppose there's no way whatsoever to check which certificate a loaded/running driver was signed with?"
Of course there is. See for example the output from signtool verify -v
Mark, he's concerned about the pathological scenario where the hacker renames the original driver and substitutes his own, then loads the driver, then renames the original driver back into place. That CAN be done, even with loaded sections.