Evaluating performance of a minifilter

Dear fellow developers,

Over the past months, I have written a minifilter which registers some IRP callbacks and some system hooks such as _PsSetCreateProcessNotifyRoutine_ and ObCallbacks. I would like to evaluate/benchmark the potential loss of time that the driver may have introduced once installed on a system (due to the synchronous code executed, maybe bad coding practices, etc.). Is there a standard approach to evaluating the performance of a minifilter? My intuitive idea was to perform a lot of file system interactions (e.g., opening/reading/writing/closing files) and process interactions with and without the driver and compare the results, but I'm not sure if the signal-to-noise ratio would be meaningful enough.

Any thoughts?

Thanks!

There is no easy answer. Ideally you’d have a series of workloads that mimic the real systems you’re going to run on and could take measurements based on those. With a filter often the answer to “which workloads do you want to support?” is “all of them”, which isn’t very helpful…

IOzone provides a utility called fileop that does a decent job of generating and measuring a bunch of file system operations. It sounds like you have more than just a file system filter, so you could extend this idea to measure HANDLE/process creation (it comes with source as well, which saves you from having to start from scratch). It is by no means perfect, and good fileop numbers alone don't mean you're in the clear, but it's an easy-to-capture data point.

If you want to look more at your impact on the system, you might want to try running standard PC benchmarks with and without your software (e.g. something like PCMark). This can give you a good feel for the application-level impact of your software, but the downside is that if you lower the score you really have NO idea what the hell the tests are doing (which is why we're not big on doing this here at OSR).

Also, don't underestimate dogfooding as a way to get a feel for the performance of your filter. You could nail every performance metric under the sun and it won't matter much to the user if drag and drop in Explorer is suddenly 50% slower. Sometimes nothing beats just using a system with your software on it to get a feel for its real-world impact.


Hey Scott! Many thanks for your great explanation of the state of driver benchmarking. I decided to write a little application which performs a lot of CreateFile/ReadFile/CloseHandle calls and got very interesting results. I'll probably do the same for process interactions/creations as well. 🙂

Cheers!