So, I’ve got a scenario where I’d like to use KeQueryPerformanceCounter, but the documentation seems to be using scare tactics. What’s the real story here?
“Depending on the platform, KeQueryPerformanceCounter can disable system-wide interrupts for a minimal interval.”
How can I determine whether my platform is one of the affected ones? And how long is this “minimal interval”?
Here’s why I want to use KeQueryPerformanceCounter…
In hardware I have a decrementing counter that runs at a tunable frequency. The goal is to have it reach 0 exactly on the second, and I can adjust the frequency to make that happen. I have also provided myself with a free-running counter that increments at the current frequency. What I am going to do is periodically (say, with a 1-second timer) take a sample from this free-running counter along with a KeQueryPerformanceCounter sample. I’ll then find the elapsed time (say, in microseconds) since the last sample pair and determine how much my tunable frequency needs to be adjusted to align with the system clock.
Is calling KeQueryPerformanceCounter periodically, on the order of once per second, going to degrade my I/O performance and/or the whole system as the documentation warns? Should I be approaching this problem in a different manner? In my Linux driver I can retrieve microsecond-resolution values from the current system time. On Windows the best I can find is millisecond resolution, and even those values only get updated once per clock tick, roughly every 10 ms. To me it seems KeQueryPerformanceCounter is my only choice.