In kernel mode, how can I accurately calculate, with at least as much
resolution, the same value the user-mode function timeGetTime() would
return? That function returns the number of milliseconds since the system
was started. KeQuerySystemTime() isn't any good because it's not accurate
enough (plus I don't know how to find out what time the system booted
anyway -- KeQuerySystemTime() returns a time relative to the year 1601).
KeQueryInterruptTime()/KeQueryTickCount() with KeQueryTimeIncrement() don't
*seem* any good either, because apparently (according to page 311 of Dekker
& Newcomer) the result of KeQueryTimeIncrement() can vary with user-mode
calls to timeBeginPeriod()/timeEndPeriod(). Typically the increment will be
something like 10 ms, but timeBeginPeriod(1) could change it to 1 ms, for
example, throwing off any calculations done with it.
And, as I understand it, this would seem to imply that the comment for
KeQueryTickCount() that appears in the Windows 2000 DDK, "To determine the
absolute elapsed time multiply the returned TickCount by the
KeQueryTimeIncrement", is wrong. What's the deal?
In any case, is the following (first stab) function actually correct?
If so, then I can just code the function I need like so, eh?
return KeQueryInterruptTime() / 10000;
But, what if user-mode code calls timeBeginPeriod(1) to reduce the interval,
etc.? Does the value returned by KeQueryInterruptTime() increment at a
higher rate then?
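To pin down the arithmetic I have in mind, here's a minimal user-mode C sketch of that first stab. StubKeQueryInterruptTime() and its hard-coded value are invented stand-ins for the real KeQueryInterruptTime(), which reports time in 100-nanosecond units since boot:

```c
typedef unsigned long long ULONGLONG;

/* Invented stand-in for KeQueryInterruptTime(): the real routine returns
 * the interrupt time in 100-nanosecond units since boot; here it is
 * hard-coded so the conversion can be checked outside the kernel. */
static ULONGLONG StubKeQueryInterruptTime(void)
{
    return 123450000ULL;  /* 12,345 ms expressed in 100-ns units */
}

/* First-stab uptime-in-milliseconds routine: interrupt time is kept in
 * 100-ns units, so dividing by 10,000 yields milliseconds. */
static ULONGLONG GetUptimeMs(void)
{
    return StubKeQueryInterruptTime() / 10000;
}
```

With the stub value above, GetUptimeMs() comes out to 12,345 ms, which is the kind of value I'd expect timeGetTime() to hand back.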
Or, is the following function correct?
return KeQueryTickCount() * KeQueryTimeIncrement() / 10000;
And again, what if user-mode code calls timeBeginPeriod(1)? Will
KeQueryTimeIncrement() start returning smaller values?
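For comparison, here's a user-mode C sketch of the tick-count flavor. The stub routines and their values (1,234 ticks, a 100,144-unit increment) are invented, but they mirror the real shapes: KeQueryTickCount() fills in a LARGE_INTEGER you pass rather than returning one, and KeQueryTimeIncrement() gives the number of 100-ns units added to the tick count per clock interrupt:

```c
typedef unsigned long long ULONGLONG;
typedef struct { long long QuadPart; } LARGE_INTEGER;

/* Invented stand-in for the KeQueryTickCount() macro, which writes the
 * tick count into the LARGE_INTEGER you pass in. */
static void StubKeQueryTickCount(LARGE_INTEGER *Count)
{
    Count->QuadPart = 1234;  /* made-up tick count */
}

/* Invented stand-in for KeQueryTimeIncrement(): number of 100-ns units
 * added to the tick count per clock interrupt (~10.0144 ms here). */
static unsigned long StubKeQueryTimeIncrement(void)
{
    return 100144;
}

/* Tick-count flavor of the uptime calculation: ticks * increment is in
 * 100-ns units, so it still needs the divide-by-10,000 to land in
 * milliseconds. */
static ULONGLONG GetUptimeMsFromTicks(void)
{
    LARGE_INTEGER ticks;
    StubKeQueryTickCount(&ticks);
    return (ULONGLONG)ticks.QuadPart * StubKeQueryTimeIncrement() / 10000;
}
```

And that's exactly my worry: if the increment can change under timeBeginPeriod(), then multiplying the *current* increment by the *total* tick count mixes ticks that had two different lengths.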