Increase Minimum Timer Resolution

Hello, I’d like to mention that I’m asking this question out of curiosity and am not a driver developer.
I’ve been searching through these forums and found an instance where a user was able to increase the minimum timer resolution from the usual 0.5 ms to 0.001 ms. How could I achieve such a minimum or something more reasonable that’s still greater than 0.5 ms?

Thanks.

On Apr 23, 2019, at 2:30 PM, Calypto wrote:
>
> I’ve been searching through these forums and found an instance where a user was able to increase the minimum timer resolution from the usual 0.5 ms to 0.001 ms. How could I achieve such a minimum or something more reasonable that’s still greater than 0.5 ms?

I think you misread that. The default timer resolution is about 15 ms. It is possible through timeBeginPeriod to set the resolution as low as 1 ms, but not below that.
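
For example, a minimal user-mode sketch (link against winmm.lib; the raised resolution stays in effect until the matching timeEndPeriod call):

    #include <windows.h>
    #include <mmsystem.h>
    #include <stdio.h>
    #pragma comment(lib, "winmm.lib")

    int main(void)
    {
        /* Ask the OS to raise the global timer resolution to 1 ms. */
        if (timeBeginPeriod(1) != TIMERR_NOERROR) {
            printf("1 ms resolution not available\n");
            return 1;
        }

        Sleep(10); /* Sleeps now wake with roughly 1 ms granularity. */

        timeEndPeriod(1); /* Always pair with timeBeginPeriod. */
        return 0;
    }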

Tim Roberts, timr@probo.com
Providenza & Boekelheide, Inc.

Thank you for your response, Tim. I should’ve posted the link to the comment in the original post; that’s my fault. https://community.osr.com/discussion/comment/208380/#Comment_208380 Correct me if I’m wrong, but that’s a 1 microsecond timer resolution, is it not?

I think I had my logic mixed up. I meant increase maximum resolution, or decrease the interval. My apologies for the misunderstanding.

Calypto wrote:


Hello, I’d like to mention that I’m asking this question out of curiosity and am not a driver developer.

I’ve been searching through these forums and found an instance where a user was able to increase the minimum timer resolution from the usual 0.5 ms to 0.001 ms. How could I achieve such a minimum or something more reasonable that’s still greater than 0.5 ms?

IMO this sounds like you are talking about the legacy function

KeQuerySystemTime, which has a best precision of 0.5 ms
https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/content/wdm/nf-wdm-kequerysystemtime~r1

and the new function
KeQuerySystemTimePrecise, which provides 100 ns precision
https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/content/wdm/nf-wdm-kequerysystemtimeprecise

which is supported in Windows 8 (IIRC) and later Windows versions.
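
For illustration, a minimal kernel-mode sketch of the difference (the routine name is just for this example; both values are in 100 ns units since January 1, 1601):

    #include <wdm.h>

    VOID CompareSystemTimes(VOID)
    {
        LARGE_INTEGER coarse, precise;

        /* Advances only when the system clock ticks, so its granularity
           tracks the current timer resolution (0.5 ms at best). */
        KeQuerySystemTime(&coarse);

        /* Also accounts for the time elapsed since the last tick;
           available on Windows 8 and later. */
        KeQuerySystemTimePrecise(&precise);

        DbgPrint("coarse=%I64d precise=%I64d\n",
                 coarse.QuadPart, precise.QuadPart);
    }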

The corresponding functions for user space applications are

GetSystemTimeAsFileTime
GetSystemTimePreciseAsFileTime
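
A minimal user-mode sketch comparing the two (the precise variant requires Windows 8 or later and a matching _WIN32_WINNT):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        FILETIME ft;
        ULARGE_INTEGER t;

        /* Coarse: advances only on a system clock tick. */
        GetSystemTimeAsFileTime(&ft);

        /* Precise: sub-microsecond; Windows 8 and later. */
        GetSystemTimePreciseAsFileTime(&ft);

        t.LowPart  = ft.dwLowDateTime;
        t.HighPart = ft.dwHighDateTime;
        printf("%llu (100 ns units since January 1, 1601)\n", t.QuadPart);
        return 0;
    }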

Martin

I’ve been searching through these forums and found an instance where a user was able to increase the minimum timer resolution from the usual 0.5 ms to 0.001 ms.

It’s impossible on most HALs.
P.S. Scheduling at a 100000 Hz rate isn’t a smart idea for a general-purpose OS on a general-purpose CPU. It literally kills any power savings.

P.S. Scheduling at a 100000 Hz rate isn’t a smart idea for a general-purpose OS on a general-purpose CPU.

Actually, I would go a bit further than that, and say that it is simply infeasible even in theory…

It literally kills any power savings.

Well, it has absolutely nothing to do with the power savings…

Let’s do a bit of math. 0.001 ms is one nanosecond, i.e. a clock cycle of a 1 GHz CPU. Making a system timer operate at such a frequency implies getting a timer interrupt every nanosecond. Even if we assume that we have some incredibly powerful RISC-based CPU (i.e. one that never spends more than 1 clock cycle on a single instruction) that is capable of processing multiple instructions in parallel (i.e. has an IA64-like instruction set) and is capable of running at some incredibly high rate of, say, 10 to 15 GHz, we would still get literally into an interrupt storm if our CPU gets interrupted every nanosecond: at least a dozen new timer interrupts would arrive while the CPU processes the current one. Now let’s get back to Earth and recall that, in actuality, the best that is potentially available to us is just an x86-64-based Xeon…

As you can see for yourself, such a timer is well beyond existing CPU capabilities…

Anton Bassov

0.001 ms is one nanosecond, i.e. a clock cycle of a 1 GHz CPU.
You’re slightly wrong in the math. It is not a nanosecond, it’s a microsecond (0.001 of a millisecond).
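
Spelling out the units: 0.001 ms = 0.001 × 0.001 s = 10^-6 s = 1 microsecond (1000 ns), so a tick every 0.001 ms means a 1 MHz interrupt rate, not 1 GHz.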

You’re slightly wrong in the math. It is not a nanosecond, it’s a microsecond (0.001 of a millisecond).

OMG…

You call it “slightly”??? Look, I am wrong by a factor of 1000(!!!)…

Let’s see how it all goes and how much well-deserved derision I will get for it. After all, this gaffe is already comparable to the typo of “hummer” instead of “hammer” that I made on NTDEV around 12 years ago (although it is very obviously not as funny as the above-mentioned one, at least for someone who happens to live in the US). What makes things much worse for me is that this one was not a typo but the result of a “calculation”…

Anton Bassov

@Martin_Burnicki said:
The corresponding functions for user space applications are

GetSystemTimeAsFileTime
GetSystemTimePreciseAsFileTime

Martin
Yes, I was indeed talking about the legacy function, thanks. I’ve read through the documentation for KeQuerySystemTimePrecise and, correct me if I’m wrong, this does not actually change the interrupt frequency of the timer; it only returns a value. Are there any functions that could set the resolution to something even finer than 1000 (or 500) microseconds, say, 250 µs?
@SweetLow said:
It’s impossible on most HALs.
P.S. Scheduling at a 100000 Hz rate isn’t a smart idea for a general-purpose OS on a general-purpose CPU. It literally kills any power savings.
I’m curious as to how Daniel Terhell achieved the custom timer resolution. Are you saying a custom HAL would be required to do this? Also, I understand your concerns about power efficiency. However, I’m not worried about power efficiency or throughput.

As I mentioned in the original post, I agree 1 microsecond is way too short. I’m merely interested in testing something lower than 500 µs. Thanks to everyone who replied.

I’m curious as to how Daniel Terhell achieved the custom timer resolution

In that thread, he clearly said he hacked it in. So, my guess would be… at least partially… by using the debugger.

Mr Terhell definitely knows his way around.

Peter

Mr Terhell definitely knows his way around.
The first question: on what hardware and OS version? I know of at least 4 hardware timer sources, and one of them is limited to 8192 Hz…

by using the debugger
Yes, it is not a big problem to change the timer rate [on some hardware], but yes, you have to tell the rest of the system that the timer rate has changed.

I’ve been looking through the MS docs and found these two pages:
https://docs.microsoft.com/en-us/windows-hardware/drivers/kernel/high-resolution-timers
https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/content/wdm/nf-wdm-exsettimer

Perhaps ExSetTimer would work since it uses 100 ns time units?
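
Something like this sketch is what I had in mind, based on the high-resolution-timers page (routine names are just for illustration; a driver allocates the timer with EX_TIMER_HIGH_RESOLUTION and passes the period in 100 ns units):

    #include <wdm.h>

    static EXT_CALLBACK HighResTimerCallback;

    static VOID HighResTimerCallback(PEX_TIMER Timer, PVOID Context)
    {
        UNREFERENCED_PARAMETER(Timer);
        UNREFERENCED_PARAMETER(Context);
        /* Runs at DISPATCH_LEVEL on every expiration. */
    }

    NTSTATUS StartHighResTimer(PEX_TIMER *OutTimer)
    {
        EXT_SET_PARAMETERS params;
        PEX_TIMER timer;

        /* EX_TIMER_HIGH_RESOLUTION requests expiration accuracy close
           to the maximum resolution the system clock supports. */
        timer = ExAllocateTimer(HighResTimerCallback, NULL,
                                EX_TIMER_HIGH_RESOLUTION);
        if (timer == NULL) {
            return STATUS_INSUFFICIENT_RESOURCES;
        }

        ExInitializeSetTimerParameters(&params);

        /* First expiration in 5 ms (negative = relative), then every
           5000 * 100 ns = 500 us. The timer still cannot expire more
           than once per system clock tick. */
        ExSetTimer(timer, -50000LL, 5000LL, &params);

        *OutTimer = timer;
        return STATUS_SUCCESS;
    }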

@Calypto said:
Perhaps ExSetTimer would work since it uses 100 ns time units?
Look at the whole picture:

ExSetTimer
A periodic timer can expire no more than once per system clock tick. Setting the period of a timer to a value smaller than the interval between system clock ticks does not cause the timer to expire more than once per system clock tick, but might cause the intervals between successive expirations to vary if the system clock rate changes.

ExSetTimerResolution
The minimum value is approximately 10,000 (1 millisecond) but can vary slightly by platform.
:wink:
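
A quick sketch of what that means in practice (the routine name is just for illustration; the return value is the resolution actually granted, in 100 ns units):

    #include <wdm.h>

    VOID TryQuarterMsResolution(VOID)
    {
        ULONG granted;

        /* Ask for 2500 * 100 ns = 250 us; the return value is the
           resolution actually in effect, in 100 ns units. */
        granted = ExSetTimerResolution(2500, TRUE);

        DbgPrint("granted = %lu (expect ~10000, i.e. 1 ms)\n", granted);

        /* Each TRUE call must eventually be matched by a FALSE call,
           which releases the request (DesiredTime is then ignored). */
        ExSetTimerResolution(0, FALSE);
    }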

@SweetLow said:
ExSetTimerResolution
The minimum value is approximately 10,000 (1 millisecond) but can vary slightly by platform.
:wink:
In that case I’m defeated. Perhaps I can get in touch with Mr. Terhell.