Overhead of starting and stopping a timer

I have a WDF driver that submits requests to the hardware (actually Xen
in this case) and I want a timer to fire after a few seconds to detect
if the request is taking too long. The request should complete in a few
ms and there could be anywhere from 20-100 requests per second.

What is the overhead if I start the timer every time a request is
submitted (i.e. pushing out the timer’s due time), and stop the timer when
the hardware queue becomes empty? If the overhead is negligible then that’s
fine, but if it isn’t I will stick with making a timer fire every second
and just detecting when the timer has fired a few times in a row without
the hardware completing any outstanding requests. The ‘few seconds’
figure doesn’t have to be precise - an error of +/- 1 second is fine, as
it’s well past the time we’d expect the request to have completed anyway.
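
To make that fallback concrete, here is a minimal sketch of the “periodic tick” watchdog described above, assuming KMDF; the context layout, the field names, and the three-tick threshold are invented for illustration and are not taken from the driver in question:

#include <ntddk.h>
#include <wdf.h>

#define STALL_TICK_LIMIT 3   /* ~3 seconds with a 1-second tick */

/* Hypothetical device context; only the fields this sketch needs. */
typedef struct _DEVICE_CONTEXT {
    WDFTIMER WatchdogTimer;
    LONG     OutstandingRequests;    /* submits minus completions */
    LONG     CompletionsSinceTick;   /* bumped by the completion path */
    LONG     StalledTicks;
} DEVICE_CONTEXT, *PDEVICE_CONTEXT;

WDF_DECLARE_CONTEXT_TYPE_WITH_NAME(DEVICE_CONTEXT, GetDeviceContext)

/* Submission / completion paths just maintain the counters. */
VOID OnSubmit(PDEVICE_CONTEXT Ctx)
{
    InterlockedIncrement(&Ctx->OutstandingRequests);
}

VOID OnComplete(PDEVICE_CONTEXT Ctx)
{
    InterlockedIncrement(&Ctx->CompletionsSinceTick);
    InterlockedDecrement(&Ctx->OutstandingRequests);
}

/* Periodic EvtTimerFunc, fired once a second (runs at DISPATCH_LEVEL). */
VOID EvtWatchdogTick(WDFTIMER Timer)
{
    PDEVICE_CONTEXT ctx = GetDeviceContext(WdfTimerGetParentObject(Timer));

    if (ctx->OutstandingRequests == 0 ||
        InterlockedExchange(&ctx->CompletionsSinceTick, 0) != 0) {
        ctx->StalledTicks = 0;   /* idle, or progress was made this second */
        return;
    }

    if (++ctx->StalledTicks >= STALL_TICK_LIMIT) {
        ctx->StalledTicks = 0;
        /* Requests outstanding for ~3 seconds with no completions:
           treat them as timed out and recover here. */
    }
}

/* One-time setup: create the periodic timer once and leave it running. */
NTSTATUS CreateWatchdog(WDFDEVICE Device)
{
    WDF_TIMER_CONFIG cfg;
    WDF_OBJECT_ATTRIBUTES attr;
    PDEVICE_CONTEXT ctx = GetDeviceContext(Device);
    NTSTATUS status;

    WDF_TIMER_CONFIG_INIT_PERIODIC(&cfg, EvtWatchdogTick, 1000 /* ms */);
    WDF_OBJECT_ATTRIBUTES_INIT(&attr);
    attr.ParentObject = Device;

    status = WdfTimerCreate(&cfg, &attr, &ctx->WatchdogTimer);
    if (NT_SUCCESS(status)) {
        WdfTimerStart(ctx->WatchdogTimer, WDF_REL_TIMEOUT_IN_SEC(1));
    }
    return status;
}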

Thanks

James

Check the steps (note that the KDPC must also be initialized with
KeInitializeDpc before step 5; a consolidated sketch follows the list):

  1. KTIMER RwTimer;
  2. LARGE_INTEGER CancelDpcInterval;
  3. KDPC RwDpc;
  4. KeInitializeTimer(&PtrDeviceExtension->RwTimer);
  5. KeSetTimer(&PtrDeviceExtension->RwTimer, CancelDpcInterval, &PtrDeviceExtension->RwDpc);
  6. Call KeCancelTimer(&PtrDeviceExtension->RwTimer) when the operation completes.
    Once the completion routine has run you no longer need the timer, so call KeCancelTimer().
    Likewise, call KeCancelTimer() if the queue becomes empty.
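
For reference, the steps above might be fleshed out roughly as follows. This is only a sketch: the device-extension layout, the DPC routine name (RwTimeoutDpc), and the 3-second timeout are assumptions, not code from the original poster.

#include <ntddk.h>

typedef struct _DEVICE_EXTENSION {
    KTIMER        RwTimer;
    KDPC          RwDpc;
    LARGE_INTEGER CancelDpcInterval;   /* relative due time for KeSetTimer */
} DEVICE_EXTENSION, *PDEVICE_EXTENSION;

/* Runs at DISPATCH_LEVEL if the timer expires before it is cancelled. */
KDEFERRED_ROUTINE RwTimeoutDpc;

VOID RwTimeoutDpc(PKDPC Dpc, PVOID DeferredContext, PVOID SystemArgument1, PVOID SystemArgument2)
{
    PDEVICE_EXTENSION devExt = (PDEVICE_EXTENSION)DeferredContext;

    UNREFERENCED_PARAMETER(Dpc);
    UNREFERENCED_PARAMETER(SystemArgument1);
    UNREFERENCED_PARAMETER(SystemArgument2);

    /* ... the request tracked in devExt has taken too long; recover here ... */
}

/* Steps 1-4 plus the DPC initialization: one-time setup. */
VOID SetupRwTimer(PDEVICE_EXTENSION DevExt)
{
    KeInitializeTimer(&DevExt->RwTimer);
    KeInitializeDpc(&DevExt->RwDpc, RwTimeoutDpc, DevExt);
    /* Relative due times are negative, in 100-nanosecond units: 3 seconds here. */
    DevExt->CancelDpcInterval.QuadPart = -3LL * 10 * 1000 * 1000;
}

/* Step 5: (re)arm the timer when a request is submitted. */
VOID ArmRwTimer(PDEVICE_EXTENSION DevExt)
{
    KeSetTimer(&DevExt->RwTimer, DevExt->CancelDpcInterval, &DevExt->RwDpc);
}

/* Step 6: cancel once the operation completes or the queue drains. */
VOID DisarmRwTimer(PDEVICE_EXTENSION DevExt)
{
    KeCancelTimer(&DevExt->RwTimer);
}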

> Check the steps:
> [steps snipped]

Yes, that much is understood. What is the overhead of steps 5 and 6? Is it a cheap or an expensive operation when done 20-100 times per second?

James

What device driver? More detail is needed about the problem.

Did you consider a periodic timer, where you start it once and never stop it till the system does an orderly shutdown?

Gary G. Little
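
For what it’s worth, a periodic kernel timer of that kind can be armed with a single KeSetTimerEx call and left running. A minimal sketch, reusing the hypothetical DEVICE_EXTENSION and RwTimeoutDpc from the sketch earlier in the thread, with an arbitrary one-second period:

/* Arm once, e.g. at device start; the timer then self-rearms every 1000 ms
 * until KeCancelTimer is called at orderly shutdown. */
VOID StartPeriodicWatchdog(PDEVICE_EXTENSION DevExt)
{
    LARGE_INTEGER firstDue;
    firstDue.QuadPart = -1LL * 10 * 1000 * 1000;   /* first fire after 1 second */

    KeInitializeTimer(&DevExt->RwTimer);
    KeInitializeDpc(&DevExt->RwDpc, RwTimeoutDpc, DevExt);
    KeSetTimerEx(&DevExt->RwTimer, firstDue, 1000 /* ms period */, &DevExt->RwDpc);
}

VOID StopPeriodicWatchdog(PDEVICE_EXTENSION DevExt)
{
    KeCancelTimer(&DevExt->RwTimer);   /* at orderly shutdown */
}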

To me, it looks like KeSetTimer has nearly the same overhead as KeSetEvent or similar calls.

Probably less: KeSetTimer is unlikely to take the dispatcher spinlock (or one of the several spinlocks used in Win7 and later).


Maxim S. Shatskih
Windows DDK MVP
xxxxx@storagecraft.com
http://www.storagecraft.com

Hmmm… I wonder what sort of answer you’re looking for? “A lot” or “a little” (relative to what)… “1654us” (on what processor)… “too much” or “not worth it” (again, relative to what)?

What you describe above is the traditional way of timing out requests. It’s easy to implement, understand, and maintain… and drivers have been doing it this way for decades. I don’t see any significant disadvantage to doing it this way (a SLIGHT delay in notification of the timeout, certainly).

Thus, in my opinion, any additional overhead at all would be “not worth it”…

Peter
OSR

But make sure you don’t get it wrong. Case in point: storport.sys may time out the SRBs VERY prematurely if the Srb->Timeout is set to 1 second. Been burned by that.

> Hmmm… I wonder what sort of answer you’re looking for? “A lot” or “a little”
> (relative to what)… “1654us” (on what processor)… “too much” or “not worth
> it” (again, relative to what)?

Well… If I call WdfTimerStart(timeout = 3 seconds) 100 times a second, will
the time taken in processing the call consume a measurable amount of time?
Does the call do anything other than acquire a lock, modify a list, then
release the lock? I guess it does sound rather vague :)
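
For concreteness, the pattern being asked about (restart the timer on every submit, stop it when the hardware queue drains) might look like this under KMDF; the context type and names are hypothetical, and the timer is assumed to have been created earlier with WdfTimerCreate:

#include <ntddk.h>
#include <wdf.h>

/* Hypothetical context; only the fields used by this sketch are shown. */
typedef struct _TIMEOUT_CONTEXT {
    WDFTIMER RequestTimer;        /* created once with WdfTimerCreate */
    LONG     OutstandingRequests;
} TIMEOUT_CONTEXT, *PTIMEOUT_CONTEXT;

/* On every submit: push the due time out to 3 seconds from now. */
VOID OnRequestSubmitted(PTIMEOUT_CONTEXT Ctx)
{
    InterlockedIncrement(&Ctx->OutstandingRequests);
    WdfTimerStart(Ctx->RequestTimer, WDF_REL_TIMEOUT_IN_SEC(3));
}

/* On every completion: stop the timer once the hardware queue is empty. */
VOID OnRequestCompleted(PTIMEOUT_CONTEXT Ctx)
{
    if (InterlockedDecrement(&Ctx->OutstandingRequests) == 0) {
        WdfTimerStop(Ctx->RequestTimer, FALSE);   /* FALSE: don't wait for a running callback */
    }
}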

> What you describe above is the traditional way of timing out requests. It’s
> easy to implement, understand, and maintain… and drivers have been doing it
> this way for decades. I don’t see any significant disadvantage to doing it
> this way (a SLIGHT delay in notification of the timeout, certainly).
>
> Thus, in my opinion, any additional overhead at all would be “not worth it”…

Noted. As it turns out, there is now a requirement for a heartbeat
signal anyway, so that each end (frontend driver in Windows and backend
driver in Xen) knows that the other is alive. This reduces the problem
complexity considerably, as I only need to set the timer twice per
heartbeat interval, which is measured in seconds :)
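
As a rough illustration only of how such a heartbeat watchdog might look (the interval, the names, and the use of a periodic KMDF timer are all assumptions; nothing here is from the actual frontend/backend protocol):

#include <ntddk.h>
#include <wdf.h>

#define HEARTBEAT_INTERVAL_SEC 10   /* invented figure */

static volatile ULONGLONG LastHeartbeatTime;   /* 100-ns interrupt-time units */

/* Called from wherever heartbeat messages from the other end arrive. */
VOID OnHeartbeatReceived(VOID)
{
    LastHeartbeatTime = KeQueryInterruptTime();
}

/* EvtTimerFunc of a periodic timer firing twice per heartbeat interval,
 * i.e. every (HEARTBEAT_INTERVAL_SEC / 2) seconds. */
VOID EvtHeartbeatCheck(WDFTIMER Timer)
{
    ULONGLONG now = KeQueryInterruptTime();
    ULONGLONG limit = (ULONGLONG)2 * HEARTBEAT_INTERVAL_SEC * 10 * 1000 * 1000;

    UNREFERENCED_PARAMETER(Timer);

    if (now - LastHeartbeatTime > limit) {
        /* No heartbeat for two full intervals: assume the other end has
         * died and fail/abort any outstanding requests here. */
    }
}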

Thanks!

James