Multiple timers vs single timer

Hi

I have this general question. I need to send about N (say 60) network packets within a minute. Which approach is better?

  1. Create N timers (KeInitializeTimer) and have each one expire, run its DPC, and KeSetTimer() again
  2. Have one timer and send all the packets (burst) on expiry

(All of the above are periodic timers: they keep re-firing once the DPC has executed after expiry.)

Protocol-wise, I am asking whether it is wise to send such a burst of ‘management’ (TTL) packets.
I also want to know, from the Windows OS side, which is the preferred approach? My general understanding is that having so many timer DPCs might not be a good thing.
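To make the question concrete, approach 2 would be roughly the following. This is just an untested sketch; SendMgmtPacket and CONNECTION_TABLE are placeholder names for whatever the stack actually uses:

```c
#include <ntddk.h>

/* Placeholder names for illustration only. */
typedef struct _CONNECTION_TABLE CONNECTION_TABLE;
ULONG ConnectionCount(CONNECTION_TABLE *Table);
VOID  SendMgmtPacket(CONNECTION_TABLE *Table, ULONG Index);

static KTIMER g_BurstTimer;
static KDPC   g_BurstDpc;

/* DPC: send the whole burst of N management packets; the periodic timer
 * then re-fires on its own. */
static VOID BurstDpcRoutine(PKDPC Dpc, PVOID Context, PVOID Arg1, PVOID Arg2)
{
    CONNECTION_TABLE *table = (CONNECTION_TABLE *)Context;
    ULONG i;

    UNREFERENCED_PARAMETER(Dpc);
    UNREFERENCED_PARAMETER(Arg1);
    UNREFERENCED_PARAMETER(Arg2);

    for (i = 0; i < ConnectionCount(table); i++) {
        SendMgmtPacket(table, i);
    }
}

VOID StartBurstTimer(CONNECTION_TABLE *Table)
{
    LARGE_INTEGER due;
    due.QuadPart = -60LL * 10 * 1000 * 1000;            /* first fire in 60 s */

    KeInitializeDpc(&g_BurstDpc, BurstDpcRoutine, Table);
    KeInitializeTimerEx(&g_BurstTimer, NotificationTimer);
    KeSetTimerEx(&g_BurstTimer, due, 60 * 1000 /* period, ms */, &g_BurstDpc);
}
```

Approach 1 would be the same DPC/KeSetTimerEx pattern repeated per connection.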

Thx.

> having so many timer DPCs might not be a good thing.

Too much time in a single DPC is also bad.


Maxim S. Shatskih
Windows DDK MVP
xxxxx@storagecraft.com
http://www.storagecraft.com

Why is it necessary to send out packets in under a minute? How is a timer
going to help? Most protocols are timing-insensitive, and therefore the
only concern is latency at the end-user site. In that case, timings are
far more interesting at the tens-of-milliseconds level, not at one minute.
What will happen if you miss your timing window? So I don’t see any
applicability of a timer in this scenario, unless it is meant to deal with
initiating a recovery sequence if the timing window is not met, in which
case you should be redesigning your protocol.
joe




IMHO, at this resolution (approx 1 packet per second) modern hardware will
show no significant difference in performance. The more practical question
is whether you want to send 1 packet per second, or 60 packets after 1
minute. Only an analysis of your protocol / problem domain can help with
this question, so we here will have to resign that to you unless you can
provide some more cogent information to help us help you.


>>
[A] "Most protocols are timing-insensitive, and therefore the
only concern is latency at the end-user site. In that case, timings are
far more interesting at the tens-of-milliseconds level, not at one minute.
What will happen if you miss your timing window? "
<<

OK, my (industry-ratified) protocol spec is below.

For every virtual connection I make to the switch, I need to send a TTL pkt every 100 sec.
If the switch doesn’t see at least 1 TTL pkt from me within 300 (= 3 * 100) secs, it will implicitly close this virtual connection on its side.

My stack can create at most 100 such virtual connections.

So for each virtual connection I am creating a periodic timer that expires every 100 sec.
So there are 100 timers whose expirations fall anywhere in [1, 100] secs (inclusive), each sending its pkt out.

[Aa] Hence my question about single vs. multiple timers ‘w.r.t. Windows design considerations’ (i.e., the protocol spec can’t be changed now, or would take a very long time to change, etc.).
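For reference, what I have today per virtual connection is essentially this (simplified, untested sketch; VC and SendTtlPacket are placeholder names):

```c
#include <ntddk.h>

/* Placeholder per-connection object; each VC owns its own timer and DPC. */
typedef struct _VC {
    KTIMER Timer;
    KDPC   Dpc;
    /* ... protocol state ... */
} VC;

VOID SendTtlPacket(VC *Vc);                       /* placeholder send routine */

static VOID VcTtlDpc(PKDPC Dpc, PVOID Context, PVOID Arg1, PVOID Arg2)
{
    UNREFERENCED_PARAMETER(Dpc);
    UNREFERENCED_PARAMETER(Arg1);
    UNREFERENCED_PARAMETER(Arg2);

    SendTtlPacket((VC *)Context);                 /* periodic timer re-fires */
}

VOID StartVcTtlTimer(VC *Vc)                      /* called once per VC */
{
    LARGE_INTEGER due;
    due.QuadPart = -100LL * 10 * 1000 * 1000;     /* first fire in 100 s */

    KeInitializeDpc(&Vc->Dpc, VcTtlDpc, Vc);
    KeInitializeTimerEx(&Vc->Timer, NotificationTimer);
    KeSetTimerEx(&Vc->Timer, due, 100 * 1000 /* period, ms */, &Vc->Dpc);
}
```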

>>
[B] “Only an analysis of your protocol / problem domain can help with
this question, so we here will have to resign that to you unless you can
provide some more cogent information to help us help you”
<<

OK, I thought the answer to this question might eventually be this, i.e., analyze my typical traffic and see whether such a mgmt-pkt burst affects it in a negative way.

Also, the spec verbatim makes a generic statement: “to avoid bursts of mgmt-pkt traffic”, add a random delay to the TTL pkt and send.

Hence I wanted to know about single vs. multiple timers ‘w.r.t. Windows design considerations’.

Anyway, it looks like I need to send the pkt bursts and investigate according to [B] above.
But in the meantime I wanted to know about [Aa] above, so that I can remove this OS/implementation parameter from my problem space and am left only with [B].

Thanks.

>>

[A] "Most protocols are timing-insensitive, and therefore the
only concern is latency at the end-user site. In that case, timings are
far more interesting at the tens-of-milliseconds level, not at one minute.
What will happen if you miss your timing window? "
<<

OK, my (industry-ratified) protocol spec is below.

For every virtual connection I make to the switch, I need to send a TTL pkt
every 100 sec.
If the switch doesn’t see at least 1 TTL pkt from me within 300 (= 3 * 100)
secs, it will implicitly close this virtual connection on its side.

My stack can create at most 100 such virtual connections.

So for each virtual connection I am creating a periodic timer that expires
every 100 sec.

I seriously doubt that you are doing this. What you are specifying is a
timer interval which tells the scheduler to mark your thread as
schedulable, or call your DPC, at its convenience as soon as possible
after the timer expires, said expiration time being rounded up to the next
15 ms timer tick. So if 100 sec is not an integer multiple of 15 ms, you
will get the notification somewhat later than you asked for (note that if
you choose 1000 ms, under ideal conditions you get a notification at the
operating system’s convenience after 1005 ms). You have no control
over this latency.

My guess is that 100 timers would have a serious impact on overall
performance, so I would be inclined to use a single timer and a
time-stamped queue of desired notification times kept in sorted order.
When the timer ticks, you look to see if the current time is >= the
first element in the queue; if so, you remove that element and dispatch it
to its handler. Since your window is 300 sec, the roundoff and latency
should not be a problem. Note that if you insert events in monotonically
increasing timestamp order, then insertion times are constant.
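Roughly what I have in mind, using the standard doubly-linked list helpers; the locking, allocation, and SendTtlPacket routine are placeholders, and this sketch is untested:

```c
#include <ntddk.h>

/* Placeholder for the per-connection send path. */
VOID SendTtlPacket(PVOID Connection);

typedef struct _KEEPALIVE_ENTRY {
    LIST_ENTRY    Link;
    LARGE_INTEGER Due;          /* absolute time this connection's TTL is due */
    PVOID         Connection;   /* the per-connection object                  */
} KEEPALIVE_ENTRY;

/* Called from the single timer's DPC (or a worker), e.g. once a second.
 * The queue is kept sorted by Due; the caller holds whatever lock guards it. */
VOID ServiceKeepaliveQueue(LIST_ENTRY *Queue)
{
    LARGE_INTEGER now;
    KeQuerySystemTime(&now);

    while (!IsListEmpty(Queue)) {
        KEEPALIVE_ENTRY *e =
            CONTAINING_RECORD(Queue->Flink, KEEPALIVE_ENTRY, Link);

        if (e->Due.QuadPart > now.QuadPart)
            break;                    /* head not due yet, so nothing else is */

        RemoveHeadList(Queue);
        SendTtlPacket(e->Connection);

        /* Re-arm 100 s out.  Because every entry uses the same period,
         * appending at the tail keeps the queue sorted (the constant-time
         * "monotonically increasing timestamps" insertion noted above). */
        e->Due.QuadPart = now.QuadPart + 100LL * 10 * 1000 * 1000;
        InsertTailList(Queue, &e->Link);
    }
}
```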

When protocols are time-sensitive, they usually have large intervals like
this (5 min) to indicate something is not responding properly. Protocols
which are not time-sensitive in this regard only work when there are no
hardware, software, or connectivity failures; for example, TCP/IP is
implemented in terms of a timeout model. But “time-sensitive” protocols
demand hard real-time responses within very small timing windows.

As far as I can tell from this revised description, you do not need to
send 60 packets a minute; what you have to send is one keep-alive packet
at an interval not to exceed 300 sec for each of 100 connections, so I’m a
bit curious as to why you reinterpreted the spec to something completely
different in the original posting. Your requirement is simply to send one
keep-alive packet every five minutes, so you have to ask the question you
want answered.

As I indicated, I would probably use a single time-stamped queue.

So there are 100 timers whose expirations fall anywhere in [1, 100] secs
(inclusive), each sending its pkt out.

[Aa] Hence my question about single vs. multiple timers ‘w.r.t. Windows
design considerations’ (i.e., the protocol spec can’t be changed now, or
would take a very long time to change, etc.).

>>
[B] “Only an analysis of your protocol / problem domain can help with
this question, so we here will have to resign that to you unless you can
provide some more cogent information to help us help you”
<<

OK, I thought the answer to this question might eventually be this, i.e.,
analyze my typical traffic and see whether such a mgmt-pkt burst affects it
in a negative way.

Also, the spec verbatim makes a generic statement: “to avoid bursts of
mgmt-pkt traffic”, add a random delay to the TTL pkt and send.

Hence I wanted to know about single vs. multiple timers ‘w.r.t. Windows
design considerations’.

Anyway, it looks like I need to send the pkt bursts and investigate
according to [B] above.
But in the meantime I wanted to know about [Aa] above, so that I can remove
this OS/implementation parameter from my problem space and am left only
with [B].

So if you spread these out uniformly over the interval, there should be no
burstiness issues. For example, if you issue one packet per second, which
is one possible implementation, there should be virtually no measurable
impact on overall performance for the transmission. But my experience in
implementing a variety of real-time systems is that the overhead of
multiple timers is higher than the cost of a single timer, and the
anti-burst logic is easier to manage with a single queue; otherwise, the
statistical distribution can result in bursty network traffic, with
concomitant delays due to network gluts as you hit “perfect storm”
situations. So I think it is going to be easier to maintain a reasonably
uniform distribution of the keep-alive packets globally across all
connections.
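For instance, the one-packet-per-second spread can be as simple as a round-robin cursor advanced by a single 1-second tick (sketch only; CONNECTION and SendTtlPacket are placeholder names):

```c
/* One-per-second round robin: with 100 connections and a 1 s tick, each
 * connection gets its keep-alive every 100 s.  Placeholder names throughout. */
typedef struct _CONNECTION CONNECTION;
VOID SendTtlPacket(CONNECTION *Connection);

VOID RoundRobinTick(CONNECTION **Connections, ULONG Count, ULONG *Cursor)
{
    if (Count == 0)
        return;

    SendTtlPacket(Connections[*Cursor % Count]);
    *Cursor = (*Cursor + 1) % Count;
}
```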

Thanks.



> I have this general question. I need to send about N (say 60) network packets within a minute.

Which approach is better?

  1. Create N timers (KeInitializeTimer) and have each one expire, run its DPC, and KeSetTimer() again
  2. Have one timer and send all the packets (burst) on expiry (All of the above are periodic
    timers: they keep re-firing once the DPC has executed after expiry)

Actually, what I am unable to understand is why you need timer DPCs here, in the first place…

Given the above description, the most obvious approach seems to be a kernel thread of reasonably high priority (i.e. somewhere in the RT priority range, although not necessarily high) that sleeps most of the time
and wakes up once every M seconds to send N packets and then goes back to sleep. The values of M and N may vary, but, judging from your protocol description, sending packets in a burst is perfectly acceptable. I really see no reason why you need to do things at DPC level…
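Something along these lines, i.e., just an untested sketch with SendMgmtBurst as a placeholder for the real send path and no shutdown handling:

```c
#include <ntddk.h>

/* Placeholder for whatever actually sends the N packets. */
VOID SendMgmtBurst(PVOID Context);

/* Thread that sleeps M seconds, sends the burst, and goes back to sleep. */
static VOID KeepaliveThread(PVOID Context)
{
    LARGE_INTEGER interval;
    interval.QuadPart = -100LL * 10 * 1000 * 1000;   /* M = 100 s, relative */

    for (;;) {
        KeDelayExecutionThread(KernelMode, FALSE, &interval);
        SendMgmtBurst(Context);
    }
}

NTSTATUS StartKeepaliveThread(PVOID Context)
{
    HANDLE   thread;
    NTSTATUS status;

    status = PsCreateSystemThread(&thread, THREAD_ALL_ACCESS, NULL,
                                  NULL, NULL, KeepaliveThread, Context);
    if (NT_SUCCESS(status)) {
        /* A real driver would keep a referenced thread object for shutdown
         * and could raise its priority here; omitted in this sketch. */
        ZwClose(thread);
    }
    return status;
}
```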

Anton Bassov

> For every virtual connection I make to the switch, I need to send a TTL pkt every 100 sec.

Can you use TCP and setsockopt() to switch on sending of keepalives?
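In user mode that is just SO_KEEPALIVE / SIO_KEEPALIVE_VALS; shown below purely as an illustration (the timing values are examples, and a kernel-mode stack would have to configure the equivalent through its own transport interface):

```c
#include <winsock2.h>
#include <mstcpip.h>    /* struct tcp_keepalive, SIO_KEEPALIVE_VALS */
#pragma comment(lib, "ws2_32.lib")

/* Turn on TCP keep-alives for socket s: first probe after 100 s of idle time,
 * then probe every second until answered or the connection is dropped. */
BOOL EnableKeepalive(SOCKET s)
{
    struct tcp_keepalive ka;
    DWORD bytes = 0;

    ka.onoff             = 1;
    ka.keepalivetime     = 100 * 1000;  /* ms of idle time before first probe */
    ka.keepaliveinterval = 1000;        /* ms between unanswered probes       */

    return WSAIoctl(s, SIO_KEEPALIVE_VALS, &ka, sizeof(ka),
                    NULL, 0, &bytes, NULL, NULL) == 0;
}
```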


Maxim S. Shatskih
Windows DDK MVP
xxxxx@storagecraft.com
http://www.storagecraft.com

Okay, so your problem is really not very hard. The frequency with which
you need to send ‘TTL’ packets is 100 seconds and the maximum latency of a
TTL packet for a connection is 300 seconds. You should have no problem
coding within those parameters for several thousand connections on modern
hardware and any recent version of Windows (I’m assuming a TTL packet is
small; say 64 bytes).

You basically have three options:

  1. you can create a timer for each connection, and rely on the system to
    call you when it gets finished with your other timers and other stuff.
    Because you have 200 seconds to get the packet to the switch, assuming
    the network latency is reasonable, you don’t really care that this may be
    inefficient as long as it works
  2. you can create a single timer that has a resolution of say 1 second
    (small enough with respect to 100 seconds so that TTL packets will be sent
    at the ‘right’ time). In the timer proc, check a per-connection last send
    time and decide if a TTL should be sent on this loop. This is where it is
    easy to add the random offset part, as either each loop has a random number
    that it adds to (subtracts from) the elapsed time, or, after the TTL is sent,
    a random number is added to (subtracted from) the last send time (a rough
    sketch of this follows below)
  3. you create a single timer that has a resolution of 100 seconds and when
    it expires, send a TTL on all connections with a random delay between each
    connection.

All of these options work. #1 imposes the most system load and #3 the least,
but the patterns for when packets are sent vary wildly. Option #2 is
probably a good compromise between compatibility with systems that don’t
exactly follow the spec, and overall system load, but I can’t really assess
that well.

The key to reducing the load in option #2 is that a single timer
routine checks all connections. But this level of work is likely in the
realm of the incidental unless your system is performance critical, so I
would use whatever is easiest for you to understand.
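A rough sketch of option #2 as described above; the CONNECTION fields, SendTtlPacket, and the pre-assigned per-connection jitter are placeholders, and none of this has been compiled or tested:

```c
#include <ntddk.h>

/* Placeholder per-connection state and send routine. */
typedef struct _CONNECTION {
    LARGE_INTEGER LastTtlSent;    /* system time of the last TTL we sent      */
    LONGLONG      JitterSeconds;  /* random 0..9 s offset picked at creation  */
} CONNECTION;

VOID SendTtlPacket(CONNECTION *Connection);

/* Runs once a second from the single timer (DPC or worker).  A connection
 * is due when 100 s plus its own jitter have elapsed since the last send;
 * the jitter is what spreads the connections out and avoids a burst. */
VOID OneSecondTick(CONNECTION *Connections, ULONG Count)
{
    LARGE_INTEGER now;
    ULONG i;

    KeQuerySystemTime(&now);

    for (i = 0; i < Count; i++) {
        LONGLONG elapsed = now.QuadPart - Connections[i].LastTtlSent.QuadPart;
        LONGLONG due     = (100 + Connections[i].JitterSeconds)
                               * 10LL * 1000 * 1000;   /* seconds -> 100 ns */

        if (elapsed >= due) {
            SendTtlPacket(&Connections[i]);
            Connections[i].LastTtlSent = now;
        }
    }
}
```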


Yes, but why use timer DPCs here instead of doing it in the context of a dedicated thread??? Do you see any reason for this??? To be honest, I don’t. BTW, if you do it in the context of a thread, the practical differences between the three above-mentioned options become just negligible…

Anton Bassov

Nitpicking mode: 100 seconds is the period. The frequency is 0.01 Hz (cycles/second);
frequency is 1 over T.

>> #1 imposes the most system load,

> the overhead of multiple timers is higher than the cost of a single timer

Yes, I already have 1), and that answers my question.

>> #3 the least,

Hence I wanted to do 3), because of its simplicity: “no need to take the spinlock and loop through the connection linked list, etc.” Though this spinlock is not the one in the speed path, I still want to reduce spinlock acquires (and the number of timer fires) as much as possible.

>>
The key to reducing the load in option #2 is that a single timer
routine checks all connections. But this level of work is likely in the
realm of the incidental unless your system is performance critical,
<<
Yes, if 3) doesn’t work out well, I will just go for 2); basically I “wanted to either reduce the number of spinlock acquires or timer fires, if not both”.

I will investigate all of the below as well:
…the most obvious approach seems to be a kernel thread of reasonably high priority (i.e. somewhere in the RT priority range, although not necessarily high)…
…Yes, but why use timer DPCs here instead of doing it in the context of a dedicated thread?…
…Can you use TCP and setsockopt() to switch on sending of keepalives?…

>>
Your requirement is simply to send one keep-alive packet every five minutes, so you have to ask the question you want answered.
<<
Yes, I initially stated the question with such quanta to be generic, since I am not real-time and nothing faults if the resolution is only on the order of 15 MILLIseconds.

But depending on the hops, it is better not to treat the limit as 5 mins for the ‘average’ case, i.e., better to send it well before 300 secs if not absolutely/exactly by 100 secs. I guess since my quanta are on the order of seconds, this is moot.