>>>
[A] "Most protocols are timing-insensitive, and therefore the
only concern is latency at the end-user site. In that case, timings are
far more interesting at the tens-of-milliseconds level, not at one minute.
What will happen if you miss your timing window? "
<<
O.k., my (industry-ratified) protocol spec is below.
For every virtual connection I make to the switch, I need to send a TTL pkt
every 100 sec.
If the switch doesn’t see at least 1 TTL pkt from me within 300 (= 3*100)
secs, it will implicitly close this virtual connection on its side.
My stack can create at most 100 such virtual connections.
So for each virtual connection I am creating a periodic timer that expires
every 100 sec.
I seriously doubt that you are doing this. What you are specifying is a
timer interval which tells the scheduler to mark your thread as
schedulable, or call your DPC, at its convenience as soon as possible
after the timer expires, said expiration time being rounded up to the next
15 ms timer tick. So if 100 sec is not an integer multiple of 15 ms, you
get the next tick after that (note that if you choose 1000 ms, under ideal
conditions you get a notification at the operating system’s convenience
after 1005 ms). You have no control over this latency.
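To make the quantization concrete, here is a minimal user-mode sketch of
one such timer (the API choice and details are just illustrative; a
kernel-mode KTIMER/DPC is quantized to the timer tick in the same way):

    #include <windows.h>

    int main(void)
    {
        /* Auto-reset waitable timer: first fires ~100 sec from now,
           then every 100 sec.  Each expiration is rounded up to the
           next system timer tick, and the wait is satisfied at the
           scheduler's convenience some time after that. */
        HANDLE timer = CreateWaitableTimer(NULL, FALSE, NULL);
        if (timer == NULL)
            return 1;

        LARGE_INTEGER due;
        due.QuadPart = -100LL * 10 * 1000 * 1000;  /* relative, 100-ns units */

        if (!SetWaitableTimer(timer, &due, 100 * 1000 /* period, ms */,
                              NULL, NULL, FALSE))
            return 1;

        for (;;) {
            WaitForSingleObject(timer, INFINITE);
            /* send one TTL pkt here */
        }
    }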
My guess is that 100 timers would have a serious impact on overall
performance, so I would be inclined to use a single timer and a
time-stamped queue of desired notification times kept in sorted order.
When the timer ticks, you look to see if the current time is >= the
timestamp of the first element in the queue; if so, you remove that
element and dispatch it to its handler. Since your window is 300 sec, the
roundoff and latency should not be a problem. Note that if you insert
events in monotonically increasing timestamps, then insertion times are
constant.
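A minimal sketch of that queue, assuming a coarse single timer tick and a
hypothetical SendKeepAlive() transmit routine (the names, the array
representation, and the sizes are illustrative, not from the spec):

    #include <windows.h>
    #include <string.h>

    #define NUM_CONN 100

    typedef struct {
        int       conn;  /* connection index */
        ULONGLONG due;   /* next keep-alive time, ms (GetTickCount64 domain) */
    } ENTRY;

    static ENTRY queue[NUM_CONN];  /* sorted by 'due'; earliest at index 0 */
    static int   count;

    void SendKeepAlive(int conn);  /* placeholder for your TTL-pkt transmit */

    /* Called on each tick of the single timer: dispatch every entry whose
       due time has arrived, then re-queue it one interval later.  Because
       "now + interval" is the largest timestamp seen so far, re-insertion
       is an append at the tail, i.e. constant time. */
    void OnTick(ULONGLONG intervalMs)
    {
        ULONGLONG now = GetTickCount64();
        while (count > 0 && queue[0].due <= now) {
            ENTRY e = queue[0];
            SendKeepAlive(e.conn);
            memmove(&queue[0], &queue[1], (count - 1) * sizeof(ENTRY));
            e.due = now + intervalMs;
            queue[count - 1] = e;  /* tail append keeps the queue sorted */
        }
    }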
When protocols are time-sensitive, they usually have large intervals like
this (5 min) to indicate something is not responding properly. Protocols
which are not time-sensitive in this regard only work when there are no
hardware, software, or connectivity failures; for example, TCP/IP is
implemented in terms of a timeout model. But “time-sensitive” protocols
demand hard real-time responses within very small timing windows.
As far as I can tell from this revised description, you do not need to
send 60 packets a minute; what you have to send is one keep-alive packet
at an interval not to exceed 300 sec for each of 100 connections, so I’m a
bit curious as to why you reinterpreted the spec as something completely
different in the original posting. Your requirement is simply to send one
keep-alive packet every five minutes per connection, so ask the question
you actually want answered.
As I indicated, I would probably use a single time-stamped queue.
So there are 100 timers that expire anywhere in the inclusive range
[1, 100] secs and send that pkt out.
[Aa] Hence my question about single vs. multiple timers ‘w.r.t. Windows
design considerations’ (i.e., the protocol spec can’t be changed now, or
will take a long, long time to change, etc.).
>>
[B] “Only an analysis of your protocol / problem domain can help with
this question, so we here will have to resign that to you unless you can
provide some more cogent information to help us help you”
<<
O.k., I thought the answer to this question might eventually be this,
i.e., analyze my typical traffic and see if such a mgmt-pkt burst affects
it in a negative way.
Also, the spec verbatim makes a generic statement: “to avoid bursts of
mgmt-pkt traffic”, add a random delay to the TTL-pkt and send.
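For illustration, one reading of that sentence, continuing the queue
sketch above (the 0..5-sec jitter bound is my assumption; the spec does
not give one):

    #include <windows.h>
    #include <stdlib.h>

    /* Next due time = nominal 100-sec interval plus a random 0..5-sec
       delay, so per-connection send times drift apart over time while
       worst-case spacing stays far inside the 300-sec window. */
    ULONGLONG NextDue(ULONGLONG nowMs)
    {
        ULONGLONG jitterMs = (ULONGLONG)(rand() % 5001);
        return nowMs + 100ULL * 1000 + jitterMs;
    }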
Hence I wanted to know about single vs. multiple timers ‘w.r.t. Windows
design considerations’.
Anyway, it looks like I need to send the pkt-bursts and investigate
according to [B] above.
But in the meantime I wanted to know about [Aa] above, so that I remove
this OS/implementation parameter from my problem space and am left only
with [B].
So if you spread these out uniformly over the interval, there should be no
burstiness issues. For example, if you issue one packet per second, which
is one possible implementation, there should be virtually no measurable
impact on overall performance for the transmission. But my experience in
implementing a variety of real-time systems is that the overhead of
multiple timers is higher than the cost of a single timer, and the
implementation of antiburst overload is easier to manage with a single
queue; otherwise, the statistical distribution can result in bursty
network traffic, with concomitant delays due to network gluts as you hit
“perfect storm” situations. So I think it is going to be easier to
maintain a reasonably uniform distribution of the keep-alive packets
globally across all connections.
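Continuing the earlier sketch, one way to get that uniform spread is to
stagger the initial due times (the 1-sec spacing simply follows from 100
connections over a 100-sec interval):

    /* Seed the queue so the 100 connections first fire at 1, 2, ..., 100
       sec; with a 100-sec re-queue interval the steady state is one
       keep-alive packet per second, with no bursts. */
    void InitQueue(void)
    {
        ULONGLONG now = GetTickCount64();
        for (int i = 0; i < NUM_CONN; i++) {
            queue[i].conn = i;
            queue[i].due  = now + (ULONGLONG)(i + 1) * 1000;
        }
        count = NUM_CONN;
    }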
Thanks.