This is a really interesting concept, though I don’t remember all of the
details. ** I have often bumped into this kind of thing, perhaps because I
once did some mathematical modelling in this particular area, back when
adding memory to a network controller was very expensive; it is still
applicable to a variety of performance-related analyses. **
First things first: as Tim suggested, “where is the beef?” Did the OP (or
any associates) actually measure and find that there is a performance
problem right around the queue(s)? That is a very, very important first
step in any performance-oriented thinking (setting aside some of the easy
wins: use Interlocked operations instead of spinlocks, etc.).
Second, what Max proposed is one way to try (that is, to see whether
socket-level options can be of any use in pushing the performance).
Of course, there are others too. And for that, telescoping is important,
to narrow down the areas where performance could be a problem.
From the queuing perspective - when something is not known, try to base it
on a probabilistic model. So in this particular case, if the average
arrival rate is greater (even by a slight margin) than the average
departure rate, the long-run probability is that the machine will be
flooded with packets; no amount of priority shifting is going to help the
overall system.
On the other hand, if the average arrival rate is less (even by a slight
margin) than the average departure rate, the long-run probability is that
the queue will be empty. **But network traffic, and a lot of other things,
follow a Poisson distribution, and network traffic has an extra dimension -
burstiness.** Under this particular trait, network queues see size
differences, and the queue lengths also follow certain distributions. Now,
depending on the nature of the application(s), it might be necessary to
exploit exactly that priority mechanism!
Also, the ingress and egress queues could see different queue lengths.
That said, the OP’s description of the problem is a very poor (sorry)
statement of the problem and of the ways of thinking about solving it.
There are the TC API, the GQoS APIs, etc., which, in combination with LSP
providers in user space and the packet scheduler in kernel space, might do
a fantastic job without bogging down into (home-grown) kernel-mode packet
queuing.
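As for the “5 tuples” in the subject line: the flow key the OP wants to
recover in kernel mode is just protocol plus source/destination address and
port. A minimal sketch of such a key (the struct and its names are
illustrative only, not any NDIS, WFP, or TC API type):

```c
#include <stdbool.h>
#include <stdint.h>

/* The classic TCP/IP 5-tuple identifying a flow. */
struct flow_key {
    uint32_t src_ip;     /* IPv4 source address, network byte order */
    uint32_t dst_ip;     /* IPv4 destination address                */
    uint16_t src_port;
    uint16_t dst_port;
    uint8_t  protocol;   /* 6 = TCP, 17 = UDP                       */
};

/* Compare field by field; memcmp() would also compare the struct's
 * padding bytes, which is only safe if both keys are zero-filled.  */
static bool flow_equal(const struct flow_key *a, const struct flow_key *b)
{
    return a->src_ip == b->src_ip && a->dst_ip == b->dst_ip &&
           a->src_port == b->src_port && a->dst_port == b->dst_port &&
           a->protocol == b->protocol;
}
```

A priority queue per application then reduces to mapping each outgoing
packet’s 5-tuple back to the owning process and picking a queue by that.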
-pro
----- Original Message -----
From: “Ray Trent”
Newsgroups: ntdev
To: “Windows System Software Devs Interest List”
Sent: Friday, December 07, 2007 1:51 PM
Subject: Re:[ntdev] 5 Tuples
> It’s a bit unclear what a “6 sigma average” would be, but “6 sigma”
> usually means one of two things:
>
> 1) A widely misunderstood, misleading, and inaccurate description of
> processes that have defect rates of < 3.4ppm (which is actually ±4.5
> sigmas on normally distributed random samples).
>
> 2) (Less than) Six times the standard deviation of a random data set (away
> from the mean). By Chebyshev’s inequality, 97% (1 - 1/(6^2)) of the
> samples will always be closer than 6 sigmas from the mean of any
> (non-degenerate) random distribution.
>
> In this case, it’s an overly fancy (but slightly amusing) way to say that
> either >99.99966% or 97% of the time the network packet queue length will
> be 1.
>
> I seriously doubt that, though, as I would expect that most of the time
> the network packet queue length would be 0 :-).
>
> Martin O’Brien wrote:
>> Tim:
>>
>> What, in a nutshell, is a ‘six sigma average?’
>>
>> Thanks,
>>
>> mm
>>
>> Tim Roberts wrote:
>>> A P wrote:
>>>> Hi all,
>>>>
>>>> I plan to implement a network accelerator, by which all network packets
>>>> going out from a particular application (user selected) would gain
>>>> priority over normal application packets. It involves packet queuing in
>>>> a driver.
>>>>
>>>> I have a design in mind, and I have the driver architecture, but I
>>>> want to know, is there a way of getting the process ID and 5 tuple
>>>> info from an APP in kernel (NDIS) without involving an LSP? I don’t
>>>> want to use an LSP as it has so many cons.
>>>
>>> Do you have any performance research that suggests this will do any good
>>> at all? I would have guessed that the six sigma average network packet
>>> queue length was 1.
>>>
>>
>
> --
> Ray
> (If you want to reply to me off list, please remove “spamblock.” from my
> email address)
>
>
> ---
> NTDEV is sponsored by OSR
>
> For our schedule of WDF, WDM, debugging and other seminars visit:
> http://www.osr.com/seminars
>
> To unsubscribe, visit the List Server section of OSR Online at
> http://www.osronline.com/page.cfm?name=ListServer