5 Tuples

Hi all,

I plan to implement a network accelerator, by which all network packets going
out from a particular application (user selected) would gain priority over
normal application packets. It involves packet queuing in a driver.

I have a design in mind, and I have the driver architecture, but I want to
know: is there a way of getting the process ID and 5-tuple info for an app
in the kernel (NDIS) without involving an LSP? I don’t want to use an LSP, as
it has so many cons.

The product’s launch priority is Vista, but backward compatibility (if
possible) should be there too.

Regards,

AP

A P wrote:

Hi all,

I plan to implement a network accelerator, by which all network packets
going out from a particular application (user selected) would gain
priority over normal application packets. It involves packet queuing in
a driver.

I have a design in mind, and I have the driver architecture, but I
want to know: is there a way of getting the process ID and 5-tuple
info for an app in the kernel (NDIS) without involving an LSP? I don’t
want to use an LSP, as it has so many cons.

Do you have any performance research that suggests this will do any good
at all? I would have guessed that the six sigma average network packet
queue length was 1.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.

And these packets will gain priority at the next 5 routers’ outbound queues
how?

is there a way of getting the process ID and 5-tuple info for an app in the
kernel (NDIS) without involving an LSP?

On Vista, that would be the Windows Filtering Platform. On downlevel, that
would be a TDI filter.
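For the WFP route, here is a minimal sketch, assuming a callout registered at FWPM_LAYER_ALE_AUTH_CONNECT_V4 (the IPv4 connect layer); it only illustrates where the 5-tuple and process ID show up in a Vista classifyFn0, with callout registration, the IPv6/listen layers, and error handling omitted:

```c
#include <ntddk.h>
#include <fwpsk.h>   /* kernel-mode WFP interface, Vista and later */

/* Sketch of a classifyFn0 at FWPM_LAYER_ALE_AUTH_CONNECT_V4: the 5-tuple
 * arrives in the fixed values (host byte order at the ALE layers) and the
 * requesting process ID in the metadata block. */
VOID NTAPI ConnectClassify(
    const FWPS_INCOMING_VALUES0          *inFixedValues,
    const FWPS_INCOMING_METADATA_VALUES0 *inMetaValues,
    VOID                                 *layerData,
    const FWPS_FILTER0                   *filter,
    UINT64                                flowContext,
    FWPS_CLASSIFY_OUT0                   *classifyOut)
{
    const FWPS_INCOMING_VALUE0 *v = inFixedValues->incomingValue;
    UINT32 localAddr  = v[FWPS_FIELD_ALE_AUTH_CONNECT_V4_IP_LOCAL_ADDRESS].value.uint32;
    UINT32 remoteAddr = v[FWPS_FIELD_ALE_AUTH_CONNECT_V4_IP_REMOTE_ADDRESS].value.uint32;
    UINT16 localPort  = v[FWPS_FIELD_ALE_AUTH_CONNECT_V4_IP_LOCAL_PORT].value.uint16;
    UINT16 remotePort = v[FWPS_FIELD_ALE_AUTH_CONNECT_V4_IP_REMOTE_PORT].value.uint16;
    UINT8  protocol   = v[FWPS_FIELD_ALE_AUTH_CONNECT_V4_IP_PROTOCOL].value.uint8;
    UINT64 processId  = 0;

    UNREFERENCED_PARAMETER(layerData);
    UNREFERENCED_PARAMETER(filter);
    UNREFERENCED_PARAMETER(flowContext);

    if (FWPS_IS_METADATA_FIELD_PRESENT(inMetaValues, FWPS_METADATA_FIELD_PROCESS_ID)) {
        processId = inMetaValues->processId;
    }

    /* A real callout would record (processId, 5-tuple) for the queuing
     * driver here; this sketch only traces it. */
    DbgPrint("pid %I64u proto %u local 0x%08X:%u remote 0x%08X:%u\n",
             processId, protocol, localAddr, localPort, remoteAddr, remotePort);

    if (classifyOut->rights & FWPS_RIGHT_ACTION_WRITE) {
        classifyOut->actionType = FWP_ACTION_PERMIT;
    }
}
```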

I don’t want to use an LSP as it has so many cons.

Compared to what? A TDI filter is not exactly a walk in the park. If the
applications you wish to ‘accelerate’ (prioritize would perhaps be more
accurate) are good-citizen Winsock applications, then an LSP is probably a
much better choice. Indeed, you might just find that doing so and
interacting with the platform-supplied packet scheduler / prioritization
facilities works for you.

But what would be the fun in that? :-)

But as Tim pointed out… unless you are stuck with some seriously crappy
link speed or an incredibly busy host, just what do you expect to
‘influence’ by rearranging a few packets on the host (originating) side of
the dialog? There must be more to this problem that I, for one, do not
understand, so please don’t take this as criticism but merely as a suggestion
to analyze your requirements carefully before you launch into a whole lot of
kernel development that ends up not having the desired effect. Tim’s
comment about six sigma had me scrounging for one of those old network
analysis textbooks from when ‘high speed’ links were 19.2 Kbps and
bit-stuffing was an art form.

Good Luck,
Dave Cattley
Consulting Engineer
Systems Software Development



> On Vista, that would be the Windows Filtering Platform. On downlevel, that would be a TDI filter.

…combined with an NDIS IM driver. The only job of the TDI filter is to map per-process
settings to TCP or UDP port numbers. The scheduler itself should be the NDIS IM driver.

BTW, MS’s PSCHED is possibly the out-of-the-box solution for this; I just
don’t remember how to govern it (setsockopt() to set the QoS bits in the IP
header?).


Maxim Shatskih, Windows DDK MVP
StorageCraft Corporation
xxxxx@storagecraft.com
http://www.storagecraft.com
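To make the setsockopt() idea concrete, a minimal Winsock sketch that writes DSCP 46 (Expedited Forwarding) into the TOS byte. This is only a sketch: pre-Vista stacks reportedly ignore IP_TOS unless the DisableUserTOSSetting registry value under Tcpip\Parameters is set to 0, and on Vista the supported route is the GQoS/qWAVE APIs rather than IP_TOS. The helper name is made up for illustration.

```c
#include <winsock2.h>
#include <ws2tcpip.h>
#include <stdio.h>

#pragma comment(lib, "ws2_32.lib")

/* Hypothetical helper: request DSCP 46 (Expedited Forwarding) on a socket by
 * writing the TOS byte.  Whether the stack honors this depends on the Windows
 * version and (pre-Vista) on the DisableUserTOSSetting registry value; on
 * Vista the supported route is GQoS/qWAVE instead. */
static int request_ef_marking(SOCKET s)
{
    int tos = 46 << 2;   /* DSCP sits in the top 6 bits of the TOS byte */
    return setsockopt(s, IPPROTO_IP, IP_TOS,
                      (const char *)&tos, sizeof(tos));
}

int main(void)
{
    WSADATA wsa;
    SOCKET s;

    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
        return 1;

    s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    if (s != INVALID_SOCKET) {
        if (request_ef_marking(s) != 0)
            printf("setsockopt(IP_TOS) failed: %d\n", WSAGetLastError());
        closesocket(s);
    }
    WSACleanup();
    return 0;
}
```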

Tim:

What, in a nutshell, is a ‘six sigma average?’

Thanks,

mm

Tim Roberts wrote:


Do you have any performance research that suggests this will do any good
at all? I would have guessed that the six sigma average network packet
queue length was 1.

Not answering for Tim, of course, as he will likely give you the real answer,
but basically it is a term taken (typically) from manufacturing to describe
the range of variation (or tolerance) of a specified value observed in the
quality metrics of the manufacturing process.

In this context I took it to mean that almost 100% of the time, the
transmit queue length is *one* packet (the one you are sending) and that
there is almost no effect you could have by ‘delaying’ other packets, since
those packets are non-existent (in the average queueing model).

Statistics, queuing theory, OMG! Look out, it’s engineering!


Thanks, Dave.


Martin O’Brien wrote:

What, in a nutshell, is a ‘six sigma average?’

The Greek letter “sigma” is used to mean standard deviation. In any
normally distributed population, 68% of the results are within one
standard deviation of the average (one sigma), 95% are within two
sigmas, 99.7% within three sigmas, and so on. Now that you’ve made me think
about it, I have actually used the term incorrectly. I was trying to
say “99% of the time.” That’s what I get for showing off.

There was big hype several years ago about “six sigma quality”; the idea
is that your defect rate should be so low that your quality level is the
equivalent of six sigmas, which is 99.9999998%.


Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.
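To put numbers on those percentages, a quick check in C: for a normal distribution, the fraction within k standard deviations of the mean is erf(k/sqrt(2)), which erf() from math.h (C99) computes directly.

```c
#include <math.h>
#include <stdio.h>

/* Fraction of a normal distribution within k standard deviations of the
 * mean: P(|X - mu| < k*sigma) = erf(k / sqrt(2)).  Prints ~68%, ~95%,
 * ~99.7%, ..., ~99.9999998% for k = 1..6. */
int main(void)
{
    int k;
    for (k = 1; k <= 6; k++)
        printf("within %d sigma: %.7f%%\n", k, 100.0 * erf(k / sqrt(2.0)));
    return 0;
}
```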

Thanks, Tim.

mm

It’s a bit unclear what a “6 sigma average” would be, but “6 sigma”
usually means one of two things:

  1. A widely misunderstood, misleading, and inaccurate description of
    processes that have defect rates of < 3.4ppm (which is actually ±4.5
    sigmas on normally distributed random samples).

  2. (Less than) Six times the standard deviation of a random data set
    (away from the mean). By Chebyshev’s inequality, 97% (1 - 1/(6^2)) of
the samples will always be within 6 sigmas of the mean of any
    (non-degenerate) random distribution.

In this case, it’s an overly fancy (but slightly amusing) way to say
that either >99.99966% or 97% of the time the network packet queue
length will be 1.

I seriously doubt that, though, as I would expect that most of the time
the network packet queue length would be 0 :-).
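To see that the 97% figure is a worst-case floor rather than an estimate, a small check, assuming an exponential distribution (mean 1, sigma 1) as an arbitrary non-normal example; Chebyshev guarantees at least 1 - 1/36, and the actual coverage here is much higher:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Exponential(1): mean = 1, sigma = 1, so "within 6 sigmas of the mean"
     * is the interval (-5, 7); since X >= 0 that is just P(X < 7). */
    double actual    = 1.0 - exp(-7.0);      /* ~0.9991 */
    double chebyshev = 1.0 - 1.0 / 36.0;     /* ~0.9722, the guaranteed floor */

    printf("Chebyshev bound (any distribution): >= %.4f\n", chebyshev);
    printf("Exponential(1) actual coverage:        %.4f\n", actual);
    return 0;
}
```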



Ray
(If you want to reply to me off list, please remove “spamblock.” from my
email address)

Thanks, Ray.

mm
This is really an interesting concept, though I don’t remember some of the
details. **I often bumped into this kind of thing, perhaps because I once did
some mathematical modelling in this particular area, back when adding memory
to a network controller was very expensive, but it is still applicable in a
variety of performance-related analyses.**

First things first, as Tim suggested: “where is the beef?” Did the OP
(including any associates) measure and find that there is a performance
problem right around the queue(s)? That is a very important first step in
any performance-oriented thinking (barring some of the easy wins: use
interlocked operations instead of spinlocks, etc.).

Second, what Max proposed is one way to try (that is, to see if socket-level
options can be of any use to push the performance).

Of course, there are others too. And for that, telescoping is important, to
narrow down the areas where performance could be a problem.

From the queuing perspective: when something is not known, try to base it on
a probabilistic model. So in this particular case, if the average arrival rate
is greater (even by a slight margin) than the average departure rate, the
long-run probability is that the machine will be flooded with packets, and no
amount of priority shifting is going to help the overall system.
On the other hand, if the average arrival rate is less (even by a slight
margin) than the average departure rate, the long-run probability is that the
queue will be empty. **But network traffic, like a lot of other things,
follows a Poisson distribution, and a network has an extra dimension:
burstiness.** Under this particular trait, network queues see size differences
and also follow certain distributions. Now, depending on the nature of the
application(s), it might be necessary to exploit just that priority mechanism!

Also, ingress and egress queues could see different queue lengths.

But the OP’s description of the problem is a very poor (sorry) description of
the problem and of the ways of thinking about solving it. There are the TC
API, the GQoS APIs, etc., which in combination with LSP providers in user
space and the packet scheduler in kernel space might do a fantastic job
without bogging down into (home-grown) kernel-mode packet queuing.

-pro
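For the Vista case, one way to reach the platform-supplied scheduler from user mode is the qWAVE API (qos2.h, qwave.lib). A minimal sketch, assuming the caller already has a connected socket and its destination address; the helper name is made up for illustration:

```c
#include <winsock2.h>
#include <ws2tcpip.h>
#include <qos2.h>     /* qWAVE, Vista and later */
#include <stdio.h>

#pragma comment(lib, "ws2_32.lib")
#pragma comment(lib, "qwave.lib")

/* Hypothetical helper: ask the OS packet scheduler to treat traffic on an
 * already-connected socket 's' (destination 'dest') as audio/video priority.
 * A real caller would keep qosHandle and flowId around so it can call
 * QOSRemoveSocketFromFlow / QOSCloseHandle when the socket is done. */
static BOOL prioritize_socket(SOCKET s, const struct sockaddr *dest)
{
    QOS_VERSION version   = { 1, 0 };
    HANDLE      qosHandle = NULL;
    QOS_FLOWID  flowId    = 0;          /* must be 0 on input */

    if (!QOSCreateHandle(&version, &qosHandle)) {
        printf("QOSCreateHandle failed: %lu\n", GetLastError());
        return FALSE;
    }

    if (!QOSAddSocketToFlow(qosHandle, s, (PSOCKADDR)dest,
                            QOSTrafficTypeAudioVideo,
                            QOS_NON_ADAPTIVE_FLOW, &flowId)) {
        printf("QOSAddSocketToFlow failed: %lu\n", GetLastError());
        QOSCloseHandle(qosHandle);
        return FALSE;
    }

    /* The flow (and its DSCP/802.1p marking) now belongs to the platform
     * scheduler. */
    return TRUE;
}
```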


> I plan to implement a network accelerator, by which all network packets going out from a particular application (user selected) would gain priority over normal application packets. It involves packet queuing in a driver.

Incredibly stupid idea…

I would advise you to learn a bit of TCP/IP fundamentals. At that point you will know that any TCP send involves both sending and receiving data; you will know how the transmission is done and how the TCP window gets advertised; you will know how the transmission can get slowed down by the so-called “silly window syndrome” and why it may happen; you will know how the sender can try to avoid it, and what “result” it can achieve instead if the target app does W-W-R sequences and the remote host implements delayed acknowledgement (i.e., the “Nagle hits delayed ACK” problem); etc., etc., etc…

At that point, you will realize that making the packets sent by app X have priority over the ones sent by app Y is not going to lead you anywhere. If you want to optimize network performance, you have to do it on a per-connection basis, and all your actions have to depend on the specifics of that particular connection and on the pattern in which a given app *currently* sends/receives data (this pattern may change, and your actions have to be adjusted accordingly). Therefore, network optimization is not as easy as you seem to believe; if you just prioritize packets over one another on a per-app basis, you have every chance of degrading performance instead of improving it…

Anton Bassov
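As a concrete example of the per-connection tuning Anton is alluding to: the write-write-read / delayed-ACK stall is normally handled per socket (or by fixing the application’s send pattern), not by reordering packets in a driver. A minimal sketch; whether it helps or hurts depends entirely on that connection’s traffic:

```c
#include <winsock2.h>
#include <ws2tcpip.h>

#pragma comment(lib, "ws2_32.lib")

/* Disable the Nagle algorithm on one connection.  This is the usual blunt
 * fix for the write-write-read / delayed-ACK interaction; it can also hurt
 * by generating many tiny segments, so it is a per-connection decision. */
static int disable_nagle(SOCKET s)
{
    BOOL on = TRUE;
    return setsockopt(s, IPPROTO_TCP, TCP_NODELAY,
                      (const char *)&on, sizeof(on));
}
```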

Actually, the real-time/isochronous issues are the only ones which are
valid for packet scheduling.


Maxim Shatskih, Windows DDK MVP
StorageCraft Corporation
xxxxx@storagecraft.com
http://www.storagecraft.com


> Actually, the real-time/isochronous issues are the only ones which are valid for packet scheduling.

Well, it does not really make sense to speak about these issues in the context of TCP, does it?
They can only arise in a situation where sending packets on time is more important
than ensuring packet delivery (for example, a voice app), so these apps rely upon UDP-based protocols. However, that does not really seem to be the thing the OP speaks about; instead, it looks like he wants to write a *general-purpose* network accelerator and to prioritize app X’s packets over app Y’s upon the *user’s* choice, regardless of the apps’ specifics. This is why I said his idea is incredibly stupid: you just don’t write general-purpose network accelerators this way…

Anton Bassov

If you assume a Poisson distribution, a usable rule of thumb is that 90% of
your accesses will take up to 3 average access times, while 99% of your
accesses will take less than 5 average access times. Of course, the trick is
to compute the access time; it may not be that simple.

Alberto.
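Alberto’s rule of thumb can be made concrete if you assume an M/M/1 queue, where the response time is exponentially distributed with mean 1/(mu - lambda): the p-th percentile is then -ln(1 - p) times the mean, independent of the actual rates, which gives roughly 2.3x the average for 90% and 4.6x for 99% (close to his 3 and 5). A small sketch:

```c
#include <math.h>
#include <stdio.h>

/* For an M/M/1 queue the response time is exponentially distributed with
 * mean 1/(mu - lambda), so the p-th percentile is -ln(1 - p) times the mean
 * response time, independent of the actual arrival and service rates. */
int main(void)
{
    const double percentiles[] = { 0.90, 0.95, 0.99, 0.999 };
    int i;

    for (i = 0; i < (int)(sizeof(percentiles) / sizeof(percentiles[0])); i++) {
        double p = percentiles[i];
        printf("%5.1f%% of accesses finish within %.2f average response times\n",
               100.0 * p, -log(1.0 - p));
    }
    return 0;
}
```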


There are certain things (as you probably caught here) I skipped
intentionally, since I don’t know how useful they are to A P (the original
poster). But just for some high-level clarification:

  1. The best model to take here really is Poisson, since the inter-arrival
    time distribution is exponential (which is memoryless, meaning we really
    don’t know when the next packet, or next set of packets, is going to arrive).
    This serves to get us to Markovian (meaning memoryless) queuing.
    As most of us know, there are several flavors (M/M/n, where n = 1, 2, …).
    Now the parameter *lambda* (look at the wiki for some clarification) is what
    drives the different access times. Here I’m normalizing access time to
    some unit of time, so 3 means I’m in the 3rd spot of the modelled queue.
    Essentially we are just looking to see (in this case) what the expected
    queue lengths are under different (possibly practical) parameters. When I
    modelled leaky-bucket algorithms, I took this approach under dynamic flow
    control. Now, as you suspected, it is not always a case of hard numbers
    like 3 or 5 or 7 or some such.

  2. Now, could it be applicable? Well, as we know, if someone tries to do just
    *priority queues* at the NDIS layer, that might not give a measurable
    performance boost, since TCP windows might come into play. For UDP, it might
    be a path to take into account.

  3. I think that if an application uses only TCP, then the QoS APIs are the only
    means to improve the performance, and it is flow specific. So there is hardly
    any need to come up with a kernel driver for priority queuing. At least I
    don’t see it :)

-pro
