All,
Thanks for your help in advance.
I am seeing some weird behavior in the throughput of our device under
certain situations.
1st scenario:
* If CPU 0’s load is not high, then we get scheduled on it and our
throughput varies dramatically.
2nd scenario:
* If CPU 0 load is high, we get scheduled on CPU 1 and our
throughput doesn’t vary that much.
So I guess my question is: is there any way to make sure we always get
scheduled on CPU 1?
This is an NDIS driver, and I understand that the receive side is bound
to CPU 0, but if I can force transmit onto CPU 1, then I think that
will solve the problem.
Any enlightenment for the community would be greatly appreciated.
Thanks again,
Michael
On 5/11/06 10:02 AM, “Smith, Michael G (Michael)” wrote:
> This is an NDIS driver, and I understand that the receive side is bound to CPU
> 0, but if I can force transmit onto CPU 1, then I think that will solve the
> problem.
Depending on the miniport driver type (i.e. serialized vs. deserialized), on
the TDI clients, protocols, and IM drivers bound over you, on any crapware
that might have you hooked, etc., you might be in the context of any thread
on any CPU. I don’t think you can really exert much control over what CPU
you’re called on in your MiniportSendPackets handler. Even if you could, it
would mean that packets that are generated on CPU0 are forced into a queue
instead of sent immediately, which could very well hurt your performance as
much as it might help. Would you really have code to check which CPU you’re
on and then queue a DPC to the other one if the test came up wrong? Seems
like a perf killer to me, but I’ll admit that I’ve never tried/tested it.
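[Editor’s note: for what it’s worth, here is a rough, untested kernel-mode sketch of the check-and-requeue idea Steve describes, using the standard WDM calls KeSetTargetProcessorDpc and KeInsertQueueDpc. The routine names and the packet queue the DPC would drain are hypothetical, not from this thread.]

```c
/*
 * Untested sketch only.  Queue a pre-initialized DPC to CPU 1 when the
 * send path is entered on some other CPU.  SendDpcRoutine and the packet
 * queue it would drain are hypothetical names.
 */
#include <ntddk.h>

static KDPC SendDpc;    /* set up once, e.g. during miniport initialization */

static VOID SendDpcRoutine(PKDPC Dpc, PVOID Context, PVOID Arg1, PVOID Arg2)
{
    /* Runs on CPU 1: drain the packets that were queued below. */
}

VOID InitSendDpc(PVOID Context)
{
    KeInitializeDpc(&SendDpc, SendDpcRoutine, Context);
    KeSetTargetProcessorDpc(&SendDpc, 1);    /* target the DPC at CPU 1 */
}

BOOLEAN SendOrRequeue(VOID)
{
    if (KeGetCurrentProcessorNumber() == 1)
        return FALSE;                        /* already on CPU 1: send inline */

    /* Wrong CPU: park the packets on a driver queue, then fire the DPC.
     * KeInsertQueueDpc returns FALSE if the DPC is already queued. */
    return KeInsertQueueDpc(&SendDpc, NULL, NULL);
}
```

As Steve notes, the extra queueing and cross-CPU DPC traffic may cost as much as it saves.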
-Steve
“Steve Dispensa” wrote in message news:xxxxx@ntdev…
> On 5/11/06 10:02 AM, “Smith, Michael G (Michael)” wrote:
>> This is an NDIS driver, and I understand that the receive side is bound to CPU
>> 0, but if I can force transmit onto CPU 1, then I think that will solve the
>> problem.
>
> Depending on the miniport driver type (i.e. serialized vs. deserialized), on
> the TDI clients, protocols, and IM drivers bound over you, on any crapware
> that might have you hooked, etc., you might be in the context of any thread
> on any CPU. I don’t think you can really exert much control over what CPU
> you’re called on in your MiniportSendPackets handler. Even if you could, it
> would mean that packets that are generated on CPU0 are forced into a queue
> instead of sent immediately, which could very well hurt your performance as
> much as it might help. Would you really have code to check which CPU you’re
> on and then queue a DPC to the other one if the test came up wrong?
Using NdisGetCurrentProcessorCpuUsage or NdisGetCurrentProcessorCounts,
this seems possible, but it’s not clear how to schedule a DPC on the desired CPU.
Will NDIS queue timer DPCs to the least loaded CPU?
NdisMQueueDpc lets you specify the CPU, but it exists only in NDIS 6.
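[Editor’s note: an untested NDIS 6-only sketch of what using NdisMQueueDpc might look like. The bitmask interpretation of TargetProcessors and the use of MessageId 0 are assumptions here, not confirmed by this thread.]

```c
/*
 * NDIS 6-only, untested sketch: ask NDIS to run the miniport's interrupt
 * DPC on CPU 1.  NdisInterruptHandle comes from NdisMRegisterInterruptEx;
 * MessageId 0 assumes a non-MSI or single-message interrupt.
 */
#include <ndis.h>

VOID QueueDpcOnCpu1(NDIS_HANDLE NdisInterruptHandle, PVOID DpcContext)
{
    ULONG TargetProcessors = 1UL << 1;   /* bitmask selecting CPU 1 */

    NdisMQueueDpc(NdisInterruptHandle,
                  0,                     /* MessageId */
                  TargetProcessors,
                  DpcContext);
}
```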
–PA
On 5/12/06 12:38 PM, “Pavel A.” wrote:
> Using NdisGetCurrentProcessorCpuUsage or NdisGetCurrentProcessorCounts,
> this seems possible, but it’s not clear how to schedule a DPC on the desired CPU.
> Will NDIS queue timer DPCs to the least loaded CPU?
> NdisMQueueDpc lets you specify the CPU, but it exists only in NDIS 6.
It’s not documented (to my knowledge) and wouldn’t pass WHQL.
-Steve