> ‘never do what the samples do’.
That’s not really the moral here… xframeii is a “real world” driver that supports a gazillion features. One of those features is that its hardware can (in some cases) send multiple packets per TX descriptor, which results in the complicated queuing you see in the driver. We’re showing it all to you, warts and all. It’s your opportunity to see what one of the “big guys” does. (It even has an OS abstraction layer, although the kit only includes the Windows bindings…). Of course, for simple drivers, it’s way overkill, which is why I recommend netvmini as a first step in learning. (Another one of netvmini’s advantages: it doesn’t require two US$5000 network adapters before you can start playing with it.)
> when do these packets go out?
Well, in my example, it’s your driver’s job to program a DMA request into your hardware. Usually, you’ll just write some memory addresses into the hardware-visible portion of a TX descriptor, and then poke some register that kicks the hardware into packet-sending action. When the hardware has completed sending the packet (or actually, has completed copying the packet out of main memory and into its own little TX buffer), it will trigger an interrupt. Your MiniportInterrupt (or MiniportMessageInterrupt) handler sees the interrupt, and (using some hardware-specific mechanism) divines that the transmit has completed. Since your hardware probably transmits one NB per TX descriptor, you have to do a little bit of bookkeeping to decide whether that NB was the last NB on the NBL. If so, you complete the NBL back to the OS.
> what if there’s no RX traffic at all (no interrupts), and no ‘TX done’ interrupt either?
Well if your hardware really doesn’t have any interrupts for “TX done”, then you get to poll. (Hopefully your hardware at *least* writes transmit status into a register somewhere that you can poll, or you really need to go have a chat with your silicon people!)
> This use case is not all that contrived. A ‘send only’ app (stock quotes) comes to mind.
Right – any driver design that depends on RX traffic to keep the TX path moving along is broken. (RX is not a clock source for TX.) I’ve debugged cases where something like this happens by accident, and customers are *not happy* with what appear to be unreproducible and random failures.
By the way, all the discussion here applies only to pure NDIS miniports with physical hardware. IM drivers and NDIS-WDM miniports will do slightly different things at the bottom edge of their send path (e.g., they won’t call NdisMAllocateNetBufferSGList, and they won’t have a MiniportInterrupt handler).
-----Original Message-----
From: xxxxx@lists.osr.com [mailto:xxxxx@lists.osr.com] On Behalf Of xxxxx@live.com
Sent: Sunday, October 31, 2010 10:56 PM
To: Windows System Software Devs Interest List
Subject: RE:[ntdev] Query about deserialized NDIS miniports
Wow. Thanks for taking the time, Jeffrey.
Hmm… so ‘never do what the samples do’.
kidding! could not resist that 
On a more serious note:
PumpQueuedPackets():
    atomically acquire TX descriptor and pop next NB
    if no TX descriptor or no NB:
        return from PumpQueuedPackets
what now…?
when do these packets go out? what if there’s no RX traffic at all (no interrupts), and no ‘TX done’ interrupt either?
This use case is not all that contrived. A ‘send only’ app (stock quotes) comes to mind.
NTDEV is sponsored by OSR
For our schedule of WDF, WDM, debugging and other seminars visit:
http://www.osr.com/seminars
To unsubscribe, visit the List Server section of OSR Online at http://www.osronline.com/page.cfm?name=ListServer