Max DMA Fragments

I am trying to have my driver detect when it is running low on Transmit Buffer Descriptors (TBDs) for sending Net Buffers. Is there a maximum number of fragments that NdisMAllocateNetBufferSGList can create?

For every fragment that contains data, I store that fragment's data in exactly ONE TBD.

I need a way to know when I am running low on these TBDs.

No, there is no such limit you can specify.

I fail to understand why the driver needs to be informed when it is short of BDs.
A miniport driver should do the accounting itself. It (at least in my drivers)
knows exactly how many are left at any moment while the respective locks are
held.
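
For illustration, here is a minimal sketch of that kind of accounting; the ADAPTER layout and every name in it (TxLock, TbdFree, the reserve/release helpers) are hypothetical, not from any real driver:

```c
#include <ndis.h>

/* Hypothetical per-adapter context with a free-TBD counter guarded by
 * a spin lock, as Calvin describes. */
typedef struct _ADAPTER {
    NDIS_SPIN_LOCK TxLock;
    ULONG          TbdFree;   /* descriptors currently available */
} ADAPTER;

/* Send path: try to reserve Count TBDs; FALSE means we are short. */
BOOLEAN ReserveTbds(ADAPTER *Adapter, ULONG Count)
{
    BOOLEAN ok = FALSE;

    NdisAcquireSpinLock(&Adapter->TxLock);
    if (Adapter->TbdFree >= Count) {
        Adapter->TbdFree -= Count;
        ok = TRUE;
    }
    NdisReleaseSpinLock(&Adapter->TxLock);
    return ok;
}

/* TX-completion DPC: give retired descriptors back. */
VOID ReleaseTbds(ADAPTER *Adapter, ULONG Count)
{
    NdisAcquireSpinLock(&Adapter->TxLock);
    Adapter->TbdFree += Count;
    NdisReleaseSpinLock(&Adapter->TxLock);
}
```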

If you are running short of BDs, you could either:

  1. push the NB onto an internal SW send queue, bail out, then pick up
     whatever is left over when the TX-completion DPC fires and BDs become
     available (see the queue sketch after this list), OR
  2. coalesce all fragments into one preallocated contiguous buffer, assuming
     you have at least one BD left.
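
A sketch of option 1 under the same hypothetical names as above; SendOneNb stands in for whatever actually programs the hardware:

```c
#include <ndis.h>

/* Internal software send queue for NBs parked while BDs are scarce. */
typedef struct _TX_QUEUE {
    LIST_ENTRY     Head;
    NDIS_SPIN_LOCK Lock;
} TX_QUEUE;

BOOLEAN SendOneNb(LIST_ENTRY *NbLink);   /* hypothetical: FALSE = no BDs */

/* Send path: hardware ring is short, so park the NET_BUFFER. */
VOID QueueNb(TX_QUEUE *Q, LIST_ENTRY *NbLink)
{
    NdisAcquireSpinLock(&Q->Lock);
    InsertTailList(&Q->Head, NbLink);
    NdisReleaseSpinLock(&Q->Lock);
}

/* TX-completion DPC: BDs just came back, pick up what is left over. */
VOID DrainQueue(TX_QUEUE *Q)
{
    for (;;) {
        LIST_ENTRY *link;

        NdisAcquireSpinLock(&Q->Lock);
        if (IsListEmpty(&Q->Head)) {
            NdisReleaseSpinLock(&Q->Lock);
            return;
        }
        link = RemoveHeadList(&Q->Head);
        NdisReleaseSpinLock(&Q->Lock);

        if (!SendOneNb(link)) {             /* still short of BDs... */
            NdisAcquireSpinLock(&Q->Lock);
            InsertHeadList(&Q->Head, link); /* ...keep FIFO order, retry later */
            NdisReleaseSpinLock(&Q->Lock);
            return;
        }
    }
}
```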

Calvin


My TBDs can only accept frames up to a certain size (nothing larger than around 1614 bytes, if I remember the spec correctly). I can easily keep track of my available TBDs, but I need a threshold: the minimum number of available TBDs below which the driver counts as running low.

I won't know how many TBDs I need until the Net Buffer is fragmented.
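
Purely as illustration, a low-water-mark check could be as simple as the sketch below. The constants are assumptions, not values from the hardware spec or NDIS documentation, and ADAPTER reuses the hypothetical layout sketched earlier:

```c
#define MAX_FRAME_BYTES    1614   /* per the hardware limit mentioned above */
#define WORST_FRAGS_PER_NB 8      /* assumed upper bound per NET_BUFFER */
#define TBD_LOW_WATER      (2 * WORST_FRAGS_PER_NB)

/* Caller holds TxLock, so TbdFree cannot change underneath us. */
BOOLEAN TbdsRunningLow(const ADAPTER *Adapter)
{
    return (BOOLEAN)(Adapter->TbdFree < TBD_LOW_WATER);
}
```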

So here is the main problem.

My send path looks like this:

  1. The driver receives an NBL from NDIS.

  2. I loop through all the NBs in the NBL and attempt to send them.

  3. NDIS maps the NB; the HAL later invokes my "MiniportProcessSGList" callback, which actually sends it.

  4. In "MiniportProcessSGList" I determine how many TBDs I will need to store the fragments passed to me: one TBD per fragment. If I don't have enough TBDs, the NB has to be queued.

This works; however, "MiniportProcessSGList" can be called well after NdisMAllocateNetBufferSGList returns, and in that window other NBs get processed and mapped first.

I need a way to know how many TBDs I will need BEFORE I map the net buffers.
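
One hedged way to bound that before mapping is to walk the NET_BUFFER's MDL chain and count the pages each MDL spans; the HAL may merge physically adjacent pages, so this is a worst-case reservation rather than the exact SGL size. (NDIS also exposes NdisQueryNetBufferPhysicalCount, which should give a similar upper bound.)

```c
#include <ndis.h>

/* Worst-case SG-element count for one NET_BUFFER: every page touched by
 * the mapped portion of each MDL may become its own fragment. */
ULONG EstimateMaxFragments(NET_BUFFER *Nb)
{
    ULONG frags  = 0;
    ULONG length = NET_BUFFER_DATA_LENGTH(Nb);
    ULONG offset = NET_BUFFER_CURRENT_MDL_OFFSET(Nb);
    MDL  *mdl    = NET_BUFFER_CURRENT_MDL(Nb);

    while (mdl != NULL && length > 0) {
        ULONG bytes = MmGetMdlByteCount(mdl) - offset;
        if (bytes > length) {
            bytes = length;
        }
        /* Count the pages this MDL's used range can span. */
        frags += ADDRESS_AND_SIZE_TO_SPAN_PAGES(
                     (PUCHAR)MmGetMdlVirtualAddress(mdl) + offset, bytes);
        length -= bytes;
        offset  = 0;          /* only the first MDL has a data offset */
        mdl     = mdl->Next;
    }
    return frags;
}
```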

This should be fine. You can still queue the packet even after its SGL has been
acquired. In fact, that is how it always worked before NDIS 6 arrived.
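
A minimal sketch of that idea: the ProcessSGList callback (registered through NdisMRegisterScatterGatherDma and fired after NdisMAllocateNetBufferSGList) parks the NET_BUFFER together with its already-built SGL when the ring is short. TX_CONTEXT, Pending, ProgramTbds, and the helpers from the earlier sketches are all hypothetical names:

```c
#include <ndis.h>

typedef struct _TX_CONTEXT {     /* passed as Context to the SGL request */
    struct _ADAPTER *Adapter;
    NET_BUFFER      *Nb;
    LIST_ENTRY       Link;       /* for the internal pending queue */
} TX_CONTEXT;

VOID ProgramTbds(struct _ADAPTER *A, NET_BUFFER *Nb,
                 PSCATTER_GATHER_LIST Sgl);  /* hypothetical HW fill */

VOID MyProcessSGList(
    PDEVICE_OBJECT DeviceObject, PVOID Reserved,
    PSCATTER_GATHER_LIST Sgl, PVOID Context)
{
    TX_CONTEXT *tx = (TX_CONTEXT *)Context;

    UNREFERENCED_PARAMETER(DeviceObject);
    UNREFERENCED_PARAMETER(Reserved);

    /* Stash the SGL so the TX-completion DPC can program the TBDs later. */
    NET_BUFFER_MINIPORT_RESERVED(tx->Nb)[0] = Sgl;

    if (!ReserveTbds(tx->Adapter, Sgl->NumberOfElements)) {
        /* Not enough TBDs right now: park the NB, SGL and all; the DPC
         * drains the queue (DrainQueue above) once descriptors return.
         * Pending is an assumed TX_QUEUE member of ADAPTER. */
        QueueNb(&tx->Adapter->Pending, &tx->Link);
        return;
    }

    ProgramTbds(tx->Adapter, tx->Nb, Sgl);
}
```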

Calvin
p.s. OTOH, I do hope they provide a way to say: give me the god damned SGLs IF
YOU HAVE them now, or leave me alone. Don't knock at my door; I will call you.
Oh well, I don't have to deal with these anymore.


Is that a bad design option, in your opinion?

2) coalesce all fragments into one preallocated contiguous buffer, assuming you have at least one BD left.

Not a bad design. You just have one more card in your pocket. In fact, it
can reduce latency; in the small-packet scenario it's a winner. I remember
that at some point in the NDIS 5.1 days, the transport sent packets with at
least 4 fragments (eth, ip, tcp, payload). That took a very significant
performance hit on small packets for hardware that doesn't support multiple
RDMA requests, and it still hurts even when the hardware does support them.
The overhead of PCI bus-master arbitration/transactions was grossly
overlooked by software guys.
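
For what it's worth, a minimal sketch of that coalescing path, assuming a per-adapter bounce buffer was preallocated (e.g. via NdisMAllocateSharedMemory) at init; ProgramOneTbd and the buffer names are hypothetical, and MAX_FRAME_BYTES is the constant from the earlier sketch:

```c
#include <ndis.h>

/* Copy the whole frame into one DMA-visible bounce buffer so it
 * consumes exactly one TBD. */
VOID ProgramOneTbd(ADAPTER *A, NDIS_PHYSICAL_ADDRESS Pa, ULONG Len);

BOOLEAN SendCoalesced(ADAPTER *Adapter, NET_BUFFER *Nb,
                      PUCHAR BounceVa, NDIS_PHYSICAL_ADDRESS BouncePa)
{
    ULONG length = NET_BUFFER_DATA_LENGTH(Nb);
    PVOID data;

    if (length > MAX_FRAME_BYTES) {
        return FALSE;                 /* does not fit the bounce buffer */
    }

    /* NdisGetDataBuffer copies into BounceVa only when the frame is
     * fragmented; if it was already contiguous, copy it ourselves. */
    data = NdisGetDataBuffer(Nb, length, BounceVa, 1, 0);
    if (data == NULL) {
        return FALSE;
    }
    if (data != BounceVa) {
        NdisMoveMemory(BounceVa, data, length);
    }

    /* One contiguous buffer -> exactly one TBD. */
    ProgramOneTbd(Adapter, BouncePa, length);
    return TRUE;
}
```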

Calvin
