RE: TDI client driver Vs. user mode sockets application

That’s what should happen with Winsock Direct, but in this case it is the
user-mode LSP that pre-posts directly to the NIC and bypasses most of the
kernel layers.

I haven’t heard of an enhancement for AFD to pre-post buffers, but I’ve
been out of the loop on such things for a while. Certainly, overlapped I/O
receive buffers get pre-posted, and from Win2K onwards you don’t have to
remember to set the receive window to zero: the stack will do direct
placement into the pre-posted buffers if they are present, eliminating the
memcpy() from the window buffer.


At 20:50 05/12/2002 +0300, Maxim S. Shatskih wrote:

Am I wrong that, in some scenarios, AFD will pre-post its buffers to
TCPIP, and BSD recv() is just memcpy() from AFD’s buffers to app’s one?


----- Original Message -----
From: Mark S. Edwards
>To: NT Developers Interest List
>Sent: Thursday, December 05, 2002 10:40 AM
>Subject: [ntdev] RE: TDI client driver Vs. user mode sockets application
>The APIs are going to change quite dramatically in the near future, when
>the only way to run at 10Gb/s will be with an offload engine. There’s a lot
>of work going on in this arena, especially in the IETF, at Microsoft and at
>other major vendors.
>Microsoft’s Winsock Direct is a start, but it introduces inefficiencies by
>mapping traditional Winsock over InfiniBand-like RDMA semantics. For
>maximum efficiency, you’d want to use SDP directly.
>To use an offload engine or some of the proposed RDMA functionality with
>greatest efficiency requires the ability to pre-post large numbers of
>buffers and have asynchronous notification. Winsock, with its overlapped
>I/O and asynchronous completion methods, is probably quite adaptable to the
>required semantics. But the old BSD sockets API is hopeless for it. Linux
>and the Unixen are going to have their work cut out making some very major
>changes to their networking stacks in the very near future.
>That’s not to say that you couldn’t continue to use a BSD sockets mapping
>over an offload engine, just that it wouldn’t be very efficient, or it would
>require a card with more memory than the vendors are looking to use (i.e.
>more expensive). Many vendors in this space are looking to produce cards
>with effectively zero memory that do direct data placement into the
>pre-posted buffers. In this scenario, the BSD sockets API’s inability to
>pre-post buffers would have an interesting effect on the TCP window and
>other things.
>At 15:37 04/12/2002 -0800, Bi Chen wrote:
>>The BSD socket API is inefficient on a Windows server that must handle a
>>large number of concurrent TCP connections or UDP requests because it
>>lacks non-blocking or overlapped capabilities. One can use select() to
>>relieve the problem somewhat, but not much. In any case, there will be a
>>heck of a lot of thread context switching and a lot of threads in use.
>>Using overlapped Winsock along with an I/O completion port and/or the
>>OS-provided (Windows) thread pool saves all of that boatload of overhead.
>>On Linux, it is a bit of a different story. However, the Linux community,
>>and especially Oracle, realize that the blocking BSD socket and the lack
>>of asynchronous I/O are a huge drag in their quest to beat the server
>>performance of Windows. They are adding those in the 2.5 kernel.
>>-----Original Message-----
>>From: Maxim S. Shatskih
>>Sent: Wednesday, December 04, 2002 2:53 PM
>>To: NT Developers Interest List
>>Subject: [ntdev] RE: TDI client driver Vs. user mode sockets application
>>In what respect exactly is the BSD socket model inefficient?
>> Max
>You are currently subscribed to ntdev as:
>To unsubscribe send a blank email to %%email.unsub%%