Hello,
I started a previous thread about compressing the user buffer in an IRP from the higher driver before passing it on to the TCP driver. It was recommended that I instead create and use a substitute MDL. I did that and it worked! Thanks. However, I still have a secondary problem: performance.
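For context, here is a minimal pseudocode-style sketch of the substitute-MDL approach (the variable names and completion-context bookkeeping are hypothetical; it assumes the compressed data lives in a NonPagedPool buffer):

```c
/* Pseudocode sketch, not a compilable driver fragment. Assumes
 * `compressedBuf` is a NonPagedPool allocation holding the
 * compressed payload of `compressedLen` bytes. */
PMDL newMdl = IoAllocateMdl(compressedBuf, compressedLen,
                            FALSE,   /* not a secondary buffer */
                            FALSE,   /* do not charge quota */
                            NULL);   /* do not attach to the IRP */
if (newMdl == NULL) {
    /* fail the IRP, e.g. STATUS_INSUFFICIENT_RESOURCES */
}
MmBuildMdlForNonPagedPool(newMdl);

/* Remember the original MDL so it can be restored (and the
 * substitute MDL and buffer freed) in the completion routine,
 * then swap in the substitute before passing the IRP down. */
context->OriginalMdl = Irp->MdlAddress;
Irp->MdlAddress = newMdl;
```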
On the same machine the performance is great. But if I connect from a remote client, the compressed packets are sent across the LAN extremely slowly; it takes roughly 17 to 20 times longer to transmit the same number of packets. It appears that the TCP driver is timing out before sending each packet, I'm guessing. In actuality it may be the higher driver that is apparently not issuing any further asynchronous TDI_SENDs until the previous one completes. That is probably the real problem. If the higher driver would send further packets, some TCP buffer would presumably fill up and get flushed. TDIMon shows the 4K buffer sends, but occurring at a very slow rate.
Again, on the local machine it is fast. The higher driver continues to issue multiple TDI_SEND IOCTLs without waiting for previous ones to complete, which is the expected behavior. It is only with a remote client that it waits.
The strange part is that if I instead do the compression in the application and make the driver just a pass-through, the performance issue goes away and the TCP traffic is fast. I can see in my driver that the packets arrive compressed and look just like they do when I instead let my driver compress and substitute the MDL.
Anyone have any idea as to what would cause this behavior?