Spin Lock

That’s one way to do it, but that’s not what the article you quoted does. That article has the kernel driver allocate the memory, and then map that memory into the user-mode process. Doing it that way is required if you need physically-contiguous memory for DMA, for example.

Yet another way is to have the application send down a METHOD_IN_DIRECT ioctl very early on, with the desired buffer as the second buffer in the ioctl, and then have the driver keep that ioctl pending for a long time. That way, the I/O system handles the mapping of the memory, and keeps it mapped as long as the ioctl is pending.
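The pending-ioctl mechanism described above can be sketched roughly as below. This is a hedged sketch, not a complete driver: `IOCTL_SHARE_BUFFER` is an invented control code, `g_PendingIoctlQueue` is an assumed cancel-safe queue set up elsewhere at driver init, and all error handling beyond the basics is elided. The key point is that because the ioctl is METHOD_IN_DIRECT, the I/O manager builds the MDL and keeps the user buffer mapped for as long as the IRP stays pending.

```c
/* Sketch of the dispatch side of the "long-pending IRP" approach.
 * IOCTL_SHARE_BUFFER and g_PendingIoctlQueue are hypothetical names. */
#include <ntddk.h>

/* Hypothetical control code: METHOD_IN_DIRECT makes the I/O manager
 * build an MDL for the caller's second buffer and keep it mapped
 * while the request is pending. */
#define IOCTL_SHARE_BUFFER \
    CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_IN_DIRECT, FILE_ANY_ACCESS)

extern IO_CSQ g_PendingIoctlQueue;   /* cancel-safe queue, set up at init */

NTSTATUS DispatchDeviceControl(PDEVICE_OBJECT DeviceObject, PIRP Irp)
{
    PIO_STACK_LOCATION sp = IoGetCurrentIrpStackLocation(Irp);
    UNREFERENCED_PARAMETER(DeviceObject);

    if (sp->Parameters.DeviceIoControl.IoControlCode == IOCTL_SHARE_BUFFER) {
        /* Irp->MdlAddress now describes the app's buffer; it stays
         * valid and mapped for as long as this IRP stays pending. */
        IoMarkIrpPending(Irp);
        IoCsqInsertIrp(&g_PendingIoctlQueue, Irp, NULL);
        return STATUS_PENDING;
    }

    Irp->IoStatus.Status = STATUS_INVALID_DEVICE_REQUEST;
    IoCompleteRequest(Irp, IO_NO_INCREMENT);
    return STATUS_INVALID_DEVICE_REQUEST;
}
```

Using a cancel-safe queue (`IoCsq*`) rather than a bare list matters here, because an IRP held pending for a long time must be cancellable when the application exits.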

The very long pending IRP is a much better design choice. The content of whatever memory buffer you share is entirely up to you.

But consider that whatever mechanism you invent, it is unlikely to be more efficient or effective than standard ReadFile / WriteFile or DeviceIoControl calls. The shared memory design has advantages only in very specific use cases, and you should be sure that yours is one of them before you go to the significant effort of implementing a scheme like this.

If your application can tolerate lost, duplicated, and corrupted data, then shared memory is easy to implement. If not, then you will end up re-implementing the standard calls.

Thanks

Can I send the chain of MDLs (from a cloned chain of NBLs) to the user program via Irp->MdlAddress?
Can I complete the IRP after replacing the MDL at Irp->MdlAddress (received from the user via the I/O manager) with newly cloned MDLs extracted from the send path of my NDIS filter driver? Do I need to lock the cloned MDLs before returning them to the user? Thank you very much.

The short answer is no you can’t do this and don’t want to try.

The long answer is much longer.

With respect, I suggest you take some training and / or do more research. The questions that you are asking indicate a lack of fundamental knowledge in some areas, and it is not a good idea to attempt advanced / non-standard work without a solid understanding of the basics.

Thank you. I will copy the entire payload from the NBLs into the one MDL from the IRP and return it to the user. If this is not a good approach, please let me know.

As I already said, I allocate a 16000-byte buffer in the user program, fill it, and call DeviceIoControl with direct I/O. The system converts it to an MDL and passes it to the NDIS filter driver. The driver, in its dispatch routine, converts it to NBLs and sends them to the network. At the same time, I queue that IRP and return STATUS_PENDING. Later, when NDIS calls my FilterSendNetBufferLists, I take that IRP from the queue, fill the MDL from the IRP, and return it to the user with the new network payload via IoCompleteRequest. It works well for a while (last time, two days). But sometimes it turns out that the total length of all NBs across all NBLs is greater than 16000. I could increase the user buffer, but that is not a good approach. What can you advise me?
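The completion side of the flow described above might look roughly like the following sketch. It is not the poster's actual code: `DequeuePendingIrp` is an invented helper around the driver's IRP queue, and error handling is reduced to the minimum. It does illustrate the length problem being asked about: the copy must be bounded by the MDL's byte count, and a too-small buffer has to be handled somehow.

```c
/* Sketch: in FilterSendNetBufferLists, copy each NET_BUFFER's payload
 * into the buffer described by a pending IRP's MDL, then complete the IRP.
 * DequeuePendingIrp is a hypothetical helper around the driver's queue. */
#include <ndis.h>

extern PIRP DequeuePendingIrp(void);

VOID CompletePendingIrpWithPayload(PNET_BUFFER_LIST NblChain)
{
    PIRP irp = DequeuePendingIrp();
    if (irp == NULL) return;                 /* no buffer from UM yet */

    PUCHAR dst = MmGetSystemAddressForMdlSafe(irp->MdlAddress,
                                              NormalPagePriority);
    ULONG capacity = MmGetMdlByteCount(irp->MdlAddress);
    ULONG used = 0;

    for (PNET_BUFFER_LIST nbl = NblChain; nbl != NULL;
         nbl = NET_BUFFER_LIST_NEXT_NBL(nbl)) {
        for (PNET_BUFFER nb = NET_BUFFER_LIST_FIRST_NB(nbl); nb != NULL;
             nb = NET_BUFFER_NEXT_NB(nb)) {
            ULONG len = NET_BUFFER_DATA_LENGTH(nb);
            if (dst == NULL || used + len > capacity) {
                /* Payload does not fit in the user's buffer. */
                irp->IoStatus.Status = STATUS_BUFFER_OVERFLOW;
                irp->IoStatus.Information = used;   /* partial data copied */
                IoCompleteRequest(irp, IO_NO_INCREMENT);
                return;
            }
            /* NdisGetDataBuffer copies into the supplied storage if the
             * NET_BUFFER's data is fragmented across MDLs. */
            PVOID src = NdisGetDataBuffer(nb, len, dst + used, 1, 0);
            if (src != NULL && src != dst + used)
                RtlCopyMemory(dst + used, src, len);  /* was contiguous */
            used += len;
        }
    }

    irp->IoStatus.Status = STATUS_SUCCESS;
    irp->IoStatus.Information = used;
    IoCompleteRequest(irp, IO_NO_INCREMENT);
}
```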

You should queue multiple IRPs at the same time. Then, after one is completed, another is already waiting for the data that arrives next.

Thank you, Mr. MBond2. I have done that. But I wanted to send everything with one IRP, because splitting the payload into many NBs and sending every NB separately would decrease the speed.

There may be a limit on the total length of the entire payload, meaning all NBs from all NBLs. Perhaps this is a property of the network card. How can I query the card?

Yes, you can find out the MTU on the interface and make sure your buffer is larger than that. For most hardware, the largest jumbo frame is around 9,000 bytes.
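From user mode, one way to read the interface MTU is `GetAdaptersAddresses`, which reports it in the `Mtu` member of `IP_ADAPTER_ADDRESSES`. A minimal sketch (a filter driver would instead query `OID_GEN_MAXIMUM_FRAME_SIZE` from the miniport; the fixed buffer size here is a simplification, since the API can ask for a larger buffer):

```c
/* Sketch: list each interface's MTU from user mode via iphlpapi. */
#include <winsock2.h>
#include <iphlpapi.h>
#include <stdio.h>
#include <stdlib.h>
#pragma comment(lib, "iphlpapi.lib")

int main(void)
{
    ULONG size = 16 * 1024;                  /* simplification: one try */
    PIP_ADAPTER_ADDRESSES list = malloc(size);
    if (list == NULL) return 1;

    if (GetAdaptersAddresses(AF_UNSPEC, 0, NULL, list, &size) == NO_ERROR) {
        for (PIP_ADAPTER_ADDRESSES a = list; a != NULL; a = a->Next)
            printf("%ws: MTU %lu\n", a->FriendlyName, a->Mtu);
    }
    free(list);
    return 0;
}
```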

But what I mean is not that you should split the data for a single packet into multiple buffers, but that you should use OVERLAPPED I/O and send multiple buffers to the driver while there is no data. Then, as data arrives, fill the first buffer and send it back to UM. After the UM code runs, send another buffer back to the driver. The point is that because the speed at which the KM code can detect packets and the speed at which the UM code can process them are mismatched, you want a queue of pending buffers to even out spikes.
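The multi-buffer pattern just described might be sketched like this from the user-mode side. The device name and `IOCTL_GET_PAYLOAD` are invented for illustration; the essential shape is posting several overlapped `DeviceIoControl` requests up front and re-posting each buffer as soon as it has been processed.

```c
/* Sketch: keep several DeviceIoControl requests pending so the driver
 * always has a buffer ready when a packet arrives. The device name and
 * IOCTL_GET_PAYLOAD are hypothetical. */
#include <windows.h>
#include <stdio.h>

#define IOCTL_GET_PAYLOAD CTL_CODE(FILE_DEVICE_UNKNOWN, 0x801, \
                                   METHOD_OUT_DIRECT, FILE_ANY_ACCESS)
#define NUM_BUFFERS 8
#define BUF_SIZE    16000

int main(void)
{
    HANDLE dev = CreateFileW(L"\\\\.\\MyNdisFilter",
                             GENERIC_READ | GENERIC_WRITE, 0, NULL,
                             OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
    if (dev == INVALID_HANDLE_VALUE) return 1;

    static BYTE buf[NUM_BUFFERS][BUF_SIZE];
    OVERLAPPED ov[NUM_BUFFERS] = {0};
    HANDLE events[NUM_BUFFERS];

    /* Post all buffers up front so the driver has a queue to draw from. */
    for (int i = 0; i < NUM_BUFFERS; i++) {
        events[i] = CreateEventW(NULL, FALSE, FALSE, NULL);
        ov[i].hEvent = events[i];
        DeviceIoControl(dev, IOCTL_GET_PAYLOAD, NULL, 0,
                        buf[i], BUF_SIZE, NULL, &ov[i]);
    }

    for (;;) {
        DWORD i = WaitForMultipleObjects(NUM_BUFFERS, events, FALSE,
                                         INFINITE) - WAIT_OBJECT_0;
        DWORD bytes = 0;
        if (GetOverlappedResult(dev, &ov[i], &bytes, FALSE))
            printf("buffer %lu: %lu bytes of payload\n", i, bytes);
        /* ...process buf[i] here, then hand the buffer back... */
        DeviceIoControl(dev, IOCTL_GET_PAYLOAD, NULL, 0,
                        buf[i], BUF_SIZE, NULL, &ov[i]);
    }
}
```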

Thank You.

Great, thank you, Mr. MBond2. But what I'm trying to tell you is that everything you describe I have already implemented over the last 6 months. Please help me solve a more difficult problem. I want to free the buffer assigned to Irp->MdlAddress when the total length of the entire network payload from the NDIS filter driver's send path is greater than this buffer, allocate a new one of sufficient length, and attach it to the IRP in place of the old buffer.

Maybe I could allocate an extra buffer, add it to Irp->MdlAddress as new PFNs, and provide access to it from user mode.

I want to free the buffer assigned to Irp->mdlAddress…

That buffer does not belong to you. You can create a NEW buffer to send down below you, but you’ll need to restore the original during completion.

And provide access to it from user mode.

You’ll copy the data into a pending request that the UM sent down to you earlier.

Well, Mr. Tim_Roberts, UM sent me the buffer described by Irp->MdlAddress, and it did not have enough space for the entire payload. Sometimes I need to increase this space.

Somewhere I read that it is possible to chain a new MDL onto an already existing one (to build an MDL chain). But I can't imagine how the I/O manager would convert that into a user-mode buffer when it returns to the user.

Yes, I can split the payload into many NBs and return them via many IRPs, but that is not a very good solution for me.

Sometimes I need to increase this space.

Well, that’s simply impossible. Surely that must be obvious. The UM app allocated that space from its heap and has the address stored in its own pointers. You can’t arbitrarily extend that space, because there is almost certainly no room in his heap, and you can’t allocate your own space at a new address, because you don’t know where they might have stored the pointer. The IRP is merely telling you where the buffer is. Changes to the IRP are not reflected back to UM.

If there’s not enough room, you need to fail the IRP and have the app send you a larger buffer. There are NTSTATUS codes specifically for this condition.
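The fail-and-retry pattern suggested here is conventionally expressed with `STATUS_BUFFER_TOO_SMALL` (or the warning status `STATUS_BUFFER_OVERFLOW` when partial data is returned), putting the required size in `IoStatus.Information` so the app knows how big a buffer to send next. A minimal sketch of the driver side:

```c
/* Sketch: when the pending buffer is too small for the payload, complete
 * the IRP with a "too small" status and report the required size in
 * IoStatus.Information so the app can retry with a bigger buffer. */
#include <ntddk.h>

VOID FailIrpBufferTooSmall(PIRP Irp, ULONG RequiredBytes)
{
    /* STATUS_BUFFER_OVERFLOW is a warning status (some data returned);
     * STATUS_BUFFER_TOO_SMALL is an error status (none returned). */
    Irp->IoStatus.Status = STATUS_BUFFER_TOO_SMALL;
    Irp->IoStatus.Information = RequiredBytes;  /* size the app should use */
    IoCompleteRequest(Irp, IO_NO_INCREMENT);
}
```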

Yes, you can allocate a new MDL and replace the one in the IRP, but that’s only for requests that are travelling DOWN the stack towards the hardware. In that case, YOU become the client, so you are in control. You do not have the option of modifying the request on the way back up. No one is looking at that.
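The down-the-stack substitution described here, together with the earlier point about restoring the original MDL during completion, might be sketched as follows. This is an illustrative pattern, not production code: the pool tag is arbitrary, and allocation-failure handling is elided for brevity.

```c
/* Sketch: substitute a driver-allocated MDL into a request travelling
 * DOWN the stack, then restore the caller's MDL in the completion
 * routine. Allocation-failure handling is elided. */
#include <ntddk.h>

typedef struct _SWAP_CTX {
    PMDL  OriginalMdl;
    PMDL  NewMdl;
    PVOID NewBuffer;
} SWAP_CTX, *PSWAP_CTX;

NTSTATUS RestoreMdlCompletion(PDEVICE_OBJECT Dev, PIRP Irp, PVOID Context)
{
    PSWAP_CTX ctx = Context;
    UNREFERENCED_PARAMETER(Dev);

    Irp->MdlAddress = ctx->OriginalMdl;     /* put the caller's MDL back */
    IoFreeMdl(ctx->NewMdl);
    ExFreePoolWithTag(ctx->NewBuffer, 'xtcS');
    ExFreePoolWithTag(ctx, 'xtcS');
    if (Irp->PendingReturned) IoMarkIrpPending(Irp);
    return STATUS_SUCCESS;
}

NTSTATUS SendWithSubstituteMdl(PDEVICE_OBJECT Lower, PIRP Irp, ULONG Size)
{
    PSWAP_CTX ctx = ExAllocatePoolWithTag(NonPagedPoolNx,
                                          sizeof(*ctx), 'xtcS');
    if (ctx == NULL) return STATUS_INSUFFICIENT_RESOURCES;

    ctx->NewBuffer = ExAllocatePoolWithTag(NonPagedPoolNx, Size, 'xtcS');
    ctx->NewMdl = IoAllocateMdl(ctx->NewBuffer, Size, FALSE, FALSE, NULL);
    MmBuildMdlForNonPagedPool(ctx->NewMdl);

    ctx->OriginalMdl = Irp->MdlAddress;
    Irp->MdlAddress = ctx->NewMdl;          /* substitute on the way down */

    IoCopyCurrentIrpStackLocationToNext(Irp);
    IoSetCompletionRoutine(Irp, RestoreMdlCompletion, ctx,
                           TRUE, TRUE, TRUE);
    return IoCallDriver(Lower, Irp);
}
```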

Thank you, Mr. Tim_Roberts. I will now split it into many NBs and return them from the driver with multiple IRPs. Thanks for your many helpful tips.