Sounds like a Wireshark bug. There was a conversation on this list within the last year about Wireshark causing a perhaps similar crash. I’d suggest you search this list for Wireshark and see if anything sounds similar.
Did you Google the issue of sharing MDLs? Searching for “ndis partial mdl” instantly turns up a 2009 thread on the Windows Xen driver, http://www.osronline.com/showThread.cfm?link=147694, which points out that NdisCopyBuffer is essentially IoAllocateMdl + IoBuildPartialMdl.
Typically, there will be one MDL per receive NB, as if it were a real receive buffer. NDIS did have header/data split for a while, which put the headers and body in different MDLs and different buffers. On the receive path, there is a requirement that the NDIS-specified lookahead size bytes be in the first MDL of the receive NB. This is so upper layers can parse the packet headers without having to deal with disjoint chunks in multiple MDLs. NDIS is free to set the lookahead size for a NIC miniport to anything up to the MTU, although I don’t believe it typically does. I’ve recently seen it set to a value around 550 bytes (I don’t remember exactly), but I do remember rounding what NDIS was setting up to the next cache-line multiple, 576.
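The cache-line rounding mentioned above can be sketched in plain C (assuming 64-byte cache lines, which is what makes the ~550-byte value round up to 576; the helper name is mine, not an NDIS API):

```c
/* Round a lookahead size up to the next multiple of the cache-line
 * size.  With 64-byte lines, a ~550-byte lookahead rounds up to
 * 576 (9 * 64). */
static unsigned round_up(unsigned n, unsigned align)
{
    return (n + align - 1) / align * align;
}
```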
I know there is a bug in the WHQL test that fails if the NB offset is not zero on receive packets, and if you are sharing an MDL across many packets, the NB offset will necessarily be nonzero on some of them. I also know that on the send side into a NIC miniport, the WHQL tests share memory buffers (and maybe MDLs) across different packets. On the send side, buffers and MDLs need to be treated as read-only (also note that the bit in the NB that marks a packet as read-only may not be set). The only workaround I could find for this WHQL bug was to adjust the MDL to always use an NB offset of zero. If you share MDLs, that won’t be possible.
It offhand seems like there may be an issue of concurrent updating of a shared MDL. The MDL you get from the write may not have a kernel virtual address mapping, and upper-level protocols will need to access the buffer through a virtual address. If, for example, you indicate some NBs and some get processed on a different core in a worker thread, you may then have multiple cores needing to create a kernel address mapping, and if those happen at the same moment, you have two cores writing to the MDL with no locking. You might mitigate this by assuring the MDL is mapped before indicating it. A second issue is that things like filter drivers might want to insert/remove bytes in the packet, and they sometimes do so by manipulating the MDL chain. Filters are supposed to restore an NB back to its original state before returning a packet, but it offhand seems like while they own the packet they are free to manipulate it however they want, which might include manipulating the MDL. For the typical case of received packets with non-shared MDLs, this will not cause issues, but in your case it will. I’d say the read-only flag could be set in the NB if the packet really is read-only, but my experience has been that the read-only bit was not set even on definitely read-only packets.
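Purely as an illustration of the race described above, the “two cores lazily creating a mapping” problem is the classic publish-once pattern. This is a userland C sketch with hypothetical names, not how MmGetSystemAddressForMdlSafe is actually implemented, and mapping the buffer once, up front, before indicating remains the simpler fix:

```c
#include <stdatomic.h>
#include <stddef.h>

/* Hypothetical stand-in for the MDL field that caches the kernel VA. */
typedef struct {
    _Atomic(void *) mapped_va;
} fake_mdl;

/* Publish a freshly created mapping exactly once.  If another thread
 * published first, return its mapping; the caller would then tear
 * down the mapping it just created.  Without this (or up-front
 * mapping), two cores can both write the cached VA with no locking. */
static void *publish_mapping(fake_mdl *m, void *freshly_mapped_va)
{
    void *expected = NULL;
    if (atomic_compare_exchange_strong(&m->mapped_va, &expected,
                                       freshly_mapped_va))
        return freshly_mapped_va;   /* we won: our mapping is shared */
    return expected;                /* we lost: use the winner's */
}
```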
I can’t absolutely say it’s illegal to share MDLs across receive packets, but it does tread on an uncommon path, and any time you go off the beaten path you might find a new bug in the OS or in other ISV code.
If it were me, I think I’d preallocate MDL memory (largest size needed, allocated perhaps in the NBL context), and fill it in with IoBuildPartialMdl, which should not require any allocations or page table manipulation, just some copying/adjustment of the source MDL into the destination MDL. Each indicated NB could then have a unique MDL, which gets reused when the NBL/NB comes back. So your conversion from the write to the NBL indicate goes something like:
MmGetSystemAddressForMdlSafe on the write mdl to assure it has a mapping, so the partial mappings will use the same mapping
While (there are more packets to indicate in this write) {
Pull an NBL/NB with embedded MDL from the free pool (if pool is empty, do something appropriate)
Increment a reference count to the write
Use IoBuildPartialMdl to create a slice of the write buffer
Set appropriate NBL and NB fields for this packet
Put this NBL at the end of a chain to indicate
}
Indicate chain of NBLs
When you get the NBLs back, you just need to put them back on the free list and decrement the reference count on the original write. When the reference count goes to zero, you can complete the write.
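The reference-count bookkeeping above might look like the following userland sketch. In a driver this would be InterlockedIncrement/InterlockedDecrement on a count kept with the write, and IoCompleteRequest when it reaches zero; every name here is hypothetical:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Sketch of the write lifetime: one reference per indicated NB,
 * "complete" the write when the last one is released. */
typedef struct {
    atomic_int refcount;   /* one reference per indicated NB */
    bool completed;        /* stands in for completing the write IRP */
} write_ctx;

static void write_take_ref(write_ctx *w)
{
    atomic_fetch_add(&w->refcount, 1);
}

/* Called as each NBL/NB comes back; returns true if this release was
 * the last one and the write was "completed". */
static bool write_release_ref(write_ctx *w)
{
    if (atomic_fetch_sub(&w->refcount, 1) == 1) {
        w->completed = true;   /* IoCompleteRequest would go here */
        return true;
    }
    return false;
}
```

One caveat worth noting: if an indicated NBL can come back before the loop finishes building the chain, it is common to hold an extra “bias” reference for the duration of the loop and drop it after the indicate, so the count cannot hit zero early.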
Jan
From: on behalf of Menachem Shapira
Reply-To: Windows List
Date: Tuesday, May 23, 2017 at 11:01 AM
To: Windows List
Subject: [ntdev] NetBufferList chain using same MDL
Hi,
We have a Virtual NDIS Miniport driver that receives a chain of L2 Frames from a User Mode application.
It passes the head of the chain (a constant 65K buffer) to the driver’s IRP_MJ_WRITE routine (we’re using NDIS_WDM mode…)
The driver in turn builds a list of NBLs, each NBL describing a certain frame from the large buffer. In practice, we don’t use more than 10 frames per indication.
In order to conserve memory mappings, we decided to reuse the same MDL created by the I/O manager for the IRP, and have each NB point to a different chunk of the 65K buffer.
1) Is the sharing of an MDL permissible in such a constellation?
2) This solution seems to work in practice, and we achieve nice throughput. However, when we attach an upper filter driver (just running Wireshark, really, which attaches WinPcap), we get a bugcheck in tcpip.sys (see below). It looks like some bad pointer. Additionally, we see that our MDL has been meddled with. And here I was thinking that the MDL should describe a constant block of memory, and that other drivers should “look, but not touch”.
3) If my approach is not acceptable, what better approach can be suggested?
Summary of the bugcheck:
1: kd> !analyze -v
Bugcheck Analysis
DRIVER_IRQL_NOT_LESS_OR_EQUAL (d1)
…
Arguments:
Arg1: 00000006, memory referenced
Arg2: 00000002, IRQL
Arg3: 00000000, value 0 = read operation, 1 = write operation
Arg4: 89ac75aa, address which referenced memory
…
STACK_TEXT:
…
92438360 89ac75aa 0000000a 00000006 00000002 nt!KiTrap0E+0x1ca
92438498 89ac7339 8e5c60e4 924384e4 00000000 tcpip!Ipv4pValidateNetBuffer+0x2a
92438514 89acaa84 00000000 00000000 00000002 tcpip!IppFlcReceivePacketsCore+0x259
92438584 89acade8 89ae5500 91553000 b9142395 tcpip!FlpReceiveNonPreValidatedNetBufferListChain+0x214
92438640 81ea3e73 924386b4 70147e0b 00000001 tcpip!FlReceiveNetBufferListChainCalloutRoutine+0xd8
92438678 81ea3dbe bd6422c0 00000002 01000000 nt!KeExpandKernelStackAndCalloutInternal+0xa3
9243868c 89ae5988 89acad10 924386b4 00002400 nt!KeExpandKernelStackAndCalloutEx+0x1e
924386d4 8963769c 00628650 beff2e01 00000000 tcpip!FlReceiveNetBufferListChain+0x98
9243874c 896373b9 00000000 00000002 00000000 ndis!ndisMIndicateNetBufferListsToOpen+0x26c
92438798 8964dfdc 8e6370e8 beff2ef0 00000000 ndis!ndisMTopReceiveNetBufferLists+0x209
924387fc 8965ba91 00000002 00000000 8e6370e8 ndis!ndisInvokeNextReceiveHandler+0x25
92438844 89ce1159 cbd94de8 beff2ef0 00000000 ndis!NdisFIndicateReceiveNetBufferLists+0x1eb81
924388f4 89638c97 97066a88 beff2ef0 00000000 wfplwfs!LwfLowerRecvNetBufferLists+0x149
92438948 8964c2a9 cbd94de8 89ce1010 97066a88 ndis!ndisCallReceiveHandler+0x287
9243896c 81ea3e73 92438a74 701471d7 8e6370e8 ndis!ndisDataPathExpandStackCallback+0x27
924389a4 81ea3dbe bd6422c0 00000000 00000000 nt!KeExpandKernelStackAndCalloutInternal+0xa3
924389b8 89642605 8964c282 92438a74 00002666 nt!KeExpandKernelStackAndCalloutEx+0x1e
924389d0 896595dd d1768f20 8fb110e8 8fb11030 ndis!ndisExpandStack+0x11
92438ab4 90397e22 8e6370e8 beff2ef0 00000000 ndis!NdisMIndicateReceiveNetBufferLists+0x2171d
92438b10 9039307d d1768f20 00000000 92438b30 mydriver!Receive_DeviceWrite+0x562 [myfile.c @ 305]
92438b38 8969d1e4 8fb11030 d1768f20 d1768f20 mydriver!VpnDeviceWrite+0x1d [myfile.c @ 558]