BSOD in WFP driver based on WDK inspect sample

ashish_kohli Member - All Emails Posts: 61
edited April 2022 in WINDBG

We have a WFP driver based on the WDK inspect sample.
As the stack shows, at mydriver!WFPCloneReinjectInbound+0x18c I am calling the FwpsInjectTransportReceiveAsync0 function.

The current thread is making a bad pool request. Typically this is at a bad IRQL level or double freeing the same allocation, etc.
Arg1: 0000000000000007, Attempt to free pool which was already freed
Arg2: 0000000000001200, (reserved)
Arg3: 0000000000000000, Memory contents of the pool block
Arg4: ffffe00005c8e168, Address of the block of pool being deallocated

Debugging Details:

POOL_ADDRESS: ffffe00005c8e168

ANALYSIS_VERSION: 6.3.9600.17029 (debuggers(dbg).140219-1702) amd64fre

LAST_CONTROL_TRANSFER: from fffff8001d714f5c to fffff8001d5c38a0

ffffd000`218881c8 fffff800`1d714f5c : 00000000`000000c2 00000000`00000007 00000000`00001200 00000000`00000000 : nt!KeBugCheckEx
ffffd000`218881d0 fffff800`52303653 : 00000000`00000000 ffffe000`049b0500 ffffe000`049a1390 00000000`00000000 : nt!ExDeferredFreePool+0x6ec
ffffd000`218882c0 fffff800`53383455 : 00000000`00000000 fffff800`534fa6fd 00000000`00000000 00000000`00000000 : NETIO!NetioFreeMdl+0x232d3
ffffd000`21888310 fffff800`522d9142 : ffffe000`031e3500 00000000`00000001 00000000`00000000 00000000`00000000 : tcpip!FlpReturnNetBufferListChain+0x8b585
ffffd000`21888360 fffff800`522d53a2 : 00000000`00000000 ffffe000`049b05f0 00000000`00000000 ffffe000`050ee140 : NETIO!NetioDereferenceNetBufferList+0xb2
ffffd000`218883a0 fffff800`532fad53 : 00000000`00000000 ffffd000`21888400 00000000`00000000 00000000`00000000 : NETIO!NetioDereferenceNetBufferListChain+0x2e2
ffffd000`21888440 fffff800`532f9040 : fffff800`5344b180 ffffe000`050ee140 ffffe000`024e0000 ffffe000`024e0000 : tcpip!IppReceiveHeaderBatch+0x323
ffffd000`21888560 fffff800`533edd30 : ffffe000`03488bd0 00000000`00000000 00000000`00000001 00000000`00000000 : tcpip!IppFlcReceivePacketsCore+0x680
ffffd000`218888e0 fffff800`534fa2fd : ffffe000`04ae2902 ffffe000`02375c10 ffffd000`21888bb9 ffffd000`21883000 : tcpip!IppInspectInjectReceive+0x148
ffffd000`21888940 fffff800`1d52ef63 : 00000000`00000000 00000000`00000000 00000000`00000000 fffff800`534fa7c0 : fwpkclnt!FwppInjectionStackCallout+0xe5
ffffd000`218889d0 fffff800`5350b7ae : fffff800`534fa218 ffffd000`21888b40 00000000`00000010 ffffe000`03b32c70 : nt!KeExpandKernelStackAndCalloutInternal+0xf3
ffffd000`21888ac0 fffff800`52d0231c : ffffe000`03b32c70 00000000`00000000 ffffe000`049b0700 ffffe000`02e42650 : fwpkclnt!FwpsInjectTransportReceiveAsync0+0x2ea
ffffd000`21888c00 fffff800`52d026ed : ffffe000`050ee140 ffffe000`02e42650 fffff800`52d06e10 00000000`00000000 : mydriver!WFPCloneReinjectInbound+0x18c
ffffd000`21888c80 fffff800`1d571554 : ffffe000`03b33880 ffffe000`02e42650 00000000`00000080 00000000`00000001 : mydriver!WFP_AuthenticateThread+0x315
ffffd000`21888d40 fffff800`1d5c9ec6 : ffffd000`205ce180 ffffe000`03b33880 ffffd000`205da240 00000000`00005000 : nt!PspSystemThreadStartup+0x58
ffffd000`21888da0 00000000`00000000 : ffffd000`21889000 ffffd000`21883000 00000000`00000000 00000000`00000000 : nt!KiStartSystemThread+0x16

Some observations that may help:

-> This happens intermittently when we pend a packet at ALE AUTH RECV ACCEPT (inbound), process it on a separate thread, and then dereference the NET_BUFFER_LIST while reinjecting. Because it only happens sometimes, the BSOD occurs when we later try to dereference.

-> The machine has NSClient++ installed. When nscp.exe connects on port 5666, the packet arrives inbound on port 5666 at the server process, and the dereference happens while reinjecting it. After uninstalling NSClient++ the problem still occurred, though very infrequently.

-> I want to know under what conditions the stack dereferences the NBL itself, so that I can skip my own dereference in that particular case.

-> Searching Google I found many similar WFP driver crashes, but in every case the suggested solution was simply to uninstall the offending driver.
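For reference, the pend-and-reinject pattern described above looks roughly like the sketch below. This is WDK kernel-mode code (fwpsk.h) and is a sketch only, not the poster's actual code; names such as CloneReinjectComplete and the parameter plumbing are illustrative assumptions. The key ownership rule it demonstrates: once FwpsInjectTransportReceiveAsync0 succeeds, the clone belongs to the stack, and the completion routine is the only place it may be freed. Freeing or dereferencing the clone anywhere else is exactly the kind of double free that BAD_POOL_CALLER 0xC2 (Arg1 = 7) reports.

```c
#include <fwpsk.h>

// Illustrative completion routine (FWPS_INJECT_COMPLETE0 signature).
void NTAPI
CloneReinjectComplete(
    _In_ void* context,
    _Inout_ NET_BUFFER_LIST* netBufferList,
    _In_ BOOLEAN dispatchLevel)
{
    UNREFERENCED_PARAMETER(context);
    UNREFERENCED_PARAMETER(dispatchLevel);

    // The ONLY place the clone may be freed once injection succeeded.
    FwpsFreeCloneNetBufferList0(netBufferList, 0);
}

// Illustrative clone-and-reinject helper; parameters are assumptions.
NTSTATUS
CloneAndReinjectInbound(
    _In_ HANDLE injectionHandle,
    _In_ NET_BUFFER_LIST* origNbl,
    _In_ ADDRESS_FAMILY addressFamily,
    _In_ COMPARTMENT_ID compartmentId,
    _In_ IF_INDEX interfaceIndex,
    _In_ IF_INDEX subInterfaceIndex)
{
    NET_BUFFER_LIST* cloneNbl = NULL;

    NTSTATUS status = FwpsAllocateCloneNetBufferList0(
                          origNbl, NULL, NULL, 0, &cloneNbl);
    if (!NT_SUCCESS(status))
    {
        return status;
    }

    status = FwpsInjectTransportReceiveAsync0(
                 injectionHandle, NULL, NULL, 0,
                 addressFamily, compartmentId,
                 interfaceIndex, subInterfaceIndex,
                 cloneNbl, CloneReinjectComplete, NULL);

    if (!NT_SUCCESS(status))
    {
        // Injection never took ownership, so free the clone here.
        FwpsFreeCloneNetBufferList0(cloneNbl, 0);
    }
    return status;
}
```

If the driver also drops its own reference on the NBL after a successful injection, the block is freed twice, which matches the bugcheck in the dump above.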
Post edited by Peter_Viscarola_(OSR) on


  • OSR_Community_User Member Posts: 110,217
    That's eerie, because I saw a remarkably similar crash on my laptop yesterday:

    ffffd001`3026d1d8 fffff802`5c486f05 : 00000000`000000c2 00000000`00000007 00000000`00001254 00000000`00000000 : nt!KeBugCheckEx
    ffffd001`3026d1e0 fffff801`9b5991bf : ffffe001`c6da3890 ffffe001`c0dd8a70 ffffe001`cfe48080 00000000`00000000 : nt!ExFreePool+0x23d
    ffffd001`3026d2c0 fffff801`9b0a4149 : 00000000`00000000 00000000`00000000 00000000`00000000 00000000`00000000 : NETIO!NetioFreeMdl+0x2707f
    ffffd001`3026d310 fffff801`9b571440 : ffffe001`c0c30b60 00000000`00000000 00000000`00000000 00000000`00000000 : fwpkclnt!FwppInjectComplete+0x59
    ffffd001`3026d350 fffff801`9ae868aa : ffffe001`c1e78250 00000000`00000001 00000000`00000000 ffffe001`cb655d90 : NETIO!NetioDereferenceNetBufferListChain+0x2d0
    ffffd001`3026d400 fffff801`9ae42e3d : ffffe001`c17e0370 00000000`01543498 ffffe001`c1e785b4 00000033`e929b7ac : tcpip!TcpFlushDelay+0x7a
    ffffd001`3026d490 fffff801`9ae41e29 : ffffe001`c1799af0 ffffd001`302669f4 ffffd001`302668f4 ffffd001`00000003 : tcpip!TcpPreValidatedReceive+0x3ad
    ffffd001`3026d580 fffff801`9ae41a22 : 00000000`00000000 00000000`00000002 00000000`00000000 00000000`00000006 : tcpip!IppDeliverListToProtocol+0x59
    ffffd001`3026d630 fffff801`9ae40e74 : 00000000`00000001 00000a80`00000000 ffffd001`00000018 00000000`00000000 : tcpip!IppProcessDeliverList+0x62
    ffffd001`3026d6a0 fffff801`9ae958a8 : 00000000`00003dfb 00000000`00000020 00000000`00000000 ffffd001`3026d828 : tcpip!IppReceiveHeaderBatch+0x214
    ffffd001`3026d7a0 fffff801`9ae95469 : 00000000`00000000 fffff802`5c290a5a fffff801`9afe9300 fffff801`9afed9c8 : tcpip!IppLoopbackIndicatePackets+0x1f8
    ffffd001`3026d820 fffff802`5c290925 : ffffe001`c451a080 ffffd001`3026d9e0 fffff801`9ae95360 00000000`00000000 : tcpip!IppLoopbackTransmitCalloutRoutine+0x109
    ffffd001`3026d890 fffff801`9ae3f154 : 00000000`00000000 00000000`00000000 00000000`00000002 fffff801`9afe9310 : nt!KeExpandKernelStackAndCalloutInternal+0x85
    ffffd001`3026d8e0 fffff801`9ae3e8c5 : ffffe001`c17dca78 00000000`00000000 ffffe001`c17dca78 ffffd001`00003b9a : tcpip!IppDispatchSendPacketHelper+0x5f4
    ffffd001`3026db30 fffff801`9ae3cbc8 : ffffd001`3026df00 ffffe001`c2a11040 ffffd001`3026df00 ffffe001`c17dca78 : tcpip!IppPacketizeDatagrams+0x2e5
    ffffd001`3026dc60 fffff801`9af88b6e : 00000000`00000000 ffffd001`3026df07 fffff801`9afe9310 ffffe001`c0e55ad0 : tcpip!IppSendDatagramsCommon+0x4b8
    ffffd001`3026ded0 fffff801`9b0a4d70 : ffffe001`c451a080 20000001`68f46907 fffff801`9b0a4cd0 fffff801`9b5718db : tcpip!IppInspectInjectTlSend+0x16e
    ffffd001`3026e000 fffff802`5c290925 : ffffd001`3026e1e0 ffffd001`3026e1e0 00000000`00000000 ffffe001`c773f1e8 : fwpkclnt!FwppInjectionStackCallout+0xa0
    ffffd001`3026e090 fffff801`9b0a66c6 : ffffe001`c15d1ac0 ffffe001`c0c30b60 ffffe001`c15d19a0 ffffe001`c15d1ba0 : nt!KeExpandKernelStackAndCalloutInternal+0x85
    ffffd001`3026e0e0 fffff801`9b0a4c8e : 00000000`00000007 ffffd001`3026e220 ffffd001`3026e220 ffffe001`c0c30b60 : fwpkclnt!NetioExpandKernelStackAndCallout+0x52
    ffffd001`3026e120 fffff801`9b0a6393 : ffffe001`00000000 ffffe001`c6f844e0 ffffe001`c1c24a90 00000000`00003dfb : fwpkclnt!FwppInjectTransportSendAsync+0x552
    ffffd001`3026e320 fffff801`9bb3a85f : 00000000`003bb953 fffff801`9b22e766 00000000`00000000 ffffe001`cb506ba0 : fwpkclnt!FwpsInjectTransportSendAsync0+0x63
    ffffd001`3026e390 fffff801`9bb3f55d : ffffe001`c1c24a90 00000000`00000000 00000000`046eedb8 ffffe001`c3d78fc0 : vsdatant+0xa85f
    ffffd001`3026e420 fffff801`9bb5410e : 00000000`8400008f 00000000`00000000 ffffe001`c1c24a90 00000000`00000000 : vsdatant+0xf55d
    ffffd001`3026e4d0 fffff801`9bb54ed8 : ffffe001`c784c140 ffffe001`c7836f00 e001c79a`40607aa3 00000000`0012019f : vsdatant+0x2410e
    ffffd001`3026e680 fffff801`9bb54f2c : 00000000`00000001 00000000`00000000 ffffe001`c79a4090 ffffd001`3026ea80 : vsdatant+0x24ed8
    ffffd001`3026e720 fffff802`5c6516b3 : 00000000`00000000 ffffd001`3026ea80 ffffe001`c79a4090 ffffe001`c451a080 : vsdatant+0x24f2
    ffffd001`3026e750 fffff802`5c650456 : e001c443`3f30bd55 00000000`001f0003 00000000`00000000 00000000`00000000 : nt!IopXxxControlFile+0x1253
    ffffd001`3026e920 fffff802`5c36cb63 : 00000000`00000000 00000000`00000001 00000000`00000001 fffff802`5c64de00 : nt!NtDeviceIoControlFile+0x56
    ffffd001`3026e990 00000000`6bca1e52 : 00000000`00000000 00000000`00000000 00000000`00000000 00000000`00000000 : nt!KiSystemServiceCopyEnd+0x13
    00000000`00e7f0e8 00000000`00000000 : 00000000`00000000 00000000`00000000 00000000`00000000 00000000`00000000 : 0x6bca1e52

    My stack isn't identical, but it is quite similar. In my case, I suspect it is related to the (just released) VPN client software, but I don't have a smoking gun yet (though the fact that it hasn't happened since I got this crash dump, because I haven't used the VPN client since, *is* a bit of a smoking gun).

    Since I am not sure exactly what's going wrong yet, I enabled driver verifier: special pool, pool tracking, irp logging and I/O verification. I did this on the drivers that were on the stack so that the next time this happens I'll get more information. I'd suggest you try something similar and see what you find.
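    For the record, the equivalent Driver Verifier command line would be something like the following (flag values per the verifier documentation: 0x1 special pool, 0x8 pool tracking, 0x10 I/O verification, 0x400 IRP logging; the driver name is a placeholder for whatever is on your stack), followed by a reboot:

    ```shell
    verifier /flags 0x419 /driver mydriver.sys
    ```

    With special pool enabled, the double free should fault at the point of the second free rather than later in ExDeferredFreePool, which makes the guilty caller much easier to identify.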

  • ashish_kohli Member - All Emails Posts: 61

    This issue is reproducible on only one machine, and mostly via nscp.exe inbound on port 5666.
    With nscp.exe disabled it still happened on svchost at port 3389, but only after a long time (3 days).

    According to my observation and analysis, although the clone NBL is passed to FwpsInjectTransportReceiveAsync0, in some cases the original NBL gets dereferenced.

    So under race conditions a BSOD may happen when we try to dereference it later.
    Or, if we have already dereferenced it, it may happen as in the dump I enclosed.

    I am unable to find the root cause of why the original NBL is dereferenced in some cases.

    The decision probably happens in tcpip!IppReceiveHeaderBatch, but I could not decipher what triggers it.
  • sungje Member Posts: 12


    Check for IPsec.
    If it's an IPsec packet, skip it or reconstruct it.

    To allow IPsec to process inbound packets first, the callout that inspects the transport layer data must have a lower subLayerWeight value in the FWPS_FILTER0 structure than the universal sublayer. In addition, the callout driver must not intercept tunnel-mode packets for which the combination of FWPS_PACKET_LIST_INBOUND_IPSEC_INFORMATION0 members (isTunnelMode && !isDeTunneled) is returned by the FwpsGetPacketListSecurityInformation0 function. The callout driver must wait for the packet to be detunneled and then intercept it at the transport layer or at a forward layer.

    if (packet->ipSecProtected)
    {
        //
        // When an IPsec-protected packet is indicated to the AUTH_RECV_ACCEPT
        // or INBOUND_TRANSPORT layers, for performance reasons the tcpip stack
        // does not remove the AH/ESP header from the packet. Such packets
        // cannot be recv-injected back into the stack without removing the
        // AH/ESP header, so before re-injection we need to "re-build" the
        // cloned packet.
        //
        status = /* rebuild call truncated in the original post */
        0 );
        if (!NT_SUCCESS(status))
        {
            goto Exit;
        }
    }
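    A sketch of the tunnel-mode check described above might look like the following. This is an assumption-laden illustration, not verified code: the FWPS_PACKET_LIST_INFORMATION0 field path (ipsecInformation.inbound) and the query flags are as I recall them from fwpsk.h, and netBufferList stands in for the callout's layerData.

    ```c
    // Query IPsec information for an inbound NBL and skip packets
    // that are tunnel-mode but not yet de-tunneled.
    FWPS_PACKET_LIST_INFORMATION0 packetInfo = { 0 };

    NTSTATUS status = FwpsGetPacketListSecurityInformation0(
                          netBufferList,
                          FWPS_PACKET_LIST_INFORMATION_QUERY_INBOUND |
                              FWPS_PACKET_LIST_INFORMATION_QUERY_IPSEC,
                          &packetInfo);

    if (NT_SUCCESS(status) &&
        packetInfo.ipsecInformation.inbound.isTunnelMode &&
        !packetInfo.ipsecInformation.inbound.isDeTunneled)
    {
        // Don't intercept here; wait for the de-tunneled indication
        // at the transport or forward layer.
    }
    ```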
  • Peter_Viscarola_(OSR) Administrator Posts: 9,160

    /roll eyes

    You know you're replying to a six year old post, right?

    Thread locked.


    Peter Viscarola

This discussion has been closed.
