Or, as I said, find your oem##.inf in C:\Windows\Inf; that INF can also be used to uninstall. Your driver's registry keys include an entry that records your assigned INF name.
Unfortunately, our testers don't agree with Scott -- they don't want to ignore this warning. :-)
I finally figured out what was going wrong in our case -- we weren't leaking anything.
As it turns out, Filter Verifier walks the stack and tries to match the address of the calling instruction to a filter.
FLTMGR!FltvFreeGenericWorkItem excerpt:

fffff800`21d803e1 48ff154836fdff  call  qword ptr [FLTMGR!_imp_RtlGetCallersAddress (fffff800`21d53a30)]
fffff800`21d803e8 0f1f440000      nop   dword ptr [rax+rax]
fffff800`21d803ed 488b4c2438      mov   rcx,qword ptr [rsp+38h]
fffff800`21d803f2 e809e2ffff      call  FLTMGR!FltpvGetFilterFromCallerAddress (fffff800`21d7e600)
In our case, the compiler optimized the call instruction into a jmp instruction (a tail call), which erases our driver from the stack. Filter Manager cannot find our filter, and so we're left with an apparently outstanding work item.
224 fffff800`1e219b36 4883c430        add  rsp,30h
224 fffff800`1e219b3a 5f              pop  rdi
223 fffff800`1e219b3b 48ff25f6440300  jmp  qword ptr [Parity!_imp_FltFreeGenericWorkItem (fffff800`1e24e038)]  Branch
This can only happen if the call to FltFreeGenericWorkItem is the last statement in the function, so I re-ordered the function to make our "leak" magically disappear :-)
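For illustration, here's a minimal sketch of the pattern; the function names and the outstanding-work-item counter are hypothetical, not our actual code, and it only shows how a trailing call can become a tail-call jmp and how re-ordering avoids it:

#include <fltKernel.h>

// Hypothetical sketch only. When FltFreeGenericWorkItem is the final
// statement, an optimizing compiler may emit it as a tail call ("jmp"),
// so Filter Verifier's stack walk no longer sees our filter as the
// caller and the work item appears to be leaked.
VOID
CleanupWorkItemTailCall(
    _In_ PFLT_GENERIC_WORKITEM WorkItem,
    _Inout_ volatile LONG *OutstandingWorkItems
    )
{
    InterlockedDecrement(OutstandingWorkItems);
    FltFreeGenericWorkItem(WorkItem);   // may compile to "jmp FltFreeGenericWorkItem"
}

// Re-ordered so the free is no longer the last statement; a genuine
// "call" is emitted and our return address stays on the stack for the
// verifier's stack walk to find.
VOID
CleanupWorkItemReordered(
    _In_ PFLT_GENERIC_WORKITEM WorkItem,
    _Inout_ volatile LONG *OutstandingWorkItems
    )
{
    FltFreeGenericWorkItem(WorkItem);   // genuine "call"
    InterlockedDecrement(OutstandingWorkItems);
}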
That's not what the _Id parameter is for. The NetBufferListInfo member of the NET_BUFFER_LIST structure is basically a big anonymous structure that contains different things at different times. The _Id parameter specifies which element of that structure you want to read or write. If you look in <ndis.h> at "typedef enum _NDIS_NET_BUFFER_LIST_INFO", that is the list of things you can fetch by passing one of the enumerators as the _Id parameter.
Note that I'm not an NDIS guy -- I was able to figure this out by reading the .H file. Nothing beats going straight to the source.
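For concreteness, here's a minimal sketch of fetching one slot, assuming NDIS 6.x filter or miniport code with a NET_BUFFER_LIST in hand; the function name is mine, and TcpIpChecksumNetBufferListInfo is just one example _Id:

#include <ndis.h>

// Reads the checksum-offload slot of NetBufferListInfo. The _Id value
// (here TcpIpChecksumNetBufferListInfo) selects which slot is fetched and
// therefore how the stored value is interpreted.
VOID
InspectChecksumInfo(
    _In_ PNET_BUFFER_LIST Nbl
    )
{
    NDIS_TCP_IP_CHECKSUM_NET_BUFFER_LIST_INFO csumInfo;

    csumInfo.Value = NET_BUFFER_LIST_INFO(Nbl, TcpIpChecksumNetBufferListInfo);

    if (csumInfo.Transmit.TcpChecksum) {
        // On the send path, the sender asked the NIC to compute the TCP checksum.
    }
}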
Hello! Can anybody please help me with my problem?
I decided to write a driver that maps virtual address zero to physical page 0. To do this, I wrote the DWORD 0x00000001 to VA 0xC0000000. After that, I checked the result:
kd> dd 0xC0000000
c0000000 00000001 00000000 00000000 00000000
kd> !pte 0
VA 00000000
PDE at C0300000 PTE at C0000000
contains 2FA62867 contains 00000001
pfn 2fa62 ---DA--UWEV pfn 0 -------KREV
kd> !db 0
check VA 0:
kd> db 0
Everything works in the kernel debugger: VA 0 is now mapped to physical address 0. I get the same result if I attach to the program with the user-mode debugger in the guest VM, which means it works! But it only works in the debuggers, because when my program itself tries to read address 0 (expecting to get 0x53), it throws an exception, even though it no longer should.
Can you please tell me what the problem could be? Could it be the TLB? How can I view the TLB from WinDbg?
P.S. I'm testing on Windows 7 x86 without PAE, for simplicity.
UPD: I solved it. I forgot to set the U/S flag.
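For anyone following along, here's a minimal sketch of the non-PAE x86 PTE bits involved; the constants and the helper are illustrative only, not code from my driver:

#include <ntddk.h>

// x86 non-PAE PTE bits (low bits of the 32-bit entry).
#define PTE_PRESENT   0x00000001u   // P   - the page is mapped
#define PTE_WRITABLE  0x00000002u   // R/W - writes allowed
#define PTE_USER      0x00000004u   // U/S - user-mode access allowed

// Illustrative helper: on non-PAE Windows x86 the PTE self-map starts at
// 0xC0000000, so the PTE for VA 0 is the very first entry there.
VOID
MapVaZeroToPfnZero(VOID)
{
    volatile ULONG *pteForVaZero = (volatile ULONG *)0xC0000000;

    // Writing 0x00000001 maps PFN 0 as present but supervisor-only, so a
    // user-mode read of VA 0 still faults. Setting U/S fixes that.
    *pteForVaZero = 0x00000000 /* PFN 0 */ | PTE_PRESENT | PTE_USER;

    // Invalidate the stale TLB entry for VA 0 after editing the live PTE.
    __invlpg((PVOID)0);
}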
First, you can encrypt in the kernel; the operating system even has kernel-mode calls for the same crypto model it provides in user space. If you really need to send the data to user space, look up the "inverted call" model, which is well documented on the OSR site; see https://www.osr.com/nt-insider/2013-issue1/inverted-call-model-kmdf/
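If it helps, here's a minimal sketch of the first option, assuming the kernel-mode CNG (BCrypt*) API is what you'd use; the function name is mine and error handling is trimmed:

#include <ntddk.h>
#include <bcrypt.h>

// Opens an AES algorithm provider from kernel mode. BCRYPT_PROV_DISPATCH
// requests a provider that can be called at DISPATCH_LEVEL; omit it if
// you only call at PASSIVE_LEVEL. Encryption itself would then go through
// BCryptGenerateSymmetricKey / BCryptEncrypt.
NTSTATUS
OpenAesProvider(
    _Out_ BCRYPT_ALG_HANDLE *Algorithm
    )
{
    return BCryptOpenAlgorithmProvider(Algorithm,
                                       BCRYPT_AES_ALGORITHM,
                                       NULL,
                                       BCRYPT_PROV_DISPATCH);
}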
Just FYI to confirm that it's my understanding too that the SHA-2 support is -- and remains -- a separate post-SP1 update, "KB3033929 Windows 7 SP1 SHA-2 Support". While we're talking about it, I'll also give a shout-out to "KB2921916 Windows 7 Publisher Verification Prompt SHA-256", if you find your "always trust software from xxxxxx" selection isn't working after applying SHA-2 support.
I don't know the definitive answer to "what does Microsoft do for SHA-1", but the answer USED to be that Microsoft still provided an SHA-1 signature when you were testing and submitting for an SHA-1-dependent platform like Windows Server 2008. (Which will be an HCK testing process, not HLK.) You didn't get it "regardless of platform", but you did get the signature required for the platform you were testing and signing for.
I say "used to be the answer" since SHA-1 has been deprecated since then, and I don't know what Microsoft might do or no longer do. Nor do I perform HCK testing of any drivers myself to have encountered what this answer might be. If HCK testing and signing for these down-level platforms is still allowed, it would seem like Microsoft "has to be" still providing an SHA-1 signature. But who knows.
Regarding what will happen to the existing signature on individual binary files, at least in the Attested Signing process it would append Microsoft's signature to whatever signature was already on the binary. We directly relied on this behavior with our product, and knew it was true.
But, that "existing signature in the binary" was also a Microsoft cross-signing signature, and Microsoft cross-signing has also been deprecated at this point. So it's not possible to submit a binary with a pre-existing kernel-policy signature any more, since you can't cross-sign your binary. What Microsoft's process might do with an unverifiable self-signed certificate signature, or other private CA certificate signature, or what they might do with any non-kernel-policy signature (non-cross signed, normal Authenticode signature) on the binary file is unknown to me.
That's maybe the final thing to point out: If you're getting a Microsoft SHA-2 signature on your signed binary files, you want to NOT SIGN those binary files before submission to Microsoft if you want to have any chance of those files being used on an SHA2-compatible down-level platform. The down-level platforms do not successfully traverse "what is the SECOND signature on the binary file", and will be stuck looking only at whatever your non-Microsoft first signature was.
Oh okay, I think I understand: passing or using any sort of floats on a 32-bit system needs to be done in a protected region, not just the code that does calculations on them.
Right. The issue is not executing floating point instructions, the issue is holding stuff in floating point registers. The 32-bit systems do not automatically save and restore the floating point registers on every kernel thread context switch, so if you tweak a register without following the rules, you could screw up some other thread.
… and the warnings go away?
Back in the days of XP (or possibly even NT4 SP2 - maybe even 3.7) someone thought it would be a cute way of saving nanoseconds and NPP(*) to allocate a FILE_OBJECT (FO) on the stack to streamline some operations (I am sure someone out there will remember which, but I would guess it was in a FastIo path equivalent to the stat(3) callbacks we now have).
This worked really well until file system filters (no minifilters in those days) started doing real object manipulations on them, like referencing them and dereferencing them later. At that stage the dereference would turn into a decrement of a random bit of stack, which would usually cause a series of difficult-to-diagnose crashes.
So people got into the habit of saying ‘if this FO is on the stack, keep clear of it’. Hence that code. I haven’t seen a stack-allocated FO since Vista or maybe Win7 but, given the difficulty of debugging this sort of crash, the code hangs around.
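One plausible way such a check could be written is sketched below; the helper name is mine, and actual filters may well have implemented the test differently:

#include <ntddk.h>

// Returns TRUE if the FILE_OBJECT lies within the current thread's kernel
// stack, in which case a filter must not take or drop references on it.
BOOLEAN
LooksLikeStackFileObject(
    _In_ PFILE_OBJECT FileObject
    )
{
    ULONG_PTR lowLimit;
    ULONG_PTR highLimit;

    // Bounds of the current thread's kernel stack.
    IoGetStackLimits(&lowLimit, &highLimit);

    return ((ULONG_PTR)FileObject >= lowLimit &&
            (ULONG_PTR)FileObject < highLimit);
}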
It might be interesting to put in a PR to pull that code and see if it gets accepted; that would be a pretty clear indication of whether the practice really has been expunged.
R
(*) Remember NT had to boot in 64MB of physical memory and run on what would now appear to be slow processors, so allocating pool had a real and (critically) measurable cost. Hence tricks like this had real value and were done within a reasonably ‘rigorous’ engineering process (his initials were DC and it might even have been his idea), so it’s not as daft as it might sound now - certainly less strange than some things you see in an active kernel these days.
I think that your confusion is that you expect KeSaveFloatingPointState to protect only the code that does computations with floating-point variables, but really it needs to protect all code that uses any floating-point variables (including return values).
Any stack frame that has a float or double local variable or return value (or accesses any float or double global) needs to use the floating-point registers to move those values around. Historically, those registers were not preserved across certain context switches, to improve performance. So executing code that uses them can clobber values that another kernel-mode thread expects to remain invariant across the point where it was interrupted, causing random errors in whatever process gets corrupted. Explicitly saving that state via KeSaveFloatingPointState, and restoring it via KeRestoreFloatingPointState after any possible use of those registers has completed, solves this problem. But the state has to be saved before any possible access and restored after any possible use.
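As a concrete illustration, here's a minimal sketch for a 32-bit (x86) driver; the function and the computation are made up, only the save/restore pattern matters:

#include <ntddk.h>

// The entire lifetime of any float/double value - assignment, arithmetic,
// and the implicit register moves for parameters and return values - must
// sit inside the save/restore window.
NTSTATUS
ScaleValue(
    _In_ ULONG Input,
    _Out_ PULONG Output
    )
{
    KFLOAT_SAVE floatSave;
    NTSTATUS status;
    float scaled;

    // Callable at or below DISPATCH_LEVEL; snapshots the FP state so our
    // register use can't corrupt whatever thread we interrupted.
    status = KeSaveFloatingPointState(&floatSave);
    if (!NT_SUCCESS(status)) {
        return status;
    }

    scaled = (float)Input * 1.5f;
    *Output = (ULONG)scaled;

    KeRestoreFloatingPointState(&floatSave);
    return STATUS_SUCCESS;
}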