ZwClose + stack overflow?

Hello,

I’ve been battling a stack issue for a couple of days now and I’ve
figured out what is causing the problem. I noticed that after calling
ZwClose on a file, my stack trace looks something like the one below:

Ntfs!_SEH_prolog+0x1a
Ntfs!NtfsPrepareBuffers+0x270
Ntfs!NtfsNonCachedIo+0x20e
Ntfs!NtfsCommonWrite+0x1821
Ntfs!NtfsFsdWrite+0xf3
nt!IopfCallDriver+0x31
fltMgr!FltpDispatch+0x152
nt!IopfCallDriver+0x31
sr!SrWrite+0xaa
nt!IopfCallDriver+0x31
fltMgr!FltpLegacyProcessingAfterPreCallbacksCompleted+0x20b
fltMgr!FltpDispatch+0x11f
nt!IopfCallDriver+0x31
nt!IopfCallDriver+0x31
fltMgr!FltpDispatch+0x152
nt!IopfCallDriver+0x31
nt!IoSynchronousPageWrite+0xaf
nt!MiFlushSectionInternal+0x3f8
nt!MmFlushSection+0x1f2
nt!CcFlushCache+0x3a0
Ntfs!LfsFlushLfcb+0x227
Ntfs!LfsFlushLbcb+0x81
Ntfs!LfsFlushToLsnPriv+0xf3
Ntfs!LfsFlushToLsn+0x8e
Ntfs!NtfsCommitCurrentTransaction+0x215
Ntfs!NtfsCompleteRequest+0x1d
Ntfs!NtfsCommonCleanup+0x2604
Ntfs!NtfsFsdCleanup+0xcf
nt!IopfCallDriver+0x31
nt!IopCloseFile+0x26b
nt!ObpDecrementHandleCount+0xd8
nt!ObpCloseHandleTableEntry+0x14d
nt!ObpCloseHandle+0x87
nt!NtClose+0x1d
nt!KiFastCallEntry+0xfc
nt!ZwClose+0x11

…

I’ve calculated the amount of stack space that the ZwClose path
consumes and it seems to be about 3.5KB. From what I’ve been reading,
my kernel stack is only 12KB, so this one call is consuming more than
25% of my stack! Is there a common way to deal with this issue? Can I
push this task off to another thread (a worker thread), possibly using
IoAllocateWorkItem/IoQueueWorkItem, as sketched below? Any advice will
be greatly appreciated.
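
Something like this untested sketch is what I have in mind (the
structure and routine names are just placeholders):

typedef struct _CLOSE_CONTEXT {
    PIO_WORKITEM WorkItem;
    HANDLE       Handle;
} CLOSE_CONTEXT, *PCLOSE_CONTEXT;

// Runs on a system worker thread, which starts out with a nearly
// empty kernel stack. The handle must have been opened with
// OBJ_KERNEL_HANDLE, since worker threads run in the System process.
VOID
DeferredCloseWorker(
    PDEVICE_OBJECT DeviceObject,
    PVOID Context
    )
{
    PCLOSE_CONTEXT closeContext = (PCLOSE_CONTEXT)Context;

    UNREFERENCED_PARAMETER(DeviceObject);

    ZwClose(closeContext->Handle);

    IoFreeWorkItem(closeContext->WorkItem);
    ExFreePoolWithTag(closeContext, 'xtcC');
}

NTSTATUS
QueueDeferredClose(
    PDEVICE_OBJECT DeviceObject,
    HANDLE Handle
    )
{
    PCLOSE_CONTEXT closeContext;

    closeContext = (PCLOSE_CONTEXT)ExAllocatePoolWithTag(
                       NonPagedPool, sizeof(CLOSE_CONTEXT), 'xtcC');
    if (closeContext == NULL) {
        return STATUS_INSUFFICIENT_RESOURCES;
    }

    closeContext->WorkItem = IoAllocateWorkItem(DeviceObject);
    if (closeContext->WorkItem == NULL) {
        ExFreePoolWithTag(closeContext, 'xtcC');
        return STATUS_INSUFFICIENT_RESOURCES;
    }

    closeContext->Handle = Handle;

    // DelayedWorkQueue is the usual choice for non-time-critical work.
    IoQueueWorkItem(closeContext->WorkItem, DeferredCloseWorker,
                    DelayedWorkQueue, closeContext);

    return STATUS_PENDING;
}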

Thanks

KeExpandKernelStackAndCallout (on x64 Win2k3 and on Vista and later),
but I would first check whether your driver is causing this issue. If
it is, you need to fix that first.
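
A minimal sketch of the callout pattern (illustrative names; the
callout runs on a temporarily enlarged stack):

// Runs on the expanded stack.
VOID
CloseHandleCallout(
    PVOID Parameter
    )
{
    ZwClose((HANDLE)Parameter);
}

NTSTATUS
CloseOnExpandedStack(
    HANDLE Handle
    )
{
    // IRQL <= APC_LEVEL. The export only exists on x64 Win2k3 and on
    // Vista and later, so resolve it dynamically with
    // MmGetSystemRoutineAddress if the driver must load elsewhere.
    return KeExpandKernelStackAndCallout(CloseHandleCallout,
                                         (PVOID)Handle,
                                         MAXIMUM_EXPANSION_SIZE);
}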

Krzysztof Uchronski


Thanks for the reply. Currently, my code is just closing the file
handle, which then produces the stack described earlier. I’ve used the
“knf” command in WinDbg, which outputs the size of each frame, and
I’ve determined that the frames below ZwClose total about 3.5KB. That
seems a bit much. Are there techniques I can use to improve this?


How are the other 8KB of stack being consumed? In what context/stack are you closing the handle?

d

tiny phone keyboard + fat thumbs = you do the muth


Can you post the entire kf output? Also, before you do the kf, make
sure you do a “.kframes 1000” to set the default stack depth to the
max.
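
That is, in the debugger:

0: kd> .kframes 1000
0: kd> knf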

-scott


Scott Noone
Consulting Associate
OSR Open Systems Resources, Inc.
http://www.osronline.com


Thanks for the quick replies. Unfortunately, I can’t give the rest of
the stack. I am actually seeing this issue within the OsrDmk. I’ve
implemented a filter driver that sits above the OsrDmk, and I notice
that when it frees file handles by calling ZwClose, I get the huge
stack trace mentioned earlier. I started digging through the source
and noticed that the OsrDmk does a lot of inline closes. One solution
I had was to create a worker thread that performs the closes on a
separate thread (see the sketch below); this should drastically reduce
the size of the stack, since the inline closes are adding about 3-4KB
to my stack. I am not really sure where to go from here. I’ll probably
try implementing this solution and see if it helps. Is it possible to
increase the kernel stack size, by chance? I’d like to increase it a
little for testing purposes and see if it fixes the issue.
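
For reference, the close thread I have in mind looks roughly like
this (untested sketch; thread creation via PsCreateSystemThread and
termination handling omitted):

typedef struct _CLOSE_QUEUE {
    LIST_ENTRY Head;
    KSPIN_LOCK Lock;
    KSEMAPHORE Semaphore;
} CLOSE_QUEUE;

typedef struct _CLOSE_ENTRY {
    LIST_ENTRY Links;
    HANDLE     Handle;
} CLOSE_ENTRY;

// System thread: every close starts from a nearly empty stack.
VOID
CloseThreadRoutine(
    PVOID Context
    )
{
    CLOSE_QUEUE *queue = (CLOSE_QUEUE *)Context;

    for (;;) {
        PLIST_ENTRY link;
        CLOSE_ENTRY *entry;

        KeWaitForSingleObject(&queue->Semaphore, Executive,
                              KernelMode, FALSE, NULL);

        link = ExInterlockedRemoveHeadList(&queue->Head, &queue->Lock);
        entry = CONTAINING_RECORD(link, CLOSE_ENTRY, Links);

        // Handle must be a kernel handle (OBJ_KERNEL_HANDLE), since
        // this thread runs in the System process context.
        ZwClose(entry->Handle);
        ExFreePoolWithTag(entry, 'tslC');
    }
}

VOID
QueueClose(
    CLOSE_QUEUE *Queue,
    HANDLE Handle
    )
{
    CLOSE_ENTRY *entry;

    entry = (CLOSE_ENTRY *)ExAllocatePoolWithTag(NonPagedPool,
                                                 sizeof(*entry), 'tslC');
    if (entry == NULL) {
        // Out of memory: close inline and eat the stack cost.
        ZwClose(Handle);
        return;
    }

    entry->Handle = Handle;
    ExInterlockedInsertTailList(&Queue->Head, &entry->Links, &Queue->Lock);
    KeReleaseSemaphore(&Queue->Semaphore, 0, 1, FALSE);
}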

Thanks


No you can’t increase the km stack size. I hope you realize that by withholding information/context of what you are doing, the ability of anyone to help goes down dramatically.

d

tiny phone keyboard + fat thumbs = you do the muth


On Thu, Mar 4, 2010 at 5:48 PM, Doron Holan wrote:
> No you can’t increase the km stack size.

There’s KeExpandKernelStackAndCallout() for post-2k3 systems; it’s not
quite the same thing as permanently increasing the kernel stack, but
it would do what you want for testing purposes.


Aram Hăvărneanu

Also, doesn’t NTFS somehow increase the kernel stack to 64KB using
some internal mechanism, making the above problem nonexistent? I’m
speaking folklore here, but I read that somewhere.


Aram Hăvărneanu

Very true! Okay, to protect the innocent, I’ve obscured it a little…
:-) but I hope this helps… Thanks

0: kd> knf 100

#  Memory ChildEBP RetAddr

00 8054d7c4 804f8e09 nt!RtlpBreakWithStatusInstruction
01 4c 8054d810 804f9ef5 nt!KiBugCheckDebugBreak+0x19
02 3e0 8054dbf0 804f9f43 nt!KeBugCheck2+0xa75
03 20 8054dc10 80669e00 nt!KeBugCheckEx+0x1b
04 18 8054dc28 8066ae93 nt!KdpCauseBugCheck+0x10
05 70 8054dc98 8066b000 nt!KdpSendWaitContinue+0x319
06 120 8054ddb8 804f7e30 nt!KdpReportExceptionStateChange+0x8a
07 20 8054ddd8 8066c300 nt!KdpReport+0x60
08 2c 8054de04 804fe59f nt!KdpTrap+0x108
09 3cc 8054e1d0 805420a5 nt!KiDispatchException+0x129
0a 68 8054e238 805427fe nt!CommonDispatchException+0x4d
0b 0 8054e238 8052b5f9 nt!KiTrap03+0xae
0c 78 8054e2b0 804f8e09 nt!RtlpBreakWithStatusInstruction+0x1
0d 4c 8054e2fc 804f99f4 nt!KiBugCheckDebugBreak+0x19
0e 3e0 8054e6dc 80543571 nt!KeBugCheck2+0x574
0f 0 8054e6dc ba52a33a nt!KiTrap08+0x48
10 3a7729f4 bacc10d0 ba52affc Ntfs!_SEH_prolog+0x1a
11 1d0 bacc12a0 ba52ac76 Ntfs!NtfsPrepareBuffers+0x270
12 1e8 bacc1488 ba52bfbc Ntfs!NtfsNonCachedIo+0x20e
13 1f8 bacc1680 ba52bc18 Ntfs!NtfsCommonWrite+0x1821
14 174 bacc17f4 804ef19f Ntfs!NtfsFsdWrite+0xf3
15 10 bacc1804 ba6ef09e nt!IopfCallDriver+0x31
16 2c bacc1830 804ef19f fltMgr!FltpDispatch+0x152
17 10 bacc1840 ba5fb3ca nt!IopfCallDriver+0x31
18 10 bacc1850 804ef19f sr!SrWrite+0xaa
19 10 bacc1860 ba6eee9b nt!IopfCallDriver+0x31
1a 24 bacc1884 ba6ef06b fltMgr!FltpLegacyProcessingAfterPreCallbacksCompleted+0x20b
1b 38 bacc18bc 804ef19f fltMgr!FltpDispatch+0x11f
1c 10 bacc18cc b9e77084 nt!IopfCallDriver+0x31
1d 10 bacc18dc 804ef19f …
1e 10 bacc18ec ba6ef09e nt!IopfCallDriver+0x31
1f 2c bacc1918 804ef19f fltMgr!FltpDispatch+0x152
20 10 bacc1928 804f04e9 nt!IopfCallDriver+0x31
21 14 bacc193c 8050f08e nt!IoSynchronousPageWrite+0xaf
22 e8 bacc1a24 8050fab0 nt!MiFlushSectionInternal+0x3f8
23 3c bacc1a60 804e4554 nt!MmFlushSection+0x1f2
24 88 bacc1ae8 ba54c007 nt!CcFlushCache+0x3a0
25 c8 bacc1bb0 ba54c089 Ntfs!LfsFlushLfcb+0x227
26 24 bacc1bd4 ba5563db Ntfs!LfsFlushLbcb+0x81
27 28 bacc1bfc ba54ac60 Ntfs!LfsFlushToLsnPriv+0xf3
28 40 bacc1c3c ba556b36 Ntfs!LfsFlushToLsn+0x8e
29 34 bacc1c70 ba52c6dc Ntfs!NtfsCommitCurrentTransaction+0x215
2a 14 bacc1c84 ba54fc9d Ntfs!NtfsCompleteRequest+0x1d
2b 210 bacc1e94 ba54fd4d Ntfs!NtfsCommonCleanup+0x2604
2c 178 bacc200c 804ef19f Ntfs!NtfsFsdCleanup+0xcf
2d 10 bacc201c 80583953 nt!IopfCallDriver+0x31
2e 30 bacc204c 805bca18 nt!IopCloseFile+0x26b
2f 34 bacc2080 805bc341 nt!ObpDecrementHandleCount+0xd8
30 28 bacc20a8 805bc3df nt!ObpCloseHandleTableEntry+0x14d
31 48 bacc20f0 805bc517 nt!ObpCloseHandle+0x87
32 14 bacc2104 8054163c nt!NtClose+0x1d
33 0 bacc2104 804fff41 nt!KiFastCallEntry+0xfc
34 7c bacc2180 ba6dc5c6 nt!ZwClose+0x11
35 10 bacc2190 ba6d3827 OsrDsManager!..
36 24 bacc21b4 ba6d3525 OsrDsManager!..
37 10 bacc21c4 ba6c627e OsrDsManager!..
38 10 bacc21d4 ba65486e OsrDsManager!..
39 44 bacc2218 ba63751f OsrDmk!..
3a 64 bacc227c ba62aec6 OsrDmk!..
3b 78 bacc22f4 ba625feb OsrDmk!..
3c 7c bacc2370 ba625b26 OsrDmk!..
3d b0 bacc2420 ba6493e6 OsrDmk!..
3e 38 bacc2458 ba64921e OsrDmk!..
3f 18 bacc2470 804ef19f OsrDmk!..
40 10 bacc2480 ba6fb6c3 nt!IopfCallDriver+0x31
41 30 bacc24b0 804ef19f fltMgr!FltpCreate+0x1d9
42 10 bacc24c0 ba6eee9b nt!IopfCallDriver+0x31
43 24 bacc24e4 ba6fb754 fltMgr!FltpLegacyProcessingAfterPreCallbacksCompleted+0x20b
44 3c bacc2520 804ef19f fltMgr!FltpCreate+0x26a
45 10 bacc2530 b9e77084 nt!IopfCallDriver+0x31
46 10 bacc2540 804ef19f …
47 10 bacc2550 ba6eee9b nt!IopfCallDriver+0x31
48 24 bacc2574 ba6fb754 fltMgr!FltpLegacyProcessingAfterPreCallbacksCompleted+0x20b
49 3c bacc25b0 804ef19f fltMgr!FltpCreate+0x26a
4a 10 bacc25c0 805831fa nt!IopfCallDriver+0x31
4b e0 bacc26a0 805bf452 nt!IopParseDevice+0xa12
4c 78 bacc2718 805bb9de nt!ObpLookupObjectName+0x53c
4d 54 bacc276c 80576033 nt!ObOpenObjectByName+0xea
4e 7c bacc27e8 805769aa nt!IopCreateFile+0x407
4f 5c bacc2844 805790b4 nt!IoCreateFile+0x8e
50 40 bacc2884 8054163c nt!NtCreateFile+0x30
51 0 bacc2884 80500031 nt!KiFastCallEntry+0xfc
52 a4 bacc2928 b9e9b0fc nt!ZwCreateFile+0x11
53 84 bacc29ac b9ea3b9f …
54 78 bacc2a24 b9ea492b …
55 38 bacc2a5c b9e77773 …
56 88 bacc2ae4 b9e769dc …
57 4c bacc2b30 b9e77065 …
58 18 bacc2b48 804ef19f …
59 10 bacc2b58 ba6eee9b nt!IopfCallDriver+0x31
5a 24 bacc2b7c ba6fb754 fltMgr!FltpLegacyProcessingAfterPreCallbacksCompleted+0x20b
5b 3c bacc2bb8 804ef19f fltMgr!FltpCreate+0x26a
5c 10 bacc2bc8 805831fa nt!IopfCallDriver+0x31
5d e0 bacc2ca8 805bf452 nt!IopParseDevice+0xa12
5e 78 bacc2d20 805bb9de nt!ObpLookupObjectName+0x53c
5f 54 bacc2d74 80576033 nt!ObOpenObjectByName+0xea
60 7c bacc2df0 805769aa nt!IopCreateFile+0x407
61 5c bacc2e4c b9c72e6d nt!IoCreateFile+0x8e
WARNING: Stack unwind information not available. Following frames may be wrong.
62 60 bacc2eac b9c700b4 …
63 64 bacc2f10 b9c71090 …
64 84 bacc2f94 b9c715b2 …
65 64 bacc2ff8 b9c716f8 …
66 18 bacc3010 b9c71868 …
67 ac bacc30bc b9c6ecb1 …
68 14 bacc30d0 b9c720f2 …
69 28 bacc30f8 805d047f …
6a 144 bacc323c 805d10de nt!PspCreateThread+0x3a7
6b 78 bacc32b4 b9c7b5e9 nt!NtCreateThread+0xfc
6c b4 bacc3368 8054163c …
6d 0 bacc3368 00000000 nt!KiFastCallEntry+0xfc
6e 70 bacc33d8 bacc3758 0x0
6f 390 bacc3768 806b00df 0xbacc3758
70 9c bacc3804 80539acf nt!RtlCreateUserProcess+0x34f
71 5d8 bacc3ddc 8054611e nt!_except_handler2+0xb7
72 4 bacc3de0 8069790b nt!KiThreadStartup+0x16
73 4 bacc3de4 80088000 nt!CreateSystemRootLink+0x4a9
74 4 bacc3de8 00000000 0x80088000


> Also, doesn’t NTFS somehow increase the kernel stack to 64KB using
> some internal mechanism, making the above problem nonexistent? I’m
> speaking folklore here, but I read that somewhere.

NTFS does what everyone else does: it either posts to a worker thread
or calls KeExpandXxx (when available).

There is code in the O/S to give certain kernel threads larger stacks,
but that decision is buried inside the system service dispatcher and
has to do with threads that make windowing (GUI) calls.

-scott


Scott Noone
Consulting Associate
OSR Open Systems Resources, Inc.
http://www.osronline.com


You hide the innocent but you leave us in? I feel ashamed :-)

Unfortunately it’s a busy stack; without more details on what exactly
is going on it’s hard to say, though no single frame appears to be a
hog.

Is that your driver calling ZwCreateFile? You’d probably be better off using
IoCreateFileSpecifyDeviceObjectHint (or FltCreateFile, as the case may be)
to avoid the recursion in the I/O. Otherwise posting is an option, though
you’ll need to worry about security context in that case.
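
A rough sketch of the hinted create (illustrative; fileName and
NextLowerDeviceObject stand in for whatever the driver actually has,
and error handling is omitted):

OBJECT_ATTRIBUTES objectAttributes;
IO_STATUS_BLOCK   ioStatus;
HANDLE            fileHandle;
NTSTATUS          status;

InitializeObjectAttributes(&objectAttributes,
                           &fileName,       // UNICODE_STRING built elsewhere
                           OBJ_KERNEL_HANDLE | OBJ_CASE_INSENSITIVE,
                           NULL,
                           NULL);

// Targeting the create at a lower device object keeps the re-entrant
// I/O from climbing back up through the filters above you.
status = IoCreateFileSpecifyDeviceObjectHint(&fileHandle,
                                             GENERIC_READ | GENERIC_WRITE,
                                             &objectAttributes,
                                             &ioStatus,
                                             NULL,                  // AllocationSize
                                             FILE_ATTRIBUTE_NORMAL,
                                             0,                     // ShareAccess
                                             FILE_OPEN_IF,
                                             FILE_SYNCHRONOUS_IO_NONALERT,
                                             NULL,                  // EaBuffer
                                             0,                     // EaLength
                                             CreateFileTypeNone,
                                             NULL,                  // InternalParameters
                                             0,                     // Options
                                             NextLowerDeviceObject);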

Also, what version of the DMK is this? Looks like you’re still using a
version that utilizes WRITE_THROUGH opens, which cause a log flush on NTFS
in cleanup (unfortunate side effect). Using a later version would probably
also mitigate this. (Further DMK conversations are OT here though, so you
can follow me up privately or send email to the DMK bugs alias).

-scott


Scott Noone
Consulting Associate
OSR Open Systems Resources, Inc.
http://www.osronline.com
