What could an IFS be doing wrong to delay a close?

Hi all,

I have an IFS (not a filter) that I am testing using ifstest.exe from the
HCT.

I have run into an interesting problem where my FS is behaving differently
than NTFS, and I am not sure why.

The ifstest creates lots of files and directories, then, at the end, it tries
to delete the top-level directory that all of this stuff was created under.
It works properly on NTFS. Under my IFS, the create with the delete flag
fails because there is still a handle to a file down in the tree that has
not been closed (it has been cleaned up).

When I watch this on NTFS using FileMon, there is a Create, a Write, a
Cleanup, and a Close. On my FS, there are the same Create, Write, and
Cleanup, but the Close doesn't happen until much later, probably as the .exe
is exiting.

So something I am doing is causing the Cache Manager to hold onto that
reference much longer than it does under NTFS.

I am hoping that someone might have an idea what I could be doing wrong.

This is what I am doing in Cleanup:

    if (FlagOn(fileObject->Flags, FO_CACHE_SUPPORTED) &&
        (fcb->NonPaged->SectionObjectPointers.DataSectionObject != NULL))
    {
        CcFlushCache (&fcb->NonPaged->SectionObjectPointers, NULL, 0, NULL);

        MmFlushImageSection (&fcb->NonPaged->SectionObjectPointers,
                             MmFlushForWrite);

        //
        // Grab and release PagingIo to serialize ourselves with the lazy
        // writer.  This will ensure that all IO has completed on the cached
        // data and we will successfully tear away the cache section.
        //
        ExAcquireResourceExclusiveLite (fcb->Header.PagingIoResource, TRUE);
        ExReleaseResourceLite (fcb->Header.PagingIoResource);

        CcPurgeCacheSection (&fcb->NonPaged->SectionObjectPointers,
                             NULL,
                             0,
                             FALSE);
    }

Any suggestions would be much appreciated.

Thanks,

Don

Don,

Are you calling CcUninitializeCacheMap() for the FileObject in question?
This is a mandatory action for the last cached open handle. The call will
tear down the associated cache map, and hence the reference held by the
Cache Manager (documentation for the function can be found in the IFS Kit)…

  • Ian
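
For reference, a minimal sketch of where the call Ian describes would sit
relative to the fragment above. It reuses the fcb and fileObject names from
Don's code; everything else about the surrounding cleanup path is assumed,
so treat it as an illustration rather than a drop-in fix:

    CcFlushCache (&fcb->NonPaged->SectionObjectPointers, NULL, 0, NULL);

    // Serialize with the lazy writer before purging, as in the original
    // fragment.
    ExAcquireResourceExclusiveLite (fcb->Header.PagingIoResource, TRUE);
    ExReleaseResourceLite (fcb->Header.PagingIoResource);

    CcPurgeCacheSection (&fcb->NonPaged->SectionObjectPointers, NULL, 0, FALSE);

    //
    // Mandatory for the last cached open of the stream: this tears down the
    // cache map, and once the lazy writer is finished the Cache Manager
    // drops its reference to the file object, which is what lets the final
    // IRP_MJ_CLOSE arrive promptly.
    //
    CcUninitializeCacheMap (fileObject, NULL, NULL);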


Thanks Ian,

Unfortunately, that is not the problem. That call is the next thing in line
after the code that I included.

It is not inside the if test, though; it is always called. So I am still
looking for another problem.

Thanks,

Don


OK, so with the FileMon output that you have, are you observing paging
I/Os (lazy writes) from the system between the cleanup and the close? Also,
apart from the delayed close, is there anything else you can see that is
different in the operations occurring on NTFS versus your IFS?

There could be a large number of operations occurring in your
create/read/write/cleanup/close paths, so the cleanup itself may not be at
fault. Locks (ERESOURCEs) are always a good place to look as well…

  • Ian

Ian,

I have looked carefully over the FileMon output from both file systems.

There is an interesting difference, and one that I don’t understand.

The FileMon output from NTFS shows the following:

162 11:47:25 AM ifstest.exe:856 IRP_MJ_CREATE F:\opcreatg\fopenid\fopenid.c SUCCESS Options: Create Access: All
163 11:47:25 AM ifstest.exe:856 IRP_MJ_WRITE F:\opcreatg\fopenid\fopenid.c SUCCESS Offset: 0 Length: 1000
165 11:47:25 AM ifstest.exe:856 IRP_MJ_CLEANUP F:\opcreatg\fopenid\fopenid.c SUCCESS
166 11:47:25 AM ifstest.exe:856 IRP_MJ_CLOSE F:\opcreatg\fopenid\fopenid.c SUCCESS

skip lots of events…

211 11:47:25 AM ifstest.exe:856 IRP_MJ_CREATE F:\opcreatg\fopenid\fopenid.c SUCCESS Options: Open Access: All
212 11:47:25 AM ifstest.exe:856 IRP_MJ_SET_INFORMATION F:\opcreatg\fopenid\fopenid.c SUCCESS FileBasicInformation
213 11:47:25 AM ifstest.exe:856 IRP_MJ_CLEANUP F:\opcreatg\fopenid\fopenid.c SUCCESS
214 11:47:25 AM ifstest.exe:856 IRP_MJ_CLOSE F:\opcreatg\fopenid\fopenid.c SUCCESS

From my IFS, I get the following:

134 11:46:12 AM ifstest.exe:3160 IRP_MJ_CREATE D:\opcreatg\fopenid\fopenid.c SUCCESS Options: Create Access: All
135 11:46:12 AM ifstest.exe:3160 IRP_MJ_WRITE D:\opcreatg\fopenid\fopenid.c SUCCESS Offset: 0 Length: 1000
136 11:46:12 AM ifstest.exe:3160 IRP_MJ_CLEANUP D:\opcreatg\fopenid\fopenid.c SUCCESS
137 11:46:12 AM ifstest.exe:3160 IRP_MJ_WRITE* D:\opcreatg\fopenid\fopenid.c SUCCESS Offset: 0 Length: 4096

skip lots of other events…

155 11:46:12 AM System:4 IRP_MJ_SET_INFORMATION* D:\opcreatg\fopenid\fopenid.c SUCCESS Length: 4096
156 11:46:12 AM System:4 IRP_MJ_CLOSE D:\opcreatg\fopenid\fopenid.c SUCCESS

As you can see, there are two major differences. First, there is never a
lazy write or an unbuffered IO in the NTFS trace. Second, the NTFS trace
shows the close right after the cleanup; mine does not.

I don't understand how the NTFS case can even work without an unbuffered
write ever happening. This is really confusing to me now.

Thanks,

Don


Don,

I would bet that you are just not waiting around long enough to 'see' the
NTFS flush in FileMon. It looks as though all the user references to the
file in your trace have been released, so why are you not allowing the
DELETE_ON_CLOSE to proceed?

There may be an outstanding reference, but it is not a user handle
reference; it is a pointer reference. Therefore it would seem that you
should allow this delete to proceed, and then when you do get other
requests, you can fail them accordingly.

Pete

Kernel Drivers
Windows Filesystem and Device Driver Consulting
www.KernelDrivers.com
(303)546-0300
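
A rough sketch of the cleanup-time handling Pete is suggesting, loosely
following the FastFat pattern. The MYFS_* names, fields, and helper
routines are hypothetical, not anything from Don's driver:

    VOID
    MyfsCommonCleanup (
        IN PMYFS_FCB Fcb                      // hypothetical FCB type
        )
    {
        //
        // Count only user handles (cleanups), not file object references.
        //
        Fcb->UncleanCount -= 1;

        if ((Fcb->UncleanCount == 0) && Fcb->DeletePending) {

            //
            // No user handles remain, so remove the name from the parent
            // directory now.  Mm/Cc may still hold pointer references to
            // the file object and the FCB, but those do not block deleting
            // the directory tree; the FCB simply lingers until the final
            // IRP_MJ_CLOSE arrives.
            //
            MyfsRemoveNameFromParent (Fcb);   // hypothetical helper

            Fcb->Flags |= MYFS_FCB_STATE_DELETED;
        }

        //
        // Later requests against the cleaned-up file object can then be
        // failed with STATUS_FILE_CLOSED in the dispatch routines.
        //
    }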

Peter,

I guess you are right that I should allow the delete-on-close to proceed.
That is difficult for my implementation, though, which is why I was trying
to find out why I was getting the close so much later than NTFS does.

However, I don't understand your comment about not waiting around long
enough for NTFS to flush. The file has not just been cleaned up; it has
been closed. It isn't possible for flushes to happen after the close, is it?

Don

It is not possible, but that does not mean NTFS has not created a stream
file object under the covers. It is this file object, for which you won't
see an IRP_MJ_CREATE in the FileMon trace, that holds the outstanding
reference; more specifically, MM holds the reference on the file object. At
some later point in time, which is completely non-deterministic, you will
see the paging write for the data and then the close for that file object.

Pete

Kernel Drivers
Windows Filesystem and Device Driver Consulting
www.KernelDrivers.com
(303)546-0300
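
A minimal sketch of the close side of this, again with hypothetical MYFS_*
names: the late IRP_MJ_CLOSE from the System process in Don's trace is the
close of such a Cc/Mm-owned file object, and the FCB can only be torn down
once every referencing file object has been closed.

    VOID
    MyfsCommonClose (
        IN PMYFS_FCB Fcb                      // hypothetical FCB type
        )
    {
        //
        // Every file object that references the FCB, whether it came from a
        // user IRP_MJ_CREATE or was created under the covers for caching,
        // shows up here exactly once.
        //
        Fcb->OpenCount -= 1;

        if (Fcb->OpenCount == 0) {

            //
            // This can happen long after the user's last CloseHandle(); Mm
            // keeps its file object referenced until the lazy writer has
            // flushed the section and the segment has been torn down.
            //
            MyfsDeleteFcb (Fcb);              // hypothetical helper
        }
    }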

Don,

A couple of observations:

  • The CcUninitializeCacheMap call is not present in the fragment; I'm
    assuming you do this later for all cleanup calls anyway, but do make
    sure you are setting the final size to zero in the case of a deleted
    file (you don't want any of the data to remain in memory).
  • The logic won't cause any executable images to be purged, since you
    only check for the presence of the data section object. I doubt this is
    the problem in your case…
  • Calling CcFlushCache isn't really useful for deleted files - why write
    dirty data back to the disk when all you really want to do is truncate
    the file? It isn't incorrect, just inefficient.

Did you figure out which file wasn't being closed? Again, knowing whether
it is a data file or an executable helps focus the analysis.

Regards,

Tony

Tony Mason
Consulting Partner
OSR Open Systems Resources, Inc.
http://www.osr.com

Don,

Just saw this note as well. My guess is that you aren't setting the size of
the file to zero when you call CcUninitializeCacheMap. Check out FastFat:
it does this, and I suspect it shows an I/O pattern similar to the NTFS
trace in this case.

Regards,

Tony

Tony Mason
Consulting Partner
OSR Open Systems Resources, Inc.
http://www.osr.com
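
A sketch of the change Tony describes, modelled loosely on FastFat's
cleanup handling. Only the CcUninitializeCacheMap call and its parameters
are standard; the surrounding names are hypothetical:

    VOID
    MyfsTeardownCacheMap (
        IN PFILE_OBJECT FileObject,
        IN BOOLEAN FileWasDeleted
        )
    {
        LARGE_INTEGER zero;

        zero.QuadPart = 0;

        //
        // For a deleted file, pass a truncate size of zero so that neither
        // the Cache Manager nor Mm tries to keep (or write back) any of the
        // cached data.  With nothing left to flush, the section and the
        // stream file object backing it can be torn down promptly, and the
        // final IRP_MJ_CLOSE arrives soon after cleanup rather than at
        // process exit.
        //
        CcUninitializeCacheMap (FileObject,
                                FileWasDeleted ? &zero : NULL,
                                NULL);
    }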
