I have the same issue discussed in the old thread http://www.osronline.com/showThread.cfm?link=177359.
But my problem is that even after I call CcPurgeCacheSection in each IRP_MJ_CLEANUP path, I still get IRP_MJ_CLOSE later from the system lazy writer, not from the same user process right after CLEANUP. I get this late IRP_MJ_CLOSE whenever caching is enabled (CcInitializeCacheMap is called in either the cached READ or WRITE path). There is no mapped view created in my test cases.
Does anybody know the reason?
Thanks in advance
First, are the CcPurgeCacheSection calls successful? Second, why do you
care when the close occurs? IRP_MJ_CLOSE will not be in the context of
the process, that is a given, but again, why do you care?
Don Burn (MVP, Windows DKD)
Windows Filesystem and Driver Consulting
Website: http://www.windrvr.com
Blog: http://msmvps.com/blogs/WinDrvr
Yes, the CcPurgeCacheSection call is successful. When I traced into the kernel, I found that CcPurgeCacheSection always seems to queue the SharedCacheMap deletion to the lazy writer (by calling CcScheduleLazyWriteScan).
The reason I care about when CLOSE occurs is that this particular FSD keeps some information in the FCB which, if not released as soon as possible, can lead to a failed CREATE. Sure, the design is flawed, but for now I just want to make it behave the same as NTFS or FAT for normal cached files.
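For context, the cleanup path being described might look roughly like the following. This is only a sketch, assuming the usual WDK declarations from ntifs.h; `MY_FCB` and its field names are hypothetical:

```c
/* Sketch of an IRP_MJ_CLEANUP path that tears down the cache map
 * eagerly. MY_FCB and its fields are hypothetical names. */
VOID MyCleanupCache(PFILE_OBJECT FileObject, PMY_FCB Fcb)
{
    /* Flush dirty data first so the purge does not discard writes. */
    CcFlushCache(&Fcb->SectionObjectPointers, NULL, 0, NULL);

    /* Throw away the cached pages for the whole stream. */
    CcPurgeCacheSection(&Fcb->SectionObjectPointers, NULL, 0, FALSE);

    /* Disassociate the file object from the cache. Note that even
     * when this succeeds, the shared cache map teardown (and hence
     * the final dereference that produces IRP_MJ_CLOSE) is posted to
     * the lazy writer rather than done inline. */
    CcUninitializeCacheMap(FileObject, NULL, NULL);
}
```

Even with all three calls succeeding, the final dereference is still deferred, which matches the behavior described in the thread.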
Thanks
> But my problem is that even after I put CcPurgeCacheSection in each
> MJ_CLEANUP path, I still got IRP_MJ_CLOSE later from the system lazy
> writer, not from the same user process right after CLEANUP. I got this
> late MJ_CLOSE whenever caching is enabled (CcInitializeCacheMap is
> called in either the cached READ or WRITE path). There's no mapped view
> created in my test cases.
>
> Anybody knows the reason?
Yup. The CcPurge doesn't do the work inline; the final teardown is posted
to the lazy writer (as you found out).
Why does this matter to you?
Well, because of the content of the FCB, I want my FSD to behave the same as NTFS when tearing down the cache map and data section object.
I just cannot figure out why CLOSE comes immediately after CLEANUP in the same user process for NTFS, but not for my FSD. Could it be reference-count related? I compared the pointer count of the file object, the OpenCount of the shared cache map, the control area, and so on, but still cannot tell why there is a difference.
CLOSE does not come immediately after CLEANUP in most cases on NTFS.
How are you making this determination?
Don Burn (MVP, Windows DKD)
Windows Filesystem and Driver Consulting
Website: http://www.windrvr.com
Blog: http://msmvps.com/blogs/WinDrvr
> Yup. The CcPurge doesn't do the work inline; the final teardown is posted
> to the lazy writer (as you found out).
Then CcWaitForCurrentLazyWriterActivity is a solution.
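A sketch of that approach, assuming the WDK declaration from ntifs.h (the surrounding function and variable names are hypothetical):

```c
/* Sketch: after uninitializing the cache map, block until the lazy
 * writer has drained the work it has already queued, so the posted
 * cache-map teardown (and the resulting IRP_MJ_CLOSE) completes
 * before this thread proceeds. Must be called at PASSIVE_LEVEL and
 * without holding resources the lazy writer may need, or it can
 * deadlock. */
NTSTATUS MyWaitForTeardown(PFILE_OBJECT FileObject)
{
    CcUninitializeCacheMap(FileObject, NULL, NULL);

    /* Returns once all lazy-writer work queued before this call,
     * including the deferred shared-cache-map deletion, has run. */
    return CcWaitForCurrentLazyWriterActivity();
}
```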
–
Maxim S. Shatskih
Windows DDK MVP
xxxxx@storagecraft.com
http://www.storagecraft.com
> I just cannot figure out why CLOSE comes immediately after CLEANUP
> in the same user process for NTFS, but not for my FSD.
In general, this is because NTFS uses its own file object to back the cache.
Check this by looking at the file object used for paging reads and
paging writes...
I've not looked at it, but I'd guess that the first call to
CcInitializeCacheMap that NTFS makes is with a FileObject that it has
created itself.
Of course, this means that there is another close which happens after the
last user file object is closed, so...
> […] I want my FSD to behave the same as NTFS when tearing down the
> cache map and data section object.
As far as timing is concerned, it will be the same; you'll just see extra
FileObjects (and extra closes) happening on NTFS...
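The pattern described above, where the FSD backs the cache with a stream file object of its own rather than with the user's file object, might be sketched like this. This assumes the WDK declarations from ntifs.h; `MY_FCB` and `MyCacheManagerCallbacks` are hypothetical names:

```c
/* Sketch: back the cache with a stream file object owned by the FSD,
 * the technique attributed to NTFS in the discussion above. */
VOID MyInitStreamBackedCache(PMY_FCB Fcb, PDEVICE_OBJECT DeviceObject)
{
    /* Create a file object that no user handle refers to. */
    PFILE_OBJECT streamFo = IoCreateStreamFileObjectLite(NULL, DeviceObject);

    /* Wire it to the same stream state as the user file objects. */
    streamFo->SectionObjectPointer = &Fcb->SectionObjectPointers;
    streamFo->FsContext = Fcb;

    /* The cache is now tied to streamFo, not to any user's file
     * object. A user handle's CLEANUP/CLOSE can complete while
     * streamFo keeps the section alive; a separate, later CLOSE
     * arrives when the FSD dereferences streamFo. */
    CcInitializeCacheMap(streamFo,
                         (PCC_FILE_SIZES)&Fcb->AllocationSize,
                         FALSE,                     /* no pin access */
                         &MyCacheManagerCallbacks,
                         Fcb);                      /* lazy-write context */
}
```

The design trade-off is the one noted above: you decouple user-handle teardown from cache teardown, at the cost of an extra file object and an extra close per stream.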