Hi,
I am writing a minifilter driver that needs to intercept write operations on files. I have created a stream context to track the file from the create operation until the last close. This works fine when an application opens the file through the normal file I/O APIs, but when the file is opened and mapped into memory, I only receive the last close when the file is closed and miss the writes done through the mapping. Since I received the last close, I delete the stream context. As a result, even when the lazy writer later issues disk I/O, I cannot intercept it because I have lost the context.
Is it possible to hook paging I/O from a minifilter driver so that I can track updates made to a memory-mapped file?
Track until IRP_MJ_CLOSE, not merely IRP_MJ_CLEANUP. Also, make sure that
you don't use the FLTFL_OPERATION_REGISTRATION_SKIP_PAGING_IO flag when
registering your write callback.
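For reference, the registration would look roughly like the sketch below. It is shown with minimal stand-in definitions in place of the WDK's fltKernel.h so it is self-contained; the constants and struct layout are illustrative only, and the real registration of course points PreOperation/PostOperation at your callbacks:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-ins for WDK definitions (fltKernel.h) so this sketch compiles
   outside the WDK; values are illustrative only. */
#define IRP_MJ_WRITE            0x04
#define IRP_MJ_CLEANUP          0x12
#define IRP_MJ_CLOSE            0x02
#define IRP_MJ_OPERATION_END    0x80
#define FLTFL_OPERATION_REGISTRATION_SKIP_PAGING_IO 0x00000001

typedef struct _FLT_OPERATION_REGISTRATION {
    unsigned char MajorFunction;
    unsigned long Flags;     /* must NOT contain ..._SKIP_PAGING_IO */
    void *PreOperation;
    void *PostOperation;
} FLT_OPERATION_REGISTRATION;

/* Register for write with Flags == 0 so paging I/O (including the lazy
   writer's writes to a memory-mapped file) reaches the callback, and
   watch IRP_MJ_CLOSE, not just IRP_MJ_CLEANUP. */
static const FLT_OPERATION_REGISTRATION Callbacks[] = {
    { IRP_MJ_WRITE,   0, NULL, NULL },
    { IRP_MJ_CLEANUP, 0, NULL, NULL },
    { IRP_MJ_CLOSE,   0, NULL, NULL },
    { IRP_MJ_OPERATION_END, 0, NULL, NULL }
};
```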
Regards,
Ayush Gupta
AI Consulting
-----Original Message-----
From: xxxxx@lists.osr.com
[mailto:xxxxx@lists.osr.com] On Behalf Of
xxxxx@hotmail.com
Sent: Monday, February 01, 2010 3:56 PM
To: Windows File Systems Devs Interest List
Subject: [ntfsd] Filter write on memory mapped file
NTFSD is sponsored by OSR
For our schedule of debugging and file system seminars
(including our new fs mini-filter seminar) visit:
http://www.osr.com/seminars
To unsubscribe, visit the List Server section of OSR Online at
http://www.osronline.com/page.cfm?name=ListServer
Thanks for the reply. I keep a reference count in the stream context from the time the stream is created and decrement it on each cleanup. Once the count reaches 0, I delete the context in IRP_MJ_CLOSE. But here is what I observe: when I open the file, the context is created; when I map the file, modify the data, and close the handle, I get the cleanup and the last close, but not the write operation. And I am not using FLTFL_OPERATION_REGISTRATION_SKIP_PAGING_IO during my filter registration.
Why are you "deleting" the stream context? Contexts are managed quite
efficiently by the filter manager based on their reference counts.
My best guess is that there is a previously opened FO which Cc and Mm are
using for their magic. Since you may not have tracked that FO, and are simply
creating a context on each create and then deleting it explicitly based on
your own reference counting, you may be losing writes that come much later on
the original FO, because you "explicitly" deleted the stream context.
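The failure mode can be modeled in a few lines of plain user-mode C. This is a toy illustration of the reference-counting idea only, not WDK code; all names are invented:

```c
#include <assert.h>

/* Toy model: the stream context should only go away when the LAST
   reference does -- and Mm/Cc may hold a previously opened FILE_OBJECT
   (and thus a context reference) long after the application's handle
   is closed. */
typedef struct {
    int RefCount;
    int TornDown;
} TOY_STREAM_CONTEXT;

static void CtxAddRef(TOY_STREAM_CONTEXT *Ctx) { Ctx->RefCount++; }

static void CtxRelease(TOY_STREAM_CONTEXT *Ctx) {
    if (--Ctx->RefCount == 0)
        Ctx->TornDown = 1;   /* analogous to the filter manager freeing it */
}

static int DemoLazyWriteSurvives(void) {
    TOY_STREAM_CONTEXT Ctx = { 0, 0 };
    CtxAddRef(&Ctx);          /* the application's FILE_OBJECT            */
    CtxAddRef(&Ctx);          /* Mm's section FILE_OBJECT                 */
    CtxRelease(&Ctx);         /* app closes its handle: context survives  */
    int Survived = !Ctx.TornDown;  /* the lazy write can still find it    */
    CtxRelease(&Ctx);         /* Mm's last IRP_MJ_CLOSE: now it goes away */
    return Survived && Ctx.TornDown;
}
```

Deleting the context yourself at the application's close is equivalent to skipping the second reference above, which is exactly why the later paging write finds no context.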
Regards,
Ayush Gupta
AI Consulting
Thanks. When I stopped deleting the context, I did receive the context during the lazy writer's write operation.
But I need to track the last close operation and notify the application that the last close occurred, along with whether the file was modified. Will the context not be deleted after the last close? I went through the CTX sample and found that a stream's context is deleted only when the driver unloads, not after the last reference to the stream goes away. (For example, I run CTX, open a text file from Xapp, modify the data, and close it; the context is created but deleted only when the driver unloads.) In that case, how efficiently can I rely on the filter manager managing the context? Won't keeping the context around even after all operations on the stream have completed be an overhead on the kernel?
My current implementation keeps a count variable in the stream context created during CREATE, increments it on each Create, decrements it on each Cleanup, and checks in Close whether the variable is 0. If it is, I delete the stream context so that a new context is created on the next Create. Hence I miss the context during the lazy write.
I have also found that sometimes after the lazy write the close is not called at all. In that case, how do I track the close operation and notify about the changes?
Hello Abhiman,
Could you please explain why you need to know when the last IRP_MJ_CLOSE happened for the stream? Also, please note that the cache manager (and, for that matter, any component outside the file system) needs to use a FILE_OBJECT to issue requests to the file system, so you should never see a write (paging or otherwise) after the last IRP_MJ_CLOSE on the stream. Such components don't need a handle (so you might see operations after IRP_MJ_CLEANUP), but they do need the object, so there should be at least one IRP_MJ_CLOSE after that.
Filter manager’s stream context is tightly coupled with the file system’s SCB. It will be automatically torn down when the file system tears down its SCB (or, as you have noticed, when the minifilter unloads). There are two things that delay this.
First, some component (as we discussed above; usually Mm, but it could be anything, including a filter) may hold on to a FILE_OBJECT for a while after the user has actually closed it. Since such components use the FILE_OBJECT and not a handle, this can introduce a delay between IRP_MJ_CLEANUP and IRP_MJ_CLOSE… so the final IRP_MJ_CLOSE might arrive a long time after the user has closed their handle.
Then there is the file system itself. From a file system's perspective, things look like this (at a very high level):
- On receiving IRP_MJ_CREATE -> Figure out which file or stream the create is talking about and see if you already have an SCB for it. If one exists, increment some internal reference count and return it in FsContext. Else, allocate a new one, initialize it and return it in FsContext.
- On receiving IRP_MJ_CLOSE -> Figure out if this is the last reference to the SCB that was given to IO manager (basically, if this is the last reference to the SCB outside of the file system). If so then the stream could be torn down right away. However, some file systems implement a cache (because allocating and initializing the SCB can be expensive) and instead of tearing the SCB down they might simply add it to the cache. In that case the SCB can be reused if a new IRP_MJ_CREATE is received for the same stream (incidentally, this is why it is important to always leave the StreamContext in a consistent state, even if bookkeeping indicates that this was the last IRP_MJ_CLOSE for the stream). If no IRP_MJ_CREATE is received for a while then the SCB can be removed from the cache and torn down (and this is when the minifilter stream context cleanup routine will be called).
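The create/close flow above can be sketched as a toy user-mode model. This is plain C with a one-slot "cache" and invented names, purely to illustrate why the stream context outlives the last external close:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* One-slot "SCB cache": on close of the last external reference the
   SCB is parked in the cache instead of being freed, and a later
   create for the same stream reuses it (so the attached stream
   context survives across the close/create gap). */
typedef struct {
    char Name[64];
    int  ExternalRefs;   /* references held outside the file system */
} TOY_SCB;

static TOY_SCB *CachedScb = NULL;

static TOY_SCB *ScbCreate(const char *Name) {         /* IRP_MJ_CREATE */
    if (CachedScb && strcmp(CachedScb->Name, Name) == 0) {
        TOY_SCB *Scb = CachedScb;                     /* cache hit: reuse */
        CachedScb = NULL;
        Scb->ExternalRefs++;
        return Scb;
    }
    TOY_SCB *Scb = calloc(1, sizeof(*Scb));           /* cache miss: new SCB */
    strncpy(Scb->Name, Name, sizeof(Scb->Name) - 1);
    Scb->ExternalRefs = 1;
    return Scb;
}

static void ScbClose(TOY_SCB *Scb) {                  /* IRP_MJ_CLOSE */
    if (--Scb->ExternalRefs == 0)
        CachedScb = Scb;   /* park it; context teardown is deferred */
}
```

In the real file system the cached SCB is eventually evicted (memory pressure, dismount, timeout), and only then does the minifilter's stream context cleanup routine run.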
Regards,
Alex.
This posting is provided “AS IS” with no warranties, and confers no rights.
Thanks for the valuable information. I am working on a backup application. The driver needs to notify the application about the I/O performed on a file by several applications, but only at the very end. For example, if I open test.xxx from two applications, both with write access, and both write to the file, I need to inform the backup application only when both applications have closed their handles on the file.
I also need to track memory-mapped files. With your inputs I ran an experiment on 32-bit and 64-bit machines: I created a memory-mapped file on each and ran Filemon. On the 32-bit machine I saw the write after around 5 minutes and a close after around 1 hour 20 minutes. On the 64-bit machine I saw the write after around 4 minutes but did not see the close even after waiting more than 10 hours. Since the write is arbitrary because of the lazy writer, is the close arbitrary as well? Who issues the last close operation in this case?
I have one more question (maybe a ridiculous one). What happens when the filter driver allocates a huge chunk of data in the stream context (perhaps needed for bookkeeping)? If the driver creates a stream context for every file, won't that be a performance issue for the OS? Is there any limit on the size of a driver's context structure?
> Since the write is arbitrary because of the lazy writer, is the close also
> arbitrary? Who calls the last close operation in this case?
Probably it is only called due to memory pressure or volume dismount.
–
Maxim S. Shatskih
Windows DDK MVP
xxxxx@storagecraft.com
http://www.storagecraft.com
Thanks a lot. I now understand the correct implementation of contexts. Since I need to inform the user application about changes to the data, can I rely on the stream handle context that is created when the FileObject has write access?
In my experiment I create a stream context when the first create happens, and create the stream handle context only if the desired access has write attributes. Then for all I/O operations (except Write) I check whether the stream handle context is present and update the bookkeeping info in the stream context. Even without this write-access check for creating the stream handle context, I found that no stream handle writes data to the file except the one that had write access during create. This logic works with WordPad, where the file is created with write access and receives a cleanup and then a close. But with Notepad this stream handle does not get the close (as Notepad uses memory mapping). In that case, can I conclude that the cleanup on this stream handle came from the "only one" handle that modified the file's data, and notify the application that the data has changed? Will there be any modification to the file after cleanup and before close if the file is not opened by any other application?
Is there any other way the file's data or attributes can be modified that doesn't involve a stream handle being created?
I found that CreateFileMapping doesn't involve creation of a stream handle. In that case, how can I track file-update notification from create? I know that when the write happens I can use the stream context, but is there any way to track it using the stream handle context?
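The write-access test described above might look like the sketch below. The access-mask bit values match the Windows SDK's winnt.h, but the helper name is invented, and in a real pre-create callback the mask would come from Data->Iopb->Parameters.Create.SecurityContext->DesiredAccess:

```c
#include <assert.h>

/* Stand-in definitions for the Windows access-mask bits (winnt.h). */
typedef unsigned long ACCESS_MASK;
#define FILE_READ_DATA     0x0001
#define FILE_WRITE_DATA    0x0002
#define FILE_APPEND_DATA   0x0004
#define GENERIC_WRITE      0x40000000UL
#define MAXIMUM_ALLOWED    0x02000000UL

/* Create a stream HANDLE context only when the caller could modify the
   data through this handle. Note this is not sufficient for sections:
   writes via CreateFileMapping arrive as paging I/O on an existing
   FILE_OBJECT, so modification tracking still needs the per-stream
   context as well. */
static int WantsWriteAccess(ACCESS_MASK DesiredAccess) {
    return (DesiredAccess &
            (FILE_WRITE_DATA | FILE_APPEND_DATA |
             GENERIC_WRITE | MAXIMUM_ALLOWED)) != 0;
}
```

MAXIMUM_ALLOWED and GENERIC_WRITE are included because either can grant write access even when FILE_WRITE_DATA is not explicitly requested.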
> Will there be any modification on the file after cleanup and before close if
> the file is not opened by any other application?
For a memory-mapped file - yes.
–
Maxim S. Shatskih
Windows DDK MVP
xxxxx@storagecraft.com
http://www.storagecraft.com
I have the same problem, but in sfilter… with a .txt file.