Obtain dirty pages of section

Is there any way to obtain the dirty pages of a given section in kernel mode? For user mode there is GetWriteWatch, but ZwGetWriteWatch is not exported for system drivers.

I haven’t found any usage examples for CcSetLogHandleForFile / CcGetDirtyPages, so I don’t understand how they work or whether they would fit my needs.

For a memory-mapped file I would like to know which of its pages are dirty, so I can take some action even before receiving IRP_MJ_WRITE requests.

Any help appreciated, thx!

You can do those chores while processing a paging write, when the system provides an MDL describing the dirty pages.

There is an inherent race condition with dirty pages as they are processed through the Page Frame Number (PFN) database. It usually doesn’t make sense to do anything with dirty pages outside paging-write processing.

Regarding memory-mapped files: a physical page in the PFN database is not marked dirty immediately on write. First, the PTE that maps the page in the process address space is marked dirty. Some time later the dirty flag is copied from the process PTE to the page descriptor (PFN). Then the mapped page writer thread is invoked to gather and flush the dirty pages.

That means there might be dirty pages that will not be reported to a driver that fetches dirty pages from the PFN database. Actually, the only way for a driver to know that there are no dirty pages left for a memory-mapped file is to receive the last IRP_MJ_CLOSE for the data stream.

Well, for a memory-mapped file I want to ensure that the data I’m viewing is the original data, and I don’t want to wait for the modified page writer because I might want to access the data at any moment. I would need some way of doing this:

  1. Is the page dirty?
  2. If the page isn’t dirty, copy the data elsewhere.
  3. Check again that the page isn’t dirty, so I know the data I’ve copied is valid (I don’t know whether it’s possible to synchronize this copy so the second check can be avoided).

I doubt you can do this without modifying the kernel, particularly the Memory Manager.

The page might be marked clean in the PFN database but dirty in some process PTE. There is a time lag between the CPU marking a PTE dirty and the Memory Manager transferring the dirty flag to the page descriptor. The system uses the page descriptor to report the dirty flag to external callers.

I think traversing process page tables is out of the question, though it is possible: issue a DPC on each CPU, holding every CPU at DISPATCH_LEVEL while traversing the page tables, which prevents user mode from running any code that modifies them. Interestingly, the kernel-mode portion of the page tables can still be modified inside an ISR.

BTW, you can use FsRtlCreateSectionForDataScan to map a file into a user-mode service or into the low half of the System process address space, and get an actual view of the memory-mapped file to work with from a service or a system thread. The same can be achieved with a simple cached read. You won’t know which pages are dirty, but you will see the most recent data changes.

It would also be possible with a hypervisor, but it is out of scope.

I wouldn’t care too much about the PFN database: when you have a file mapped, all processes with that file open share the same physical pages for it (yes, when they are present), except those that mapped the file copy-on-write; but if those write, their page gets “decoupled” and I wouldn’t have to worry about it.

I want to do a copy-on-clean, but without proper Mm support (or resorting to bizarre solutions) it doesn’t look feasible.

If the file system, Cc, and Mm are so tightly coupled, it would be nice to have some more information from the Mm side (I’m not going to say “more control” because the MS guys would scream).

The PTE dirty flag is set by the CPU. The system cannot respond in real time to each page write, because for resident pages with valid PTEs the write is performed by the CPU without calling any operating system routine.

Yes, but I would be happy if I could call ZwGetWriteWatch.

Anyway, I’ll try another solution based on file reads, as that is the only time I can be sure the pages I read are “clean”.

Thx Slava!