Hi,
I’ve been reading about oplocks and I became curious about something, maybe someone has some background on this subject.
Suppose a file is opened over SMB and its contents are mapped into memory by an application. Assuming the same file is also being accessed by another system, no oplock would be granted and every I/O would have to go over the wire.
Just to be sure: any change the application makes to the file through a memory write would eventually result in a page-sized paging write issued by the Modified Page Writer thread, right?
The main point is about reads. The first time the application reads the mapping locally, a page fault occurs and the VMM asks the FSD to fill the page by reading the file over the wire. That page remains in memory until the VMM decides to discard it, and during that time any memory read of the page does not cause another page fault. Am I missing anything?
But what happens if the same file is changed on the server?
I’m assuming the client will not be notified of this remote change (since no oplock was granted), and the local application will not see the change until the next page fault occurs. Is that correct?
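For concreteness, this is roughly the scenario I have in mind (a minimal sketch only; the UNC path is a placeholder and error handling is reduced to the essentials):

/* Map a file over SMB and read from the view. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Shared open, so no exclusive oplock is expected if another
     * client also has the file open. */
    HANDLE hFile = CreateFileA("\\\\server\\share\\test.dat",
                               GENERIC_READ | GENERIC_WRITE,
                               FILE_SHARE_READ | FILE_SHARE_WRITE,
                               NULL, OPEN_EXISTING,
                               FILE_ATTRIBUTE_NORMAL, NULL);
    if (hFile == INVALID_HANDLE_VALUE)
        return 1;

    HANDLE hMap = CreateFileMappingA(hFile, NULL, PAGE_READWRITE, 0, 0, NULL);
    if (hMap == NULL) {
        CloseHandle(hFile);
        return 1;
    }

    unsigned char *pView =
        (unsigned char *)MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, 0);
    if (pView == NULL) {
        CloseHandle(hMap);
        CloseHandle(hFile);
        return 1;
    }

    /* First touch: page fault, and the VMM asks the FSD (the SMB
     * redirector here) to read the page over the wire. */
    printf("first byte: 0x%02x\n", pView[0]);

    /* Subsequent reads of the same page are served from memory with no
     * further page fault, so a remote change is not seen until the page
     * is discarded and faulted in again. */
    printf("first byte again: 0x%02x\n", pView[0]);

    /* A write only dirties the page; it reaches the server later, when
     * the Modified Page Writer (or an explicit flush) writes it back. */
    pView[0] ^= 0xFF;

    UnmapViewOfFile(pView);
    CloseHandle(hMap);
    CloseHandle(hFile);
    return 0;
}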
Thanks in advance,
Fernando Roberto da Silva
DriverEntry Kernel Development
http://www.driverentry.com.br
I did some tests that seemed to confirm my original idea.
Actually, data coherency is lost not only in cases of remote (server-side) writes followed by local reads, but also with local writes followed by remote reads.
Because any local write to the mapping is sent to the media by the Modified Page Writer thread (which is asynchronous), the data is not guaranteed to be on the server right after the access.
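If the data has to be on the server at a known point, the obvious approach is to flush the view explicitly instead of waiting for the Modified Page Writer. A sketch (pView, cbDirty, and hFile are assumed to come from MapViewOfFile/CreateFile as in my earlier example):

#include <windows.h>

/* Force locally dirtied pages of a mapped remote file out to the server
 * at a known point. */
BOOL FlushMappedRange(void *pView, SIZE_T cbDirty, HANDLE hFile)
{
    /* Write the dirty pages of the view back through the redirector... */
    if (!FlushViewOfFile(pView, cbDirty))
        return FALSE;

    /* ...and ask that they be committed to stable storage as well. */
    return FlushFileBuffers(hFile);
}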
Any thoughts?
Thanks in advance,
Fernando Roberto da Silva
DriverEntry Kernel Development
http://www.driverentry.com.br
Just to close this thread:
The Microsoft documentation acknowledges the lack of data coherence for remote file mappings in the following statement:
“Although CreateFileMapping works with remote files, it does not keep them coherent. For example, if two computers both map a file as writable, and both change the same page, each computer only sees its own writes to the page. When the data gets updated on the disk, it is not merged.”
http://msdn.microsoft.com/en-us/library/windows/desktop/aa366537(v=vs.85).aspx
Thanks,
Fernando Roberto da Silva
DriverEntry Kernel Development
http://www.driverentry.com.br
I would tell my students that if they planned to map a file, it would be more robust if they opened it with no sharing permitted. In addition to the incoherence between mappings on different computers, coherence is not guaranteed with ReadFile and WriteFile. And file mappings do not honor file locks.
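A minimal sketch of what I mean (the path is just a placeholder):

#include <windows.h>

/* Open the file with no sharing before mapping it, so no other local or
 * remote opener can race with the mapping. */
HANDLE OpenForExclusiveMapping(void)
{
    return CreateFileA("\\\\server\\share\\test.dat",
                       GENERIC_READ | GENERIC_WRITE,
                       0,                 /* dwShareMode = 0: deny all sharing */
                       NULL, OPEN_EXISTING,
                       FILE_ATTRIBUTE_NORMAL, NULL);
}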
joe
A common mistake I have observed when people start working with file systems and file system filter drivers is that they want to “fix” broken applications.
The “memory mapped file coherence” issue is a common example of this - applications that allow shared access are inherently agreeing to allow this sort of incoherent behavior because that’s how the interface is defined. The file systems do a decent job of maintaining coherency these days, but it’s still not guaranteed. Thus, you cannot fix it in your FSD or filter, and attempts to do so generally lead to untenable solutions.
Another example I see regularly is “how do I guarantee ordering for asynchronous overlapped I/O operations”. The outcome here isn’t defined - it’s more like a “probability cloud” of outcomes. I’ve heard people describe fixing this in a filter by imposing synchronous ordering - which is great, except that it obliterates performance for the cases in which the application relies upon asynchronous behavior to obtain performance. Fortunately, nobody cares if you turn their SQL server or Exchange database into a dog, right?
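To illustrate (a sketch only; the path, offsets, and sizes are placeholders): two overlapped writes issued back to back, where nothing in the API defines which one the file system completes first:

#include <windows.h>
#include <string.h>

int main(void)
{
    HANDLE hFile = CreateFileA("test.dat", GENERIC_WRITE, 0, NULL,
                               CREATE_ALWAYS, FILE_FLAG_OVERLAPPED, NULL);
    if (hFile == INVALID_HANDLE_VALUE)
        return 1;

    char bufA[4096], bufB[4096];
    memset(bufA, 'A', sizeof(bufA));
    memset(bufB, 'B', sizeof(bufB));

    OVERLAPPED ovA = {0}, ovB = {0};
    ovA.Offset = 0;                       /* write A at offset 0    */
    ovB.Offset = sizeof(bufA);            /* write B at offset 4096 */
    ovA.hEvent = CreateEventA(NULL, TRUE, FALSE, NULL);
    ovB.hEvent = CreateEventA(NULL, TRUE, FALSE, NULL);

    /* Both calls may return ERROR_IO_PENDING; the order in which they
     * are issued does not dictate the order in which they complete. */
    WriteFile(hFile, bufA, sizeof(bufA), NULL, &ovA);
    WriteFile(hFile, bufB, sizeof(bufB), NULL, &ovB);

    /* Wait for each individually; either one could finish first. */
    DWORD cb;
    GetOverlappedResult(hFile, &ovA, &cb, TRUE);
    GetOverlappedResult(hFile, &ovB, &cb, TRUE);

    CloseHandle(ovA.hEvent);
    CloseHandle(ovB.hEvent);
    CloseHandle(hFile);
    return 0;
}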
Don’t worry about fixing software that’s broken.
Tony
OSR