Well, I thought it was a cardinal rule that for any read/write performed
on a file, a filesystem filter will ALWAYS see at least one paging I/O
or non-cached I/O request for that data. Not so, apparently. I was
playing around with files on a network share and wondered why I was
seeing hundreds of ordinary writes when I copied a file to the share,
but no paging-I/O writes. This is bad since I’m writing an encryption
filter driver, and so my filter MUST always handle a given read/write
once and ONLY once. Not zero times, not twice, but once. To accomplish
this, my filter handles paging/non-cached I/O requests only, as suggested
in previous postings to ntfsd.
Eventually I figured out that this happens because network redirectors
set an internal flag called SRVOPEN_FLAG_DONTUSE_WRITE_CACHEING when a
file is opened write-only, which causes the redirector to send every
write across the network as soon as it arrives, bypassing the NT cache.
This means any layered filter will see the ordinary write request, but
never a corresponding paging-I/O request. To get around this, my filter
now has to forcibly turn every write-only network file open into a
read/write open.
The reason I'm ranting in public is that it seems I can never know
a priori whether, for a given filesystem, I will see a read or write
request as both paging and non-paging I/O, or only as one or the other.
Instead, I
must special case my code for each filesystem and pray that I’ve covered
every scenario that can result in my not handling a read/write or
handling it twice. The only alternative I can come up with is to force
ALL reads/writes to a filtered file to be non-cached, with the
corresponding performance penalties. Is there an elegant way out of this
mess?
- Nicholas Ryan