Hi,
I am developing a transparent encryption/decryption mini-filter (a simple algorithm that doesn’t change the file size) using the swap-buffer technique; it works great, performing the data transformations on the non-cached and paging I/O paths. Now I have a requirement to provide raw or plain access to file data depending on the administered user/application policies.
Since the cache is always in the clear with the swap-buffer technique, is there any way to serve the raw data if the file is already cached?
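For context, here is roughly the shape of the decrypt-on-read half (a minimal sketch, not my actual code - XorTransform and the context layout are placeholders, and real code must also handle MDL-described buffers and DPC-level completion via FltDoCompletionProcessingWhenSafe, as the WDK SwapBuffers sample does):

#include <fltKernel.h>

#define SWAP_TAG 'pwsE'

typedef struct _SWAP_CONTEXT {
    PVOID SwappedBuffer;   // our buffer, handed to the FSD in pre-read
    PVOID OriginalBuffer;  // caller's buffer (or mapped system address of
                           // the caller's MDL), saved in pre-read
} SWAP_CONTEXT, *PSWAP_CONTEXT;

// Placeholder size-preserving cipher.
static VOID XorTransform(_Inout_updates_bytes_(Length) PUCHAR Buffer,
                         _In_ ULONG_PTR Length)
{
    for (ULONG_PTR i = 0; i < Length; i++) {
        Buffer[i] ^= 0x5A;
    }
}

FLT_POSTOP_CALLBACK_STATUS
PostReadSwapBuffers(
    _Inout_ PFLT_CALLBACK_DATA Data,
    _In_ PCFLT_RELATED_OBJECTS FltObjects,
    _In_opt_ PVOID CompletionContext,
    _In_ FLT_POST_OPERATION_FLAGS Flags)
{
    PSWAP_CONTEXT ctx = (PSWAP_CONTEXT)CompletionContext;
    ULONG_PTR bytesRead = Data->IoStatus.Information;

    if (!FlagOn(Flags, FLTFL_POST_OPERATION_DRAINING) &&
        NT_SUCCESS(Data->IoStatus.Status) && bytesRead != 0) {

        // Decrypt our buffer in place, then hand the clear text back to
        // the caller. The size never changes, so IoStatus.Information is
        // left alone.
        XorTransform((PUCHAR)ctx->SwappedBuffer, bytesRead);
        __try {
            RtlCopyMemory(ctx->OriginalBuffer, ctx->SwappedBuffer, bytesRead);
        } __except (EXCEPTION_EXECUTE_HANDLER) {
            Data->IoStatus.Status = GetExceptionCode();
            Data->IoStatus.Information = 0;
        }
    }

    // The I/O has completed; just release our buffer and context.
    FltFreePoolAlignedWithTag(FltObjects->Instance, ctx->SwappedBuffer, SWAP_TAG);
    ExFreePoolWithTag(ctx, SWAP_TAG);

    return FLT_POSTOP_FINISHED_PROCESSING;
}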
I have the following design decisions to choose between:
- Force non-cached I/O in PreCreate if the create/open comes from a “raw” user/application (see the sketch below this list). Consequences of this approach:
- Fast I/O is left unsupported.
- Align the cached I/O myself and copy back the file data at the appropriate offsets (so as not to break existing applications).
- I understand the interoperability limitation this has with EFS/compression (since they work on the NTFS cache).
- Layered FSD - develop a shadow file object model, where the mini-filter creates/owns a separate cache holding the “raw” file data.
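A minimal sketch of what the first option might look like (IsRawPolicyOpen is an assumed helper standing in for the policy lookup, not a real API):

#include <fltKernel.h>

// Assumed policy helper - a real one would consult the administered
// user/application policy, e.g. keyed by the requesting process.
static BOOLEAN IsRawPolicyOpen(_In_ PFLT_CALLBACK_DATA Data)
{
    UNREFERENCED_PARAMETER(Data);
    return FALSE;
}

FLT_PREOP_CALLBACK_STATUS
PreCreateForceNonCached(
    _Inout_ PFLT_CALLBACK_DATA Data,
    _In_ PCFLT_RELATED_OBJECTS FltObjects,
    _Flt_CompletionContext_Outptr_ PVOID *CompletionContext)
{
    UNREFERENCED_PARAMETER(FltObjects);
    *CompletionContext = NULL;

    if (IsRawPolicyOpen(Data)) {
        // Force the open non-cached so cached/fast I/O never serves this
        // caller from the clear-text cache. The create options live in the
        // low 24 bits of Parameters.Create.Options.
        SetFlag(Data->Iopb->Parameters.Create.Options,
                FILE_NO_INTERMEDIATE_BUFFERING);
        FltSetCallbackDataDirty(Data);
    }

    return FLT_PREOP_SUCCESS_NO_CALLBACK;
}

The catch (hence the alignment sub-point above) is that FILE_NO_INTERMEDIATE_BUFFERING imposes sector alignment on the application’s I/O, so the filter has to absorb the alignment itself - and it does nothing for memory-mapped access, as noted below.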
Based on your experience, which way (design decision) would you suggest pursuing? The layered FSD does look like a LOT of work.
In the first scenario some simple programs such as Notepad will not work - i.e., anything that uses memory mapping.
Yeah, #2 seems (and is) harder, but would anyone do it were it not required?
BTW, writing it is not even 1% of the work - testing it in various scenarios, making sure you do not corrupt the data (and that is the most important part, isn’t it?) and making sure you do not leak data (it is a security product, after all) is most of the work.
Dejan.
--
Kind regards, Dejan (MSN support: xxxxx@alfasp.com)
http://www.alfasp.com
File system audit, security and encryption kits.
Hi Dejan,
I am a bit confused by your response about memory-mapped support.
Memory mapping doesn’t work if you have to provide conflicting, multiple views of the data (a plain open followed by further plain opens is fine, and a raw-raw sequence is fine too). Say a user/application is given plain (decrypted) access to the data through memory mapping, so the cache is in the clear; any subsequent raw open then has to be denied because the two views of the data would conflict.
My point is that memory-mapped support will be limited for such conflicting opens even with a layered FSD or forced non-cached I/O. Does that make sense? I thought the two approaches were equally good in that respect, but the first one seems simpler to me.
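To make the conflict concrete, the check I have in mind under the forced non-cached approach would look something like this, in post-create (stream contexts are not reachable in pre-create); STREAM_CTX and the two helpers are invented names:

#include <fltKernel.h>

// Invented per-stream state: which kind of view is currently live.
typedef struct _STREAM_CTX {
    BOOLEAN PlainViewActive;  // a clear-text cache/section exists
    BOOLEAN RawViewActive;    // raw (ciphertext) openers exist
} STREAM_CTX, *PSTREAM_CTX;

// Assumed helpers; implementations elided.
NTSTATUS GetStreamCtx(_In_ PCFLT_RELATED_OBJECTS FltObjects,
                      _Outptr_ PSTREAM_CTX *Ctx);
BOOLEAN  IsRawPolicyOpen(_In_ PFLT_CALLBACK_DATA Data);

FLT_POSTOP_CALLBACK_STATUS
PostCreateDenyConflictingView(
    _Inout_ PFLT_CALLBACK_DATA Data,
    _In_ PCFLT_RELATED_OBJECTS FltObjects,
    _In_opt_ PVOID CompletionContext,
    _In_ FLT_POST_OPERATION_FLAGS Flags)
{
    PSTREAM_CTX ctx;

    UNREFERENCED_PARAMETER(CompletionContext);

    if (!NT_SUCCESS(Data->IoStatus.Status) ||
        FlagOn(Flags, FLTFL_POST_OPERATION_DRAINING)) {
        return FLT_POSTOP_FINISHED_PROCESSING;
    }

    if (NT_SUCCESS(GetStreamCtx(FltObjects, &ctx))) {
        BOOLEAN raw = IsRawPolicyOpen(Data);

        if ((raw && ctx->PlainViewActive) ||
            (!raw && ctx->RawViewActive)) {
            // The other kind of view is already live and cannot safely be
            // torn down here, so refuse this open.
            FltCancelFileOpen(FltObjects->Instance, FltObjects->FileObject);
            Data->IoStatus.Status = STATUS_SHARING_VIOLATION;
            Data->IoStatus.Information = 0;
        }
        FltReleaseContext((PFLT_CONTEXT)ctx);
    }

    return FLT_POSTOP_FINISHED_PROCESSING;
}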
Layered FSD works with memory mapped files correctly.
Dejan.
This is what I’ve been referring to as an instance of an “isolation filter” - we’re writing a series of articles about this in The NT Insider.
You must manage the cache in order to get split views. This is what NTFS does as well (different SOP - section object pointers - structures for different views of the same file), so there is certainly precedent for doing this.
If you need to make this work over the network, it will be MUCH more complicated - and it is virtually impossible to implement without breaking existing functionality, since multi-client shared write with layered encryption really isn’t viable without a server-side assist, or without the RDR team finally agreeing to let us see the oplock breaks so we can maintain cache coherency properly.
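To give a feel for what managing the cache means here, a very rough sketch of a per-view shadow FCB follows (invented names; the locking, size maintenance, purge/flush ordering and teardown that make this hard are all elided). Giving each view its own SECTION_OBJECT_POINTERS is what lets the Cache Manager and memory manager keep distinct data sections for the same on-disk stream:

#include <fltKernel.h>

// Per-view state; must live in nonpaged pool (it embeds ERESOURCEs).
typedef struct _SHADOW_FCB {
    FSRTL_ADVANCED_FCB_HEADER Header;          // must come first
    SECTION_OBJECT_POINTERS   SectionPointers; // private to this view
    FAST_MUTEX                AdvHdrMutex;
    ERESOURCE                 MainResource;
    ERESOURCE                 PagingIoResource;
} SHADOW_FCB, *PSHADOW_FCB;

VOID
AttachShadowView(
    _In_ PFILE_OBJECT ViewFileObject,  // the per-view file object
    _Inout_ PSHADOW_FCB Fcb,
    _In_ PCC_FILE_SIZES FileSizes,
    _In_ PCACHE_MANAGER_CALLBACKS Callbacks)
{
    // One-time FCB setup (node type codes and AllocationSize /
    // ValidDataLength maintenance elided).
    ExInitializeFastMutex(&Fcb->AdvHdrMutex);
    ExInitializeResourceLite(&Fcb->MainResource);
    ExInitializeResourceLite(&Fcb->PagingIoResource);
    FsRtlSetupAdvancedHeader(&Fcb->Header, &Fcb->AdvHdrMutex);
    Fcb->Header.Resource = &Fcb->MainResource;
    Fcb->Header.PagingIoResource = &Fcb->PagingIoResource;
    RtlZeroMemory(&Fcb->SectionPointers, sizeof(Fcb->SectionPointers));

    // Point the view's file object at this FCB and its private section
    // object pointers, then let the Cache Manager build a cache map on it.
    ViewFileObject->FsContext = &Fcb->Header;
    ViewFileObject->SectionObjectPointer = &Fcb->SectionPointers;

    CcInitializeCacheMap(ViewFileObject, FileSizes,
                         FALSE,       // no pin access
                         Callbacks,   // lazy-writer/read-ahead callbacks
                         Fcb);        // lazy-write context
}

Keeping the two views coherent - and purging/tearing them down in the right order - is where nearly all of the testing work Dejan mentioned earlier lives.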
Tony
OSR
Tony,
What exactly are you referring to regarding the network part? Different views on different clients or different views on the same client?
Dejan.
Dejan,
I’m thinking of the “shared spreadsheet” functionality presented by Excel, for example, or Access over an SMB connection, both of which rely upon “shared access” semantics.
The problem is that you have no way to know when another client has updated the data on the server; as such, you cannot invalidate the cache properly. Cache invalidation is the very reason that SMB uses the oplock protocol, but the SMB clients on Windows do not provide any visibility into this protocol and thus isolation drivers or layered file systems cannot properly implement shared access semantics on top of such clients. When RDR owns the cache, it can do the invalidation as necessary.
Of course, even for an encryption filter that does not control the cache (much like the OP described), there’s no way to know when the SMB client reverts to “write through to the server” mode (it’s possible SMB2 does this differently; I haven’t spent enough time studying it yet).
Tony
OSR
How does having an isolation filter vs. not having any filter at all change this (another client updating the data on the server)?
Dejan.
In the case where there is no filter, the SMB client properly invalidates the cache because it has visibility into the oplock protocol. When a filter controls the cache, it cannot properly invalidate it (or update it).
Tony
OSR
I see! Good point.
Dejan.
Hi Tony,
Thanks for sharing your thoughts on the layered FSD / “isolation” driver.
A few questions:
- How is a layered FSD different from an isolation driver? I am confused by these terms as used in the forums. It looks like both control the cache and have their respective shadow file objects.
- Is it possible to provide any references to a “shadow file object” implementation?
–Sridhar
Sorry, I missed this post. There is a very good explanation of shadow file objects here:
http://www.osronline.com/showthread.cfm?link=153641
–Rad
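The linked thread covers the model well; for a rough idea of one building block it rests on - the filter’s own underlying open, to which the shadow view’s non-cached I/O is forwarded - here is a simplified sketch (not code from that thread; error handling elided):

#include <fltKernel.h>

// Opens the on-disk stream below our own instance: the create is seen only
// by the filters beneath us and the file system, so we do not re-enter our
// own create processing.
NTSTATUS
OpenUnderlyingStream(
    _In_ PFLT_FILTER Filter,
    _In_ PFLT_INSTANCE Instance,
    _In_ PUNICODE_STRING FullPath,
    _Out_ PHANDLE Handle,
    _Outptr_ PFILE_OBJECT *FileObject)
{
    OBJECT_ATTRIBUTES oa;
    IO_STATUS_BLOCK iosb;

    InitializeObjectAttributes(&oa, FullPath,
                               OBJ_KERNEL_HANDLE | OBJ_CASE_INSENSITIVE,
                               NULL, NULL);

    // Non-cached: the shadow cache above holds the per-view data, so the
    // underlying open bypasses the FSD's cache entirely.
    return FltCreateFileEx2(Filter, Instance, Handle, FileObject,
                            FILE_READ_DATA | FILE_WRITE_DATA,
                            &oa, &iosb,
                            NULL,                        // AllocationSize
                            FILE_ATTRIBUTE_NORMAL,
                            FILE_SHARE_READ | FILE_SHARE_WRITE,
                            FILE_OPEN,
                            FILE_NON_DIRECTORY_FILE |
                                FILE_NO_INTERMEDIATE_BUFFERING,
                            NULL, 0,                     // no EAs
                            IO_IGNORE_SHARE_ACCESS_CHECK,
                            NULL);                       // create context
}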