Re[2]: A solution for file system encryption with per process access restrictions

For the most part, these flags are honored by the existing MSFT file
systems, with the exceptions noted previously. As for Tony’s point, he is
simply stating that an underlying file system is not *required* to honor
the flags; for correctness, only write-through behavior is needed. That
said, a file system would not even have to honor that latter requirement …
but again, this is a general statement.

Pete


Kernel Drivers
Windows File System and Device Driver Consulting
www.KernelDrivers.com
866.263.9295

------ Original Message ------
From: “Mike Boucher”
To: “Windows File Systems Devs Interest List”
Sent: 10/11/2016 11:39:16 AM
Subject: Re: [ntfsd] A solution for file system encryption with per
process access restrictions

> >The point is that the underlying FSD is not required to honor the
>FO_NO_INTERMEDIATE_BUFFERING
>
>It seems like there ought to be a way to force it to honor
>FO_NO_INTERMEDIATE_BUFFERING if only so that the test group at
>Microsoft can verify that a particular test is taking a code path that
>they want to test. To be clear, I’m absolutely positive that Tony and
>the others are correct and that it is not required to honor any
>particular flag. I’m just intrigued by the testing problem that this
>approach creates, and I wonder how they solve it.
>
>On Tue, Oct 11, 2016 at 9:59 AM, Tony Mason wrote:
>>
>>The point is that the underlying FSD is not required to honor the
>>FO_NO_INTERMEDIATE_BUFFERING or IRP_NOCACHE if it does not meet the
>>file system’s needs or requirements. From a correctness perspective,
>>all that is really required is write-through behavior.
>>
>>In my experience the real issue is when people try to do this on top
>>of the network, where the caching behavior changes dynamically based
>>upon the state of communications between the client and the server.
>>
>>Tony
>>OSR
>>
>>
>>—
>>NTFSD is sponsored by OSR
>>
>>
>>MONTHLY seminars on crash dump analysis, WDF, Windows internals and
>>software drivers!
>>Details at http:
>>
>>To unsubscribe, visit the List Server section of OSR Online at
>>http:
>

Thanks for the answer, Slava.
I don’t understand some points clearly, though.
The article says multiple views are needed both to prevent unauthorized access and to provide the proper data view for memory-mapped I/O,
while you say they are only needed for the proper view, and unauthorized access can be blocked in preCreate.

From what you say, I wonder: why do we need multiple views at all, then?
We could just store plain/decrypted data in the cache and prevent unauthorized access in preCreate. Since data decryption is done in non-cached I/O, memory-mapped I/O would need no special treatment.
Am I wrong?

Our current encryption toolkit (FESF) is based upon an Isolation Filter (our previous toolkit was as well, but this time it’s a separable filter that offers arbitrary view management for other uses).

One common usage model is: “when someone attaches a document to an e-mail, if it is encrypted, *leave it encrypted*.” So you edit a document (decrypted, in the cache) and then attach it to your e-mail (encrypted, in a different view). The recipient can decrypt it if they have the tools and keys. If they don’t, they have a blob of randomized data. That’s a clear use for “multiple views”.

Similar use cases: Explorer access to an encrypted file is given encrypted (“raw”) access, so a copy/paste of the file leaves it in its encrypted state. When the SMB server reads a file, it normally gets an encrypted copy of the data, so the data is sent encrypted *on the wire*. That’s not required (you can configure it to send the decrypted data), but it’s the most common usage model.

There are plenty of cases where you want to allow both encrypted and decrypted access simultaneously.

Tony
OSR

Security and access rights must be checked when an object (file, process, thread) handle is created. Trying to control access in the read/write/memory-mapping paths is incorrect and results in a bad user experience.

Memory mapping doesn’t require special processing, since it is not possible to map a file for read/write without a file handle opened for read/write access.

If your driver provides different views for the same path (i.e., FILE_OBJECT->FsContext differs between the encrypted and decrypted views), then you don’t need any special processing for memory-mapped files.

Thanks for the replies, Slava and Tony.