Detecting file deletion from a mini-filter is a horrible mess and,
ultimately, cannot be done with 100% reliability.
Let’s look at FAT first (ergo, let’s pretend streams don’t exist.) If
you open the file with FILE_DELETE_ON_CLOSE the attribute to delete the
file is set in the CCB.
{
    PCCB Ccb;

    Ccb = (PCCB)FileObject->FsContext2;

    //
    // Mark the DeleteOnClose bit if the operation was successful.
    //

    if ( DeleteOnClose ) {

        SetFlag( Ccb->Flags, CCB_FLAG_DELETE_ON_CLOSE );
    }
}
If you open the file for DELETE access and then set (or clear) the
disposition, the delete-on-close bit is set (or cleared) in the FCB:
//
// At this point either we have a file or an empty directory
// so we know the delete can proceed.
//
SetFlag( Fcb->FcbState, FCB_STATE_DELETE_ON_CLOSE );
FileObject->DeletePending = TRUE;
The problem with this behavior is that it creates a situation in which
the filter can never know if the file is really being deleted, since it
can be “undeleted” right up to the last moment (it could even be
“undeleted” by a filter below you.) However, this implementation is
peculiar in that if you open the file with FILE_DELETE_ON_CLOSE, an
application can never directly reverse that decision (because setting or
clearing the disposition affects the FCB, not the CCB.) The CCB bit is
passed to the FCB on cleanup:
//
// Do a check here if this was a DELETE_ON_CLOSE FileObject, and
// set the Fcb flag appropriately.
//

if (FlagOn(Ccb->Flags, CCB_FLAG_DELETE_ON_CLOSE)) {

    ASSERT( NodeType(Fcb) != FAT_NTC_ROOT_DCB );

    SetFlag(Fcb->FcbState, FCB_STATE_DELETE_ON_CLOSE);
}
Of course, this leads to really crazy scenarios: an application opens
the file (normally) and specifies shared delete semantics. It then
opens the file (delete on close) in a consistent sharing manner.
Depending on the order in which the two handles are closed, you may or
may not see the file “stop being deleted”.
I logged a bug against this about five years ago for several reasons:
(1) it’s not the same as what happens in NTFS; (2) it’s almost
impossible to explain to people why they see erratic behavior based upon
“order in which the handle gets closed”; (3) it makes detecting deletion
(and handling it) in a filter driver even more challenging (well,
probably not, but at first blush it looks that way.)
No doubt to spite me, the NTFS semantics (which were rational) have been
changed in recent releases to match the FAT semantics (so we now have
two irrational, “depends on the order of close” and “screw you filter
drivers” approaches to the world.)
Prior to the “let’s make NTFS behave more erratically and
inconsistently” change, NTFS used the FCB for the “delete on close”
option so an application could “change its mind” before the handle was
closed in either case. Thus, it had a consistent and understandable
behavior pattern…
Ah, but I started off by saying “let’s ignore streams.” Now let’s stop
ignoring them.
If I open a file without a stream specified (on NTFS, your mileage may
vary with UDFS and RDR, both of which also support streams,) I am
opening the “default data stream.” It’s the same as appending
“::$DATA” to the end of the file name. It has special behavior. As a
filter driver, though, I’m watching stream contexts (Vista does give you
file contexts and you can build them yourself - sort of - in prior
versions.) So here’s the weird part: opening the default data stream in
specific modes may delete the alternate data streams.
Here’s a specific example: if I open the “:Zone_Identifier:$DATA” stream
normally, and then I open the “::$DATA” stream with Overwrite
disposition, when I close the zone identifier stream, it will be
deleted. If I instead opened with supersede, the open would fail.
Thus, the stream could be deleted without anyone ever explicitly
deleting it…
But maybe you don’t care about streams being deleted. Maybe you only
care about files being deleted. I’m not sure what the behavior of
various dispositions would be when combined with ADS access, but I’ve
always assumed they will defy comprehension.
What does this all mean? It means that you cannot reliably know when
files (or streams) are deleted. A filter below you could change the
behavior. The order in which handles get closed may mean that one time
the file has been undeleted and another time it has not. If you query
on IRP_MJ_CLEANUP to see if it is delete pending, there is no guarantee
that some other open on the file won’t reverse that decision (note:
this doesn’t require another filter below you - even an application can
do it. Anyone building a test suite who doesn’t cover this scenario is
slacking.)
So every time I look at this I conclude: “cannot 100% reliably detect a
delete from a filter.” The suggestion always arises then that “well,
just look in the directory after we close the file and see if it really
went away.” Even a modicum of analysis will make you realize that this
just won’t work reliably (ergo, someone else races in and creates the
file while you’re checking in the directory.) One of the disadvantages
of living in a highly parallel environment.
Maybe “most of the time” is good enough for your filter (“it doesn’t
crash very often!”) I prefer to get the cases where I know it will work
correct, and to shy away from the cases where I know it cannot always
work.
Tony
OSR