Opening file through "ShadowDeviceObject"

Hello,

To open a file from a filter driver, I am calling ZwCreateFile using a
“ShadowDeviceObject”, as described in OSR’s IFS FAQ, to avoid reentrancy.
This works well for local FSDs as well as for the Lanman redirector FSD. But
when I close the file handle using ZwClose, the system sends
IRP_MJ_CLEANUP and IRP_MJ_CLOSE to the top-level filter driver attached to
the local FSD. That means any filter driver layered above us sees
cleanup and close IRPs for a file for which it has never seen an
IRP_MJ_CREATE! In contrast, for the Lanman redirector, the system sends the
cleanup and close IRPs to the “ShadowDeviceObject” directly (as I
expected).

Any idea on:

  1. Why is the behavior different for local FSDs and the network redirector?

  2. Is there some way to direct the I/O Manager to send IRP_MJ_CLEANUP and
    IRP_MJ_CLOSE to our “ShadowDeviceObject” (which passes IRPs down to the
    attached local FSD) when ZwClose is called for the opened file handle?

Thanks in advance.

Regards,
Robin

Robin,

This would suggest you’ve implemented something incorrectly, since there’s
no way for the I/O Manager to associate a “shadow device object” with the
primary device stack if properly implemented (and hence, no way for it to
send IRP_MJ_CLEANUP and IRP_MJ_CLOSE IRPs to the “other” device stack).

This is just fundamental to the way the I/O subsystem works - it calls
IoGetRelatedDeviceObject with the file object. If you call ZwCreateFile
using your shadow device object’s name, the file object in question would
point to your shadow device object and IoGetRelatedDeviceObject would return
your shadow device object.

The fact that you are seeing IRP_MJ_CLEANUP and IRP_MJ_CLOSE coming down the
“normal” stack would suggest that the call to IoGetRelatedDeviceObject is
returning a device object for the “normal” stack.

I can think of one scenario in which you could achieve this - if you are
ATTACHING the shadow device object to the “normal” device stack. If you did
this, it would cause exactly this type of problem. Of course, that is an
error in implementation - the whole point of this suggested technique was to
NOT attach to the “normal” device stack, and thereby avoid the type of
reentrancy you are now observing.

You can check this by taking the handle you get back when opening the file
on your shadow device (returned to you from ZwCreateFile or IoCreateFile),
converting it to a file object (ObReferenceObjectByHandle), calling
IoGetRelatedDeviceObject on the file object, releasing your reference on the
file object (ObDereferenceObject) and verifying that the device object
returned from that is your shadow device object.

As a side comment, a file system filter driver must be able to handle seeing
IRP_MJ_CLEANUP and IRP_MJ_CLOSE for file objects for which they have not
seen an IRP_MJ_CREATE in any case (see IoCreateStreamFileObject and
IoCreateStreamFileObjectLite for calls that create FileObjects within a file
system without sending an IRP_MJ_CREATE to filters layered above them).

Regards,

Tony

Tony Mason
Consulting Partner
OSR Open Systems Resources, Inc.
http://www.osr.com

-----Original Message-----
From: Robin [mailto:xxxxx@xpsonline.com]
Sent: Saturday, March 16, 2002 3:17 AM
To: File Systems Developers
Subject: [ntfsd] Opening file through “ShadowDeviceObject”

You are currently subscribed to ntfsd as: xxxxx@osr.com
To unsubscribe send a blank email to %%email.unsub%%

At 05:01 PM 3/16/02, Tony Mason wrote:
>Robin,
>
>This would suggest you’ve implemented something incorrectly, since there’s
>no way for the I/O Manager to associate a “shadow device object” with the
>primary device stack if properly implemented (and hence, no way for it to
>send IRP_MJ_CLEANUP and IRP_MJ_CLOSE Irps to the “other” device stack.)

Sorry, I beg to differ. Please see below.

>This is just fundamental to the way the I/O subsystem works - it calls
>IoGetRelatedDeviceObject with the file object. If you call ZwCreateFile
>using your shadow device object’s name, the file object in question would
>point to your shadow device object and IoGetRelatedDeviceObject would return
>your shadow device object.

Not always, as in the case of files/folders opened on local devices where
FileObject->Vpb is not NULL.

>The fact that you are seeing IRP_MJ_CLEANUP and IRP_MJ_CLOSE coming down the
>“normal” stack would suggest that the call to IoGetRelatedDeviceObject is
>returning a device object for the “normal” stack.

This is exactly what is happening. IoGetRelatedDeviceObject works hard
to return the topmost attached device object on the “normal” filter stack.
It works much like this:

PDEVICE_OBJECT
IoGetRelatedDeviceObject (
    IN PFILE_OBJECT FileObject
    )
{
    PDEVICE_OBJECT pDevObj;

    if (FileObject->Vpb == NULL)            // In case of a network FSD
    {
        pDevObj = FileObject->DeviceObject; // This is our shadow device object

        if (LOBYTE(FileObject->Flags) != 8 && pDevObj->Vpb != NULL)
            pDevObj = pDevObj->Vpb->DeviceObject;
    }
    else                                    // In case of a local FSD
    {
        pDevObj = FileObject->Vpb->DeviceObject;
    }

    if (pDevObj->AttachedDevice != NULL)    // It also checks some other things here
        pDevObj = IoGetAttachedDevice( pDevObj);

    return pDevObj;
}

PDEVICE_OBJECT
IoGetAttachedDevice(
    IN PDEVICE_OBJECT DeviceObject
    )
{
    while (DeviceObject->AttachedDevice != NULL)
        DeviceObject = DeviceObject->AttachedDevice;

    return DeviceObject;
}

[Note: I just typed the above code in email. It is neither complete nor
accurate.]

So, IoGetRelatedDeviceObject() returns the topmost attached device object on
the “normal” filter device stack via FileObject->Vpb. For a network FSD,
FileObject->Vpb is NULL, and thus IoGetRelatedDeviceObject() returns the
shadow device object.

It seems this is the reason the I/O Manager sends the cleanup and close IRPs
for local FSDs to the top driver in the normal filter stack, while for the
network FSD it sends them to our shadow device object. I think it makes
sense for the I/O Manager to do so.

>I can think of one scenario in which you could achieve this - if you are
>ATTACHING the shadow device object to the “normal” device stack. If you did
>this, it would cause exactly this type of problem. Of course, that is an
>error in implementation - the whole point of this suggested technique was to
>NOT attach to the “normal” device stack, and thereby avoid the type of
>reentrancy you are now observing.

I have NOT attached the shadow device object to any other device object. It
just holds a pointer to our filter device object (which is attached to the
next filter device object on the device stack) and passes IRPs down to that
next device object on the filter device stack.

>You can check this by taking the handle you get back when opening the file
>on your shadow device (returned to you from ZwCreateFile or IoCreateFile),
>converting it to a file object (ObReferenceObjectByHandle), calling
>IoGetRelatedDeviceObject on the file object, releasing your reference on the
>file object (ObDereferenceObject) and verifying that the device object
>returned from that is your shadow device object.

I have checked it. It does NOT always return my shadow device object, as I
pointed out above.

>As a side comment, a file system filter driver must be able to handle seeing
>IRP_MJ_CLEANUP and IRP_MJ_CLOSE for file objects for which they have not
>seen an IRP_MJ_CREATE in any case (see IoCreateStreamFileObject and
>IoCreateStreamFileObjectLite for calls that create FileObjects within a file
>system without sending an IRP_MJ_CREATE to filters layered above them).

I am aware that this is not supposed to cause any problem for the filter
drivers layered above us. But I would not like FileMon (or a similar
utility) to log file names opened by our filter driver :-)

I posted the original message in case there is some way to coerce the I/O
Manager into sending the cleanup and close IRPs to our shadow device object
for local FSDs too. One option may be setting the Vpb pointer in the
FileObject to NULL before calling ZwClose and then restoring its value when
the IRP passes through our shadow device object handler. But that might have
other adverse consequences. Any idea?

Any other comments?

BTW, thanks to Tony Mason and OSR for the IFS FAQ.

Regards,
Robin


Robin,

Your description of what you are observing still makes me believe that there
is an error in implementation here.

The FileObject that is sent to your “shadow device object” would not have a
Vpb, because your shadow device is not associated with a physical volume.
Your description of what you observe would suggest that you are not opening
the file against your shadow device object.

To use the “shadow device object” approach for re-opening a file, you
specify the name of your shadow device object along with the balance of the
name to be parsed by the underlying driver in a call to ZwCreateFile or
IoCreateFile. When the I/O Manager is called (IopParseDevice) it will get a
copy of your shadow device object and the balance of your name. Since
another filter (like FileMon) would not layer on top of your shadow device
object, you wouldn’t have the issues of re-entrancy.

There is no way, in this scheme, for the I/O Manager to associate the file
object with a Vpb - your shadow device object does not have one, so the file
object should not have one (much in the same way it associates the network
file system’s device object, even though it does not have a Vpb).

So, my question to you is how does the I/O Manager associate the Vpb for a
different device with the file object created against your shadow device?

Regards,

Tony

Tony Mason
Consulting Partner
OSR Open Systems Resources, Inc.
http://www.osr.com

-----Original Message-----
From: Robin [mailto:xxxxx@xpsonline.com]
Sent: Saturday, March 16, 2002 10:57 AM
To: File Systems Developers
Subject: [ntfsd] RE: Opening file through “ShadowDeviceObject”


At 11:40 PM 3/16/02, Tony Mason wrote:

>Robin,
>
>Your description of what you are observing still makes me believe that there
>is an error in implementation here.
>
>The FileObject that is sent to your “shadow device object” would not have a
>Vpb, because your shadow device is not associated with a physical volume.

That’s right.

>Your description of what you observe would suggest that you are not opening
>the file against your shadow device object.

No, that is not the case, as I stated earlier. Here is how I am doing it.

Say, to open a file named “\Windows\SomeFileName.txt”, I prepend the shadow
device object name, giving
“\Device\ShadowVolumeXX\Windows\SomeFileName.txt” (where XX is some unique
number). In the case of the network redirector this will be something like
“\Device\ShadowVolumeXX\CompName\ShareName\SomeFileName.txt”. Then I pass
this file name to ZwCreateFile as shown below.

IO_STATUS_BLOCK ioStatus;
OBJECT_ATTRIBUTES objAttr;
NTSTATUS ntStatus;

InitializeObjectAttributes( &objAttr, FileName, OBJ_CASE_INSENSITIVE,
                            NULL, NULL);

ntStatus = ZwCreateFile( &hFile, Access, &objAttr, &ioStatus, . . .);

if (NT_SUCCESS( ntStatus))
{
    ntStatus = ObReferenceObjectByHandle( hFile, Access, *IoFileObjectType,
                                          KernelMode, (PVOID*)&fileObj,
                                          NULL);
}

return ntStatus;

When the above function returns, I see that fileObj->Vpb is set for Fastfat
and Ntfs volumes. For the Lanman redirector it is always NULL, as expected.

When closing the file, I am using the following code.

ObDereferenceObject( fileObj);
status = ZwClose( hFile);

You can see there is not much scope for errors here, and the I/O Manager is
correctly sending the create IRP to my shadow device object (no reentrancy
here). In the shadow device object handler I just skip the stack location
and pass the IRP down to the next filter device object (I have a separate
filter device object attached to that device object). The shadow device
object is a simple device object (NOT attached to anything) containing a
pointer to our filter device object, which in turn contains a pointer to the
next filter device object on the stack.

>To use the “shadow device object” approach for re-opening a file, you
>specify the name of your shadow device object along with the balance of the
>name to be parsed by the underlying driver in a call to ZwCreateFile or
>IoCreateFile. When the I/O Manager is called (IopParseDevice) it will get a
>copy of your shadow device object and the balance of your name. Since
>another filter (like FileMon) would not layer on top of your shadow device
>object, you wouldn’t have the issues of re-entrancy.

I understand this.

>There is no way, in this scheme, for the I/O Manager to associate the file
>object with a Vpb - your shadow device object does not have one, so the file
>object should not have one (much in the same way it associates the network
>file system’s device object, even though it does not have a Vpb).
>
>So, my question to you is how does the I/O Manager associate the Vpb for a
>different device with the file object created against your shadow device?

The I/O Manager is NOT associating the Vpb with the FileObject, as you
explained above. In fact, when the create IRP arrives at our shadow device
object handler (as a result of calling ZwCreateFile above), FileObject->Vpb
is always NULL, for both local and network FSDs. To see who is setting the
Vpb pointer, I set a memory write break-point on FileObject->Vpb and found
that the following code in Fastfat and Ntfs sets this value.

NtfsSetFileObject+0019
FatSetFileObject+001D

This is on the Windows XP Pro retail version (you might have access to the
source code to verify it).

I don’t know why both Fastfat and Ntfs set the Vpb pointer in the
FileObject, but there must be some good reason to do it.

Regards,
Robin

Robin,

I agree with your analysis - it is FAT (or NTFS) that is initializing the
Vpb, not the I/O Manager. Since LanManager does not do this (it does not use
Vpb pointers), that explains why it works as you expected for LanManager.

Now, I suppose the point here is that the filters that observe the
IRP_MJ_CLEANUP and IRP_MJ_CLOSE operations might find this “confusing”.
However, I will note:

* IoCreateStreamFileObject and IoCreateStreamFileObjectLite both create file
objects for which the filter driver does not see an IRP_MJ_CREATE.

* IoCreateFileSpecifyDeviceObjectHint (the anti-reentrancy change in Windows
XP) has exactly the same behavior/semantics as the shadow device object
technique.

If these are not sufficient grounds, then the obvious question would be
“what are you willing to do to resolve this particular problem?” I can see
a couple of possible options - but they are JUST options and I’ve never
tried them before:

* Create stream file objects in your filter driver. Then you can use the
stream file objects when calling FAT/NTFS. The I/O Manager will know
nothing of those stream file objects and will call your FSD directly. This
is quite a bit more work for you, though.

* Change the Vpb pointer in the file object before you close the handle. As
a general rule, you should not do this. However, since you own this file
object and nobody else will be using it, you can set this to zero. Then
close the handle. The I/O Manager will pass it to your filter. You restore
the Vpb pointer and pass it to FAT/NTFS (or the filter below you). This is
NOT a general solution, since you cannot do this if anyone else is using the
file object (there are code paths in the FAT code that rely upon the
FileObject->Vpb). I’m not fond of this solution, since I don’t like
“substituting” fields in OS data structures (at least this one belongs to
the FSD…)

There are no doubt other ideas, but this should give you something to at
least consider.

Regards,

Tony

Tony Mason
Consulting Partner
OSR Open Systems Resources, Inc.
http://www.osr.com

-----Original Message-----
From: Robin [mailto:xxxxx@xpsonline.com]
Sent: Sunday, March 17, 2002 6:46 AM
To: File Systems Developers
Subject: [ntfsd] RE: Opening file through “ShadowDeviceObject”
