Closing down cache manager references?

Hi all:

A couple of months ago I asked some questions about how cache persistence
works, and got a lot of helpful advice. However, I’m still unclear on one
(probably basic) point.

I understand that when I call CcUninitializeCacheMap on a FileObject, I
might not see the CLOSE IRP from the cache manager for that object right
away; I may only see it when the cache manager decides to empty the object
from its cache and let it go away.

However, when a file is being deleted, how can I force the cache manager to
release its reference? I looked through the FAT sources, and the only thing
I see is that, for a file that’s to be deleted, FAT truncates the file and
sets the TruncateSize parameter of CcUninitializeCacheMap to point to 0.

Is this all I need to do? (I’m already calling MmFlushImageSection() to see
if it’s permissible to delete the file.)
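For concreteness, the delete path being described looks roughly like the
sketch below. This is a hedged reconstruction, not FASTFAT's actual code:
the `My`-prefixed names and the minimal FCB type are placeholders, and error
handling is elided.

```c
#include <ntifs.h>

/* Hypothetical minimal FCB; a real FSD's FCB has much more in it. */
typedef struct _MY_FCB {
    SECTION_OBJECT_POINTERS SectionObjectPointers;
} MY_FCB, *PMY_FCB;

VOID MyTeardownForDelete(PFILE_OBJECT FileObject, PMY_FCB Fcb)
{
    LARGE_INTEGER TruncateSize;

    /* Ask Mm whether any image section permits the delete. */
    if (!MmFlushImageSection(&Fcb->SectionObjectPointers, MmFlushForDelete)) {
        return;  /* delete is not permissible right now */
    }

    /* Truncate to zero so Cc discards (rather than writes back) any
     * dirty data for the doomed file... */
    TruncateSize.QuadPart = 0;

    /* ...and pass the truncate size while tearing down the cache map.
     * Note this only *schedules* the teardown; the CLOSE for FileObject
     * arrives later, whenever Cc drops its reference. */
    CcUninitializeCacheMap(FileObject, &TruncateSize, NULL);
}
```

This is kernel-mode code and only compiles against the WDK headers.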

Thanks in advance,
Curt

> I understand that when I call CcUninitializeCacheMap on a FileObject, I
> might not see the CLOSE IRP from the cache manager for that object right
> away;

CcUninitializeCacheMap just schedules the cache map for deletion, nothing
more. The map will be deleted whenever Cc finds it convenient to do so -
typically from a system worker thread, with no locks held.
Deleting the cache map will include an ObDereferenceObject on the file
object, which in turn can (if this is the last reference) send a CLOSE to
the FSD/filter.

Consider CcUninitializeCacheMap as “schedule the file to be closed, whenever
Cc finds it convenient to do this”.

> I looked through the FAT sources, and the only thing I see is that, for a
> file that’s to be deleted, it truncates the file and sets the TruncateSize
> parameter of CcUninitializeCacheMap to point to 0. Is this all I need to
> do? (I’m already calling MmFlushImageSection() to see if it’s permissible
> to delete the file.)

Yes, this is OK.

Max

Hi Max:

Thanks for your reply.

> I understand that when I call CcUninitializeCacheMap on a FileObject, I
> might not see the CLOSE IRP from the cache manager for that object right
> away;

> CcUninitializeCacheMap just schedules the cache map for deletion, nothing
> more. The map will be deleted whenever Cc finds it convenient to do so -
> typically from a system worker thread, with no locks held.
> Deleting the cache map will include an ObDereferenceObject on the file
> object, which in turn can (if this is the last reference) send a CLOSE to
> the FSD/filter.

Your explanation seems clear for file objects created for user
files/directories. But I’m having trouble with file objects I create for
directory metadata (using IoCreateStreamFileObject()).

What I’m seeing is: if there are no user references to a directory, I’m
calling CcUninitializeCacheMap() and ObDereferenceObject() on the file
object I created for it for caching. I’m seeing the final Close come through
as a result of the ObDereferenceObject(). But subsequent to this processing
(and my deleting the FCB), I’m seeing a LazyWriter callback for this
directory’s file object, which I’m sure is to flush the cache contents for
the object.

How can I know absolutely when the Cache Manager is finished with a file
object? I would have expected to have seen the Lazy Writer thread do a
write first, before I saw the final Close for the file object.

Any insights are highly appreciated!

Thanks,
Curt

Curt,

All the OS does is use reference counts to track when it is the right time
to do this work; given that you are seeing an IRP_MJ_CLOSE before you see
the end of I/O to that file object, you should suspect a reference counting
problem.

You would note this faster if you used the driver verifier - it would
immediately trounce on the attempt to use the file object after it was
deleted (which is what the IRP_MJ_CLOSE indicates is happening.) But it
would leave the reference counting problem - which is caused because you
have called ObDereferenceObject when you don’t have a separate reference to
release!

Thus, I believe your problem is because you have the extraneous call to
ObDereferenceObject. Why do you believe that you have to dereference it?

Regards,

Tony Mason
Consulting Partner
OSR Open Systems Resources, Inc.
http://www.osr.com


I don’t believe the problem is an extraneous ObDereferenceObject. In fact,
ObDereferenceObject is the correct method of indicating you’re finished with
a file object created by IoCreateStreamFileObject. The problem Curt is
seeing may be related to a far deeper problem that I have observed with NT
filesystem filter drivers.

Specifically, I’ve observed filter drivers receiving paging I/O write IRPs
shortly after receiving the IRP_MJ_CLOSE. There is a race in the memory
manager between the close IRP and the paging IO. The race does not cause a
problem for filesystems because the filesystem gets called by the memory
manager to acquire resources before the paging IO is issued. Consequently,
when the IRP_MJ_CLOSE reaches the file system, it is blocked until the
paging IO resources are released. Everything works fine from the
perspective of a filesystem as long as the filesystem adheres to a resource
acquisition strategy similar to the stock filesystems. However, from the
perspective of a filter, paging IO writes may be received by the filter
shortly after the filter receives the IRP_MJ_CLOSE.

Let me know if this is in fact the problem you are seeing.

Regards,

Rob


Hi Tony:

> All the OS does is use reference counts to track when it is the right time
> to do this work; given that you are seeing an IRP_MJ_CLOSE before you see
> the end of I/O to that file object, you should suspect a reference counting
> problem.
>
> You would note this faster if you used the driver verifier - it would
> immediately trounce on the attempt to use the file object after it was
> deleted (which is what the IRP_MJ_CLOSE indicates is happening.) But it
> would leave the reference counting problem - which is caused because you
> have called ObDereferenceObject when you don’t have a separate reference to
> release!
>
> Thus, I believe your problem is because you have the extraneous call to
> ObDereferenceObject. Why do you believe that you have to dereference it?

Rajeev Nagar’s book (p. 508) talks about using a stream file object for
caching directory metadata, at the end of which he says “The stream file
object can be closed by simply performing an ObDereferenceObject() on the
file object structure.” I was, perhaps naively, following that.

So here’s my quandary: I’m calling IoCreateStreamFileObject() to get a file
object; I’m doing caching through it for directory metadata; the ref count
I’ve got on my FCB for the directory solely reflects user Creates and
Closes; and now I need to delete the directory.

I’ve gotten the last user Close; my ref count is 0. Yet I haven’t yet
gotten a Close for the stream file object I’ve created. How then:

a) can I know what the I/O Manager’s ref count is on this file object?
b) can I force this ref count to be 0 without calling ObDereferenceObject()?
c) can I get any writes for this before I get the final Close?
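For reference, the setup side of the lifecycle in question looks roughly
like the sketch below. It is hedged: the `My`-prefixed names, the FCB
layout, and the callbacks structure are placeholders, and error handling is
elided; the key point is that IoCreateStreamFileObject returns the object
with a reference the FSD is expected to release eventually.

```c
#include <ntifs.h>

/* Hypothetical minimal FCB for directory metadata. */
typedef struct _MY_FCB {
    SECTION_OBJECT_POINTERS SectionObjectPointers;
    CC_FILE_SIZES FileSizes;
} MY_FCB, *PMY_FCB;

/* Placeholder: the FSD's lazy-write/read-ahead callback table. */
extern CACHE_MANAGER_CALLBACKS MyCacheManagerCallbacks;

PFILE_OBJECT MyCreateMetadataStream(PDEVICE_OBJECT DeviceObject, PMY_FCB Fcb)
{
    /* The I/O manager creates the stream file object and leaves the
     * FSD holding a reference on it. */
    PFILE_OBJECT StreamFile = IoCreateStreamFileObject(NULL, DeviceObject);

    StreamFile->SectionObjectPointer = &Fcb->SectionObjectPointers;
    StreamFile->FsContext = Fcb;

    /* Cache the directory metadata through this file object. */
    CcInitializeCacheMap(StreamFile,
                         &Fcb->FileSizes,
                         TRUE,                      /* pin access */
                         &MyCacheManagerCallbacks,
                         Fcb);
    return StreamFile;
}
```

Kernel-mode only; compiles against WDK headers.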

Thanks,
Curt

Hi Rob:

> Specifically, I’ve observed filter drivers receiving paging I/O write IRPs
> shortly after receiving the IRP_MJ_CLOSE. There is a race in the memory
> manager between the close IRP and the paging I/O. The race does not cause
> a problem for filesystems because the filesystem gets called by the memory
> manager to acquire resources before the paging I/O is issued.
> Consequently, when the IRP_MJ_CLOSE reaches the file system, it is blocked
> until the paging I/O resources are released. Everything works fine from
> the perspective of a filesystem as long as the filesystem adheres to a
> resource acquisition strategy similar to the stock filesystems.

In my case (FSD, not a filter driver), I’m definitely seeing the Close for
the stream file object before I see the lazy writer callback to acquire the
resource.

Now, I’m following the example of Fastfat and Cdfs, and I’m not even
initializing the callbacks for

AcquireFileForNtCreateSection
AcquireForModWrite
AcquireForCcFlush

It’s unclear to me what the implications of this are.
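For context, those three callbacks are fields of the driver's
FAST_IO_DISPATCH table. Leaving them NULL means FsRtl falls back on its
default acquire/release logic, which assumes the stock FCB layout (an
FSRTL_COMMON_FCB_HEADER at the start of FsContext with its Resource and
PagingIoResource pointers set up). A hedged sketch of where they live:

```c
#include <ntifs.h>

/* Sketch: the callbacks in question are FAST_IO_DISPATCH fields.
 * NULL here means "let FsRtl apply its default locking", which works
 * only if the FCB begins with a properly initialized
 * FSRTL_COMMON_FCB_HEADER. (Other fast-I/O entries elided.) */
FAST_IO_DISPATCH MyFastIoDispatch = {
    .SizeOfFastIoDispatch          = sizeof(FAST_IO_DISPATCH),
    .AcquireFileForNtCreateSection = NULL,  /* default FsRtl logic */
    .ReleaseFileForNtCreateSection = NULL,
    .AcquireForModWrite            = NULL,
    .ReleaseForModWrite            = NULL,
    .AcquireForCcFlush             = NULL,
    .ReleaseForCcFlush             = NULL,
};
```

If an FSD uses its own locking model instead of the common header's
resources, the defaults will not serialize against it, which is the mismatch
Tony describes below.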

Thanks,
Curt

Rob,

Interesting. If this is, in fact, the case, it would appear that there is a
reference counting bug still present, since there should be a reference from
the section to the file object. If the file object’s reference count is
zero, the MM should not (obviously) be using it. If it is, in fact, doing
so (and I have no reason to doubt your analysis here,) it would be a bug.

Curt’s description of not implementing the alternate locking calls may, in
fact, be indicative of why it doesn’t work for his FSD - he may be relying
upon the “standard” FSD locking mechanism (without realizing it) but he is
using some alternative mechanism. Hence, the VM system acquires the locks
in his FCB but when the calls arrive in his FSD he uses some OTHER locking
model so it doesn’t block the IRP_MJ_CLOSE call - thus, he sees the close
before the paging I/O.

Curt, it would be interesting if, when you see the paging I/O IRPs in
question, you could actually grab a stack trace - it would be helpful in
better understanding the calling sequence from MM (ideally, we’d want the
CLOSE sequence just before it as well, but I suspect that might be asking
too much.)

I’ll take Rob’s word on dereferencing the file object returned from
IoCreateStreamFileObject (I’ve never used file objects created by this call)
so the obvious suggestion would be to at least ensure you FLUSH the file
object prior to dereferencing it. In that case, there would not be any
dirty data and hence there would not be any need for the VM system to WRITE
data back to your FSD.
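The flush-first suggestion would look roughly like this. A hedged sketch:
`MyReleaseMetadataStream` is a placeholder name, and checking of the
IO_STATUS_BLOCK is elided.

```c
#include <ntifs.h>

VOID MyReleaseMetadataStream(PFILE_OBJECT StreamFile)
{
    IO_STATUS_BLOCK IoStatus;

    /* Write back any dirty cached data for the whole stream, so the
     * lazy writer has nothing left to send to the FSD afterwards. */
    CcFlushCache(StreamFile->SectionObjectPointer, NULL, 0, &IoStatus);

    /* Tear down the cache map (no truncation in this variant)... */
    CcUninitializeCacheMap(StreamFile, NULL, NULL);

    /* ...then release the reference IoCreateStreamFileObject left us. */
    ObDereferenceObject(StreamFile);
}
```

Kernel-mode only; compiles against WDK headers.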

That doesn’t deal with the more troubling fundamental question, namely: why
would the VM system be using a file object to which it does not have an
active reference? If it DOES have an active reference, why is the I/O
Manager issuing an IRP_MJ_CLOSE (which indicates a zero reference count.)

Regards,

Tony Mason
Consulting Partner
OSR Open Systems Resources, Inc.
http://www.osr.com


> Now, I’m following the example of Fastfat and Cdfs, and I’m not even
> initializing the callbacks for
>
> AcquireFileForNtCreateSection
> AcquireForModWrite
> AcquireForCcFlush
>
> It’s unclear to me what the implications of this are.

FSRTL will use some default logic instead.

Max

Hi Tony:

> Curt’s description of not implementing the alternate locking calls may, in
> fact, be indicative of why it doesn’t work for his FSD - he may be relying
> upon the “standard” FSD locking mechanism (without realizing it) but he is
> using some alternative mechanism. Hence, the VM system acquires the locks
> in his FCB but when the calls arrive in his FSD he uses some OTHER locking
> model so it doesn’t block the IRP_MJ_CLOSE call - thus, he sees the close
> before the paging I/O.
>
> Curt, it would be interesting if, when you see the paging I/O IRPs in
> question, you could actually grab a stack trace - it would be helpful in
> better understanding the calling sequence from MM (ideally, we’d want the
> CLOSE sequence just before it as well, but I suspect that might be asking
> too much.)
>
> I’ll take Rob’s word on dereferencing the file object returned from
> IoCreateStreamFileObject (I’ve never used file objects created by this
> call) so the obvious suggestion would be to at least ensure you FLUSH the
> file object prior to dereferencing it. In that case, there would not be
> any dirty data and hence there would not be any need for the VM system to
> WRITE data back to your FSD.

Well, this sequence seems to work (remember, this is only for a directory
that’s being deleted):

CcPurgeCacheSection(FileObj->SectionObjectPointer, NULL, 0, TRUE);
CcUninitializeCacheMap(FileObj, &TruncateSize, NULL);

(where TruncateSize is a LARGE_INTEGER set to 0)

ObDereferenceObject(FileObj);

In other words, I get no lazy writes through FileObj after I get the
Close for it.

I guess I’m too lazy/busy to try to go back to the state I was in yesterday,
where I was calling ObDereferenceObject() without the CcPurgeCacheSection(),
and see what the stack trace is. Things are in such a state of flux that
I’m not sure I can recreate it anyway.

Thanks for your help,
Curt