Why does blocking the Paging I/O request lead to deadlock?

> 1. Why will the FSD involve the Cache Manager for NON CACHED I/O?

This has been discussed here many times. Briefly - because nothing prevents it from doing so, e.g. NTFS uses the cache for processing non-cached requests for some types of files.

  2. A case where the FSD can involve the Cache Manager in non-cached I/O is when the file is also cached by the Cache Manager. So, even if I make a non-cached request, it will have to update the cache too… Right?

Hmmm - e.g. NTFS always uses the Cache Manager for processing requests to compressed files, and not to update data in the cache - it uses the cache to retrieve data.

  3. What if I open my log file in DriverEntry itself and lock it, then issue non-cached I/O on it in a Paging I/O path ( to a mapped file, of course )? Why will that involve the Cache Manager?

Nothing changes. Mapping a file involves the Memory Manager and brings the same problems as the Cache Manager - the Cache Manager works on mapped files, so it is itself a client of the Memory Manager.
But you can use non-paged memory for logging, or lock several pages of the memory mapping the file. For example, you can create a thread that locks the memory mapping the file, unlocks already-used memory, and locks a new chunk asynchronously according to the system load. You can also use a blend of locked pages, PagedPool ( for ordinary data streams, but be careful with the amount of memory used ) and NonPagedPool ( for any type of file ): buffer log records in memory when it is unsafe to write to the log file, and write them to the log file when it is safe to do so.
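A rough sketch of the nonpaged-buffer idea ( all names here are made up; it is only an illustration ):

#define LOG_BUF_SIZE (64 * 1024)

static PUCHAR     g_LogBuf;   // allocated from NonPagedPool in DriverEntry
static ULONG      g_LogUsed;
static KSPIN_LOCK g_LogLock;  // spin locks and the buffer are nonpaged

// Safe on a paging I/O path: touches only nonpaged memory and issues no
// file system I/O at all; the data is flushed later from a worker thread.
NTSTATUS LogAppend( const VOID* Data, ULONG Len )
{
    KIRQL    irql;
    NTSTATUS status = STATUS_SUCCESS;

    KeAcquireSpinLock( &g_LogLock, &irql );
    if( g_LogUsed + Len <= LOG_BUF_SIZE ) {
        RtlCopyMemory( g_LogBuf + g_LogUsed, Data, Len );
        g_LogUsed += Len;
    } else {
        status = STATUS_INSUFFICIENT_RESOURCES;  // buffer full - drop or grow
    }
    KeReleaseSpinLock( &g_LogLock, irql );
    return status;
}

// A dedicated system thread ( created with PsCreateSystemThread ) drains
// g_LogBuf and writes it to the log file from a context where no FSD, Cc
// or Mm locks are held.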

  4. Can I issue a Paging I/O request to my log file and escape from all this trouble?

Yes - when TopLevelIrp is NULL. But issuing a paging I/O request is a challenge. If TopLevelIrp is not NULL then you have no choice - use either pages locked in advance, or PagedPool, or NonPagedPool ( for any type of file ).
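In code, the check looks roughly like this ( WriteLogRecordNow is a made-up helper; LogAppend is the nonpaged-buffer sketch above ):

if( IoGetTopLevelIrp() == NULL ) {
    // No component above us owns locks on this thread, so issuing a
    // ( carefully constructed ) write to the log file is comparatively safe.
    WriteLogRecordNow( Data, Len );
} else {
    // The lazy writer, the mapped page writer or a recursing FSD owns this
    // thread - fall back to locked pages / PagedPool / NonPagedPool and
    // defer the real write.
    LogAppend( Data, Len );
}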


Slava Imameyev, xxxxx@hotmail.com

“Kernel Developer” wrote in message news:xxxxx@ntfsd…
Thanks Slava!

You said:

“because you do not know the synchronization model ( lock hierarchy etc. ) used by the underlying FSD, and NON CACHED I/O doesn’t mean that the Cache Manager won’t be used - only PAGING I/O provides this guarantee.”

1. Why will the FSD involve the Cache Manager for NON CACHED I/O?
2. A case where the FSD can involve the Cache Manager in non-cached I/O is when the file is also cached by the Cache Manager. So, even if I make a non-cached request, it will have to update the cache too… Right?
3. What if I open my log file in DriverEntry itself and lock it, then issue non-cached I/O on it in a Paging I/O path ( to a mapped file, of course )? Why will that involve the Cache Manager?
4. Can I issue a Paging I/O request to my log file and escape from all this trouble?

Thanks!
-K. Dev.

Sorry Slava…
But I think I have not been able to explain my point.

Please try to get what I am saying…

“What if I open my log file in DriverEntry itself and lock it, then issue non-cached I/O on it in a Paging I/O path ( to a mapped file, of course )? Why will that involve the Cache Manager?”

By the above statement, I don’t mean that my log file is memory mapped. I mean that the paging I/O ( paging write ) callback that I am getting in my minifilter is for a mapped file. From there I will issue a write to my log file, which was opened in DriverEntry itself in NON-CACHED mode with EXCLUSIVE access ( so that no other application can open it ). Why will this NON CACHED WRITE to my log file involve the Cache Manager or Memory Manager?

Thanks a lot.
-K. Dev.

Slava,

I am afraid you are spreading the wrong information all over the place…

Let’s look at your statements, and check them against the “primary sources”, i.e. MSDN documentation:

> When you process paging IO to the mapped file, you cannot generate page
> faults on code, although you can generate them on data.

> Wrong, this is a way to a deadlock. The division is not by code and data but
> by what backs the page(s).
> Actually, the rule is - you CAN generate any page fault on any code and data
> that are backed by a pagefile when you are processing request(s) to
> page(s) backed by an ordinary file ( data stream ).

First of all, the above statement just contradicts itself. According to its first part, you will deadlock if you page fault on data ( which is backed by the paging file ) while processing IO to the *MAPPED* ( i.e. “ordinary” ) file, but its second part says exactly the opposite.

In any case, both parts are wrong. Let’s look at what the “Rules for Filters (both Legacy and Mini)” MSDN document says on the subject:

[begin quote]

Rules for Filters: Paging IO

All code paths executed while processing a Paging IO operation (IRP_PAGING_IO flag set) must not page fault. You can take page faults accessing data while processing a Paging IO operation.
You cannot take any page faults while processing paging IO to the Paging File…

[end quote]

As you can see, you cannot page fault on code while processing paging IO, regardless of the underlying file, and you cannot page fault on data either when processing paging IO to the paging file itself, although you can take page faults on data when processing IO to a mapped file ( because the data is backed by the paging file and not by the target mapped one )…

Therefore, your first statement is wrong. Now let’s look at the second one:

> Therefore, when you process paging IO to the paging file, you cannot safely
> call any function that is not callable at DPC level.

> Wrong again. The function is allowed to wait for completion or
> synchronization purposes, and this is not possible at DISPATCH_LEVEL.

Incorrect…

Although you cannot wait *for a non-zero timeout* at DISPATCH_LEVEL, you can still call KeWaitXXX at DISPATCH_LEVEL as long as the timeout that you have specified is zero. Let’s look at the “Constraints on Dispatch Routines” MSDN document:

[begin quote]

IRQL-Related Constraints

Dispatch routines in the paging path, such as read and write, cannot safely call any kernel-mode routines that require callers to be running at IRQL PASSIVE_LEVEL. Dispatch routines that are in the paging file I/O path cannot safely call any kernel-mode routines that require a caller to be running at IRQL < DISPATCH_LEVEL…

[end quote]

As you can see, your second statement is wrong as well…
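For example, a zero-timeout wait is really a poll, and it is legal at DISPATCH_LEVEL ( the event, of course, must live in nonpaged memory ):

LARGE_INTEGER zeroTimeout;
NTSTATUS      status;

zeroTimeout.QuadPart = 0;   // zero timeout - return immediately, never block

status = KeWaitForSingleObject( &SomeNonpagedEvent,
                                Executive,
                                KernelMode,
                                FALSE,
                                &zeroTimeout );
if( status == STATUS_TIMEOUT ) {
    // the object was not signaled; we did not wait
}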

Anton Bassov

Slava has it exactly right. The doc sections you quote are imprecise and
somewhat misleading.

Stated simply, you cannot incur a page fault while processing paging I/O
to the paging file.

Any code/data distinction is simply due to the fact that paged kernel code
is backed by the pagefile, not the image.

Any IRQL requirements are guidelines based on the idea that routines that
are not callable at dispatch are probably paged.

However, it is perfectly legal to call, for example, ExAcquireResourceXXX in
the page file paging I/O path, and these routines may only be called at <=
APC_LEVEL. File systems do this.
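For example, the usual pattern - legal on that path because ERESOURCEs and
this code live in nonpaged memory - looks like:

KeEnterCriticalRegion();                             // disable normal kernel APCs
ExAcquireResourceExclusiveLite( &MyResource, TRUE ); // requires IRQL <= APC_LEVEL

// ... touch only nonpaged code and data here ...

ExReleaseResourceLite( &MyResource );
KeLeaveCriticalRegion();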

- Dan.


Hi…
I am getting confused here…
OK…
Can someone PLEASE give a solution to my problem:

I receive a callback for a paging I/O write on a mapped file. In the
preoperation callback itself I want to log this data to my log file ( I have
opened this log file in DriverEntry itself for NON CACHED I/O and in
exclusive mode ). So the scene is this: upon receiving the callback, I have
to first write the data present in the buffer to my LOG file ( using the
file object that I got in DriverEntry ) and once that write completes
successfully, I will send the original callback data for further processing.

A pseudo code:

DriverEntry(…)
{
    Open the log file;                         // with NO INTERMEDIATE BUFFERING and EXCLUSIVE access
    Get the FileObject for the above handle;   // FileObject is a global variable
}

PreWriteCallback(…)
{
    // I receive a PAGING WRITE on a MAPPED FILE, say foo.txt, and NOT on the PAGE FILE

    Allocate callback data using FltAllocateCallbackData;
    // FltAllocateCallbackData can be called at IRQL <= APC_LEVEL

    Perform a paging write on the LOG FILE using FltPerformAsynchronousIo( …, CompletionRoutine, … );
    // FltPerformAsynchronousIo can be called at IRQL <= APC_LEVEL if the IRP_PAGING_IO flag is set

    Wait for its completion using KeWaitForSingleObject( eventobj, INFINITE TIME );

    // Or, instead of FltPerformAsynchronousIo plus the wait, perform a NON CACHED WRITE to the
    // LOG FILE using FltPerformSynchronousIo( ); it can be called at IRQL <= APC_LEVEL

    Return the original callback data for further processing;
}

CompletionRoutine(…)   // called after the write to the log file completes
{
    SetEvent( eventobj );
}
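For concreteness, the Filter Manager part of the above would look roughly like this ( a sketch only - g_LogInstance and g_LogFileObject are made-up globals captured in DriverEntry, and issuing this from a paging-write pre-callback is exactly what the rest of this thread warns may deadlock ):

// Buffer, Offset and Length must be sector-aligned, because the log file
// was opened with FILE_NO_INTERMEDIATE_BUFFERING.
NTSTATUS IssueLogWrite( PVOID Buffer, ULONG Length, LARGE_INTEGER Offset )
{
    PFLT_CALLBACK_DATA cbd = NULL;
    NTSTATUS           status;

    status = FltAllocateCallbackData( g_LogInstance,    // instance on the log volume
                                      g_LogFileObject,  // from DriverEntry
                                      &cbd );
    if( !NT_SUCCESS( status )) {
        return status;
    }

    cbd->Iopb->MajorFunction                = IRP_MJ_WRITE;
    cbd->Iopb->Parameters.Write.ByteOffset  = Offset;
    cbd->Iopb->Parameters.Write.Length      = Length;
    cbd->Iopb->Parameters.Write.WriteBuffer = Buffer;   // nonpaged, aligned
    cbd->Iopb->IrpFlags = IRP_NOCACHE | IRP_PAGING_IO;  // mark it as paging I/O

    FltPerformSynchronousIo( cbd );         // callable at IRQL <= APC_LEVEL

    status = cbd->IoStatus.Status;
    FltFreeCallbackData( cbd );
    return status;
}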

Please give your opinions about it, and if it is wrong, please suggest a
solution that ensures I am able to write to my log file before allowing the
original write to proceed.

Thanks!

- K. Dev.

KD,

> Give a solution in which I can ensure that I am able to write to my log
> file before allowing the original write to proceed.

I’d be pretty stressed if I had to meet these requirements and I’d be
expecting to spend a great deal of time poring over unassembled NTFS code
and FAT sources. Even then I’d be worried. But that’s the job we have…

The problem is, you see, that the FSD has its own expectations about what
may or may not have happened up to the point that you get a paging IO. As
the conversations in this thread have shown, it is very difficult to be
explicit about what any FSD might do; furthermore, observations of what one
FSD does don’t guarantee anything about another…

To take a ridiculous assumption, imagine a file system with a very simple
locking protocol: one lock, non-reentrant, held exclusive for all
operations. The Cc calls in to say “grab all the locks that you will need,
because we’re about to write some pages”. So it grabs the lock and it sets
TopLevelIrp to indicate that it’s got the lock. In the normal state of
affairs it would then expect the same thread to issue some paging writes;
because TopLevelIrp is set it doesn’t need the lock, so it does the writes.
But you come along and issue a user write on that thread, so it tries to
grab this non-reentrant lock and you deadlock. ( A sketch of this toy
protocol follows. )
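A toy rendering of that protocol ( deliberately simplistic - g_TheOnlyLock is made up, and a FAST_MUTEX deadlocks if the owning thread re-acquires it ):

static FAST_MUTEX g_TheOnlyLock;   // one non-reentrant lock for everything

BOOLEAN ToyAcquireForLazyWrite( PVOID Context, BOOLEAN Wait )
{
    // Cc: "grab all the locks you will need, we're about to write some pages"
    ExAcquireFastMutex( &g_TheOnlyLock );
    IoSetTopLevelIrp( (PIRP)FSRTL_CACHE_TOP_LEVEL_IRP ); // "this thread owns it"
    return TRUE;
}

NTSTATUS ToyWriteDispatch( PDEVICE_OBJECT DeviceObject, PIRP Irp )
{
    if( IoGetTopLevelIrp() == (PIRP)FSRTL_CACHE_TOP_LEVEL_IRP ) {
        // Expected case: paging writes from the lazy writer arrive on the
        // thread that already holds the lock, so it is not re-acquired.
    } else {
        // Your filter issues a "user" write on the lazy writer's thread:
        // it blocks forever on the non-reentrant lock - the deadlock.
        ExAcquireFastMutex( &g_TheOnlyLock );
    }
    /* ... perform the write, release the lock where appropriate ... */
    return STATUS_SUCCESS;
}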

Before anybody leaps down my throat: I know it’s a bad example, and it may
well break some of the rules of FSD deployment, but it is a good example of
how something which might appear innocuous isn’t.

So in your case you should start with the assumption that this is going to
be painful. For me, I wouldn’t want to sign off on such constraints, but it
sounds as though you are caught between a rock and a hard place.

So where I’d start would be on FAT, so I could see the source associated
with the deadlocks. That would get me some of the way, but NTFS has all
sorts of quirky locks that it takes at all sorts of quirky times ( well, not
that quirky, but they may seem so at first ).

Then I’d remove any allocation from the path you outline. This means memory
but, much more importantly, disk space. Whereas you might get lucky and get
a non-extending write past NTFS in these circumstances, you’d certainly not
get an extending one.

Then I’d try putting my log on a different volume.

Then I’d either discover that it worked and test extraordinarily heavily
( in the face of things like the disk going full, memory going short,
shutdown/boot happening around me, transactions ( if appropriate ),
SystemRestore and so on ), or I’d be expecting to spend a lot of time
looking at hung systems and working out a way to thwart NTFS again. I’d
probably still get unfixable hangs.

So, then I’d bite the bullet and write my own silly little filesystem for
the log. Or ( gulp ) mine the physical placement on the disk from the
underlying FSD and write to the media, not the volume.

Good luck.

Hi Rod!

Actually, I am still in the study and analysis phase of my final year
project in college, so I cannot afford to do so much in the given time span.

OK… So you mean that meeting this requirement of first writing into the log
file and then allowing the original paging write to proceed is almost
impossible.

So, now I have some questions:

Suppose instead of all this, I queue a work item that writes the buffer to
my log file ( see the sketch below ). How much should I care about the fact
that a system crash may occur before I have been able to write something to
my log file? Is that fine for a real-life product?

How is this situation handled in real-life products?

Thanks!
-K. Dev.
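A rough sketch of that work-item approach ( WriteLogRecordNow is a made-up helper; note that the log write now happens after the original write may already have completed - that ordering is exactly what is being given up ):

typedef struct _LOG_WORK_CTX {
    ULONG Length;
    UCHAR Data[1];     // Length bytes follow the header
} LOG_WORK_CTX, *PLOG_WORK_CTX;

VOID LogWorker( PFLT_GENERIC_WORKITEM WorkItem, PVOID FltObject, PVOID Context )
{
    PLOG_WORK_CTX ctx = Context;

    UNREFERENCED_PARAMETER( FltObject );

    // PASSIVE_LEVEL system worker thread: TopLevelIrp is NULL and no FSD
    // locks are held here, so an ordinary write to the log file is safe.
    WriteLogRecordNow( ctx->Data, ctx->Length );

    ExFreePoolWithTag( ctx, 'gLkd' );
    FltFreeGenericWorkItem( WorkItem );
}

// In the pre-write callback: copy the buffer and defer the write.
PLOG_WORK_CTX ctx = ExAllocatePoolWithTag( NonPagedPool,
                        FIELD_OFFSET( LOG_WORK_CTX, Data ) + Length, 'gLkd' );
PFLT_GENERIC_WORKITEM wi = FltAllocateGenericWorkItem();

if( ctx != NULL && wi != NULL ) {
    ctx->Length = Length;
    RtlCopyMemory( ctx->Data, Buffer, Length );
    FltQueueGenericWorkItem( wi, FltObjects->Filter, LogWorker,
                             DelayedWorkQueue, ctx );
}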

Anton, my statements are correct, and this has been confirmed by another participant ( thank you, Dan ) and by my experience, but I think I owe you an answer.

> Let’s look at your statements, and check them against the “primary sources”, i.e. MSDN documentation:

>> When you process paging IO to the mapped file, you cannot generate page
>> faults on code, although you can generate them on data.

I do not know where you found this ( as has been pointed out, this description is “imprecise and somewhat misleading” ). I suggest you look at the source code of any FSD in the DDK or WDK: as you can see, the read and write dispatch functions are in a pageable section, except the code responsible for processing requests to a pagefile. So, if you are sure that page faults are not allowed on a code section on the paging IO path for mapped files ( not paging files ), then it is better for you to discuss with Microsoft why they provide us with incorrect code - maybe this is a conspiracy against us, maybe only against the soviet bloke :-))) .

> Wrong, this is a way to a deadlock. The division is not by code and data but
> by those who back page(s).
> Actually, the rule is - you CAN generate any page fault on any code and data
> that are backed by a pagefile when you are processing request(s) to the
> page(s) backed by an ordinary file(s)( data stream(s) ).

> First of all, the above statement just contradicts itself. According to its first part, you will deadlock if you page fault on data ( which is backed by the paging file ) while processing IO to the *MAPPED* ( i.e. “ordinary” ) file, but its second part says exactly the opposite.

I do not want to give you a lesson in logic here. Actually, I can’t understand your inference.

So, all your subsequent disproofs of my statements are wrong because they are based on a wrong assumption, made from incomplete and inconsistent MSDN documentation, that the code section must not be paged out. But look, for example, at the FASTFAT code: you can easily find that the code for processing memory-mapped files for ordinary data streams is pageable. I think MSDN intentionally does not mention the case of pageable code explicitly because the authors tried to reduce errors in FSD filters by making the statement stronger, but the design of the Windows kernel allows the code section for FSDs to be pageable, except the code responsible for processing requests to pagefiles.

Look at this ( an excerpt from DDK\src\filesys\fastfat\wxp\write.c ) -

#ifdef ALLOC_PRAGMA
#pragma alloc_text(PAGE, FatCommonWrite)
#endif

FatCommonWrite is used for ordinary ( non-pagefile ) data streams.
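The conventional pattern behind that excerpt ( MyCommonWrite is a made-up name ) is:

NTSTATUS MyCommonWrite( PIRP Irp );

#ifdef ALLOC_PRAGMA
#pragma alloc_text( PAGE, MyCommonWrite )   // this routine may be paged out
#endif

NTSTATUS MyCommonWrite( PIRP Irp )
{
    PAGED_CODE();   // checked builds assert IRQL <= APC_LEVEL here,
                    // i.e. an IRQL at which a fault on this code is legal

    // ... handles ordinary ( non-pagefile ) data streams only; requests
    // against the paging file are routed to nonpaged code instead ...
    return STATUS_SUCCESS;
}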


Slava Imameyev, xxxxx@hotmail.com

Slava,

I am afraid you are taking things a bit too literally…

The doc I have quoted apparently makes an assumption that the code path for a given paging IO operation may be the same for both “ordinary” and paging files ( please note that we are speaking *strictly* about FS filters here ). If this is the case, a page fault on code while processing paging IO to the “ordinary” file will lead to the scenario that I have described - you have no chance to process either the original operation or the page fault without calling a function X, but any access to this function generates a page fault, i.e. you get into an infinite chain of page faults.

However, your example of FASTFAT is from a totally different field - it is understandable that the code you have mentioned has no chance of ever being executed by operations on the paging file,
so causing page faults in it is just fine…

In any case, I do admit that MSDN is somewhat confusing on the issue…

For example, according to MSDN, you cannot call *any* function that is callable strictly at PASSIVE_LEVEL when processing paging IO, no matter if the target file is an “ordinary” one or the paging file itself. However, as long as the target file is “ordinary”, you can still call those functions that are callable at IRQL <= APC_LEVEL. What is the difference between PASSIVE_LEVEL and APC_LEVEL, as far as page faults are concerned???

Anton Bassov

I am totally confused…

Can somebody PLEASE reply to messages 22, 25 and 27?

Thanks!
-K. Dev.

File operations such as ZwCreateFile cannot run at APC_LEVEL because the
completion logic performed by the IoManager cannot be done when APCs are
disabled. Also, some of the code in the kernel might not be paged in for
some of the functions restricted to PASSIVE_LEVEL.


David J. Craig
Engineer, Sr. Staff Software Systems
Broadcom Corporation

Anton, I see you changed your mind and admitted page faults on the code section while processing paging IO for ordinary files. But I do not understand why you tried to rephrase my words in a wrong direction - from the start of the discussion I always distinguished two cases: ordinary files and page files.

> The doc I have quoted apparently makes an assumption that the code path for a given paging IO operation may be the same for both “ordinary” and paging files ( please note that we are speaking *strictly* about FS filters here ).

> However, your example of FASTFAT is from a totally different field -

What did you try to achieve by this? ( the question is rhetorical, you don’t have to answer )
I said that “ordinary” stood for any data stream except page files, and I see you understood this.
I wrote that the example was for ordinary files when I tried to persuade you that the code section that contains the code responsible for processing IO to ordinary files can be paged out.
So, please, do not try to persuade me and others that I said that this code was for any data stream.


Slava Imameyev, xxxxx@hotmail.com

Thanks Slava!
Now I am understanding it a bit. If I am correct, you mean that we can
generate a page fault on both CODE and DATA on a Paging I/O path for a
MAPPED FILE, because both of them are actually backed by the PAGE FILE. Even
the kernel code is backed by the PAGE FILE. Hence, if we call a function
that can be called at IRQL <= APC_LEVEL in a Paging I/O path for a MAPPED
FILE, and a page fault occurs, the system will issue another paging I/O
( this time targeted at the PAGE FILE ) and fetch the page.

However, even in a paging I/O path targeted at a MAPPED FILE, we cannot
issue file system calls, because of the TopLevelIrp component. From the IRQL
perspective it is safe to call FltAllocateCallbackData,
FltPerformSynchronousIo and FltPerformAsynchronousIo, but due to the locking
mechanism of the FSD, CM and VMM ( in short, TopLevelIrp != NULL ) it is
not safe to call these functions. Otherwise the FSD will try to acquire the
locks again when I issue a file system request, even though it has
previously acquired those locks. So, this attempt to re-acquire
already-acquired locks will lead to a deadlock…
Right?

-K. Dev.

> If I am correct, you mean that we can generate a page fault on both CODE
> and DATA on a Paging I/O path for a MAPPED FILE, because both of them are
> actually backed by the PAGE FILE. Even the kernel code is backed by the
> PAGE FILE. Hence, if we call a function that can be called at IRQL <=
> APC_LEVEL in a Paging I/O path for a MAPPED FILE, and a page fault
> occurs, the system will issue another paging I/O ( this time targeted at
> the PAGE FILE ) and fetch the page.

Yes, absolutely correct.

> However, even in a paging I/O path targeted at a MAPPED FILE, we cannot
> issue file system calls, because of the TopLevelIrp component.

Yes - especially for the paging writes issued by the Mapped Page Writer and
the Cache Manager’s lazy writer thread.

> …From the IRQL perspective it is safe to call FltAllocateCallbackData,
> FltPerformSynchronousIo and FltPerformAsynchronousIo, but due to the
> locking mechanism of the FSD, CM and VMM ( in short, TopLevelIrp !=
> NULL ) it is not safe to call these functions.

Yes, correct.

> Otherwise the FSD will try to acquire the locks again when I issue a file
> system request, even though it has previously acquired those locks. So,
> this attempt to re-acquire already-acquired locks will lead to a deadlock…
> Right?

Yes, correct. This is the main point in understanding the subtle nature of
this deadlock!


Slava Imameyev, xxxxx@hotmail.com


Excellent work, Slava. This is a tough one to explain under the best of
circumstances, which you did commendably, despite some adversity.

mm


Thanks Slava!!

Actually, I am still in the study and analysis phase of my final year
project in college.

Now I know that meeting this requirement of first writing into the log file
and then allowing the original paging write to proceed is almost
impossible.

So, now I have some questions:

Suppose instead of all this, I queue a work item that writes the buffer to
my log file. How much should I care about the fact that a system crash may
occur before I have been able to write something to my log file? Is that
fine for a real-life product?

How is this situation handled in real-life products?

Regards!
-K. Dev.

Hi,
you should also take care of your log file: close it in case of a
volume lock or dismount. Look at the metadatamanager sample in the WDK.

Jan

> you should also take care of your log file: close it in case of a
> volume lock or dismount. Look at the metadatamanager sample in the WDK.

I have already planned to do this.
The whole situation is that there is no way to ensure that the write to my log file happens before the write to the actual file.

And for some reasons, the data present in the log is critical. For example, I store extra information in the log file that I finally purge into the original file when it is closed. And this information is critical.

My concern is what happens if I am not able to write to my log file and the system crashes. Even though it is a college project, I want it to look professional.

  1. What should I do?
  2. Just say that a system crash occurred and your files may be in a corrupted/unrecoverable state?
  3. How do you guys handle it in a commercial/real-life product?

Thanks!
-K. Dev.

Rod has given you the hint. Consider creating the log file and preallocating
a number of blocks to it. Then get the retrieval pointers for the actual
disk blocks of the file, and manage the log yourself, writing to the disk
rather than through the file system.

This is not anywhere near as simple as just writing a log, but it does
allow you to do the write first, and to do the writes in the paging path.
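A rough sketch of the retrieval-pointers query ( g_LogFileHandle is a made-up global; a real driver would loop, growing the buffer on STATUS_BUFFER_OVERFLOW ):

UCHAR rawBuf[ sizeof( RETRIEVAL_POINTERS_BUFFER ) + 16 * 2 * sizeof( LARGE_INTEGER ) ];
PRETRIEVAL_POINTERS_BUFFER rp = (PRETRIEVAL_POINTERS_BUFFER)rawBuf;
STARTING_VCN_INPUT_BUFFER  startVcn;
IO_STATUS_BLOCK            iosb;
NTSTATUS                   status;

startVcn.StartingVcn.QuadPart = 0;           // start from the first VCN

status = ZwFsControlFile( g_LogFileHandle,   // opened in DriverEntry
                          NULL, NULL, NULL, &iosb,
                          FSCTL_GET_RETRIEVAL_POINTERS,
                          &startVcn, sizeof( startVcn ),
                          rp, sizeof( rawBuf ));
if( NT_SUCCESS( status )) {
    // Each extent maps a VCN range of the file to an LCN on the volume:
    // rp->Extents[i].Lcn * bytesPerCluster is the volume-relative byte
    // offset. Writes can then go straight to the volume/disk device,
    // bypassing the FSD and its locks - provided the extents never move,
    // so preallocate the file and keep it from being moved or shrunk.
}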


Don Burn (MVP, Windows DDK)
Windows 2k/XP/2k3 Filesystem and Driver Consulting
Website: http://www.windrvr.com
Blog: http://msmvps.com/blogs/WinDrvr
Remove StopSpam to reply


> How do you guys handle it in a commercial/real-life product?

As someone who offers consulting services in this field, I would probe at
this question:

> And for some reasons, the data present in the log is critical

“Why is it critical? What are you protecting against? Are you trying to
set up some sort of transactional section? Is it a data recovery issue?
You have given me a functional requirement - what is the user requirement?”

At that stage, if the response is of the “Because Mummy Says So” type, one
starts discussing the costs ( as I said, there is nothing stopping you from
writing an FSD just for your logs ). Alternatively, the customer sheds
another layer of mistrust and discusses what they are trying to achieve.

If you are employed and it’s your boss asking the same question, you follow
the same sort of conversation, but instead of the $ costs you give your boss
a response in terms of how long it will take to do…

In your situation I’d start at the end and ask your professor what the user
requirements are…

Good luck

Rod
