Assertion failed in cachesub.c

I am writing an IFS for the Linux ext2 fs. I am making good progress
but have hit a problem.

I had a lot of trouble reading data from this disk. I have managed
to get it working and I am using the pinning interface to map in the
metadata for the fs. This works until I try to pin anything past
the 256k mark, at which point I get the following:
*** Assertion failed: ( FileOffset->QuadPart + (LONGLONG)Length ) <=
SharedCacheMap->SectionSize.QuadPart
*** Source File: w:\nt\private\ntos\cache\up..\cachesub.c, line 256

Now FileOffset->QuadPart and Length are correct, so I assume
SharedCacheMap->SectionSize is set to 256k. I assume this is maintained
by the cache manager, so how do I extend it (if I need to)?

When I initialise caching I set the FileSize to the appropriate size of
2113896448, and I have tried using CcSetFileSizes but get the same
failure.

Any help/advice would be much appreciated.

John.

http://uranus.it.swin.edu.au/~jn/

This is probably due to the same bug in Cc that I found last week.


>>>>> TO ALL DEVELOPERS TO SAVE THEIR TIME AND HEADACHES <<<<<<


The routine CcInitializeCacheMap currently has a bug in how it
determines the size of the section to create.
It takes the SectionSize as FileSizes->AllocationSize, aligned up to
the next 256 KB boundary.
It does not check whether FileSizes->AllocationSize is less than
FileSizes->FileSize (which is a valid combination, and in fact common
for compressed files).
NTFS works around this by maintaining an INVALID AllocationSize in the
FSRTL_COMMON_FCB_HEADER (which is what gets passed to
CcInitializeCacheMap) and keeping the REAL AllocationSize outside the
FSRTL_COMMON_FCB_HEADER.

So I maintain there is a BUG in CcInitializeCacheMap.
The fix should be very easy: simply add a condition like this before
aligning the SectionSize:

if (SectionSize.QuadPart < FileSizes->FileSize.QuadPart)
{
    SectionSize = FileSizes->FileSize;
}

Paul

PS: Guys at Microsoft:
Please let me (and all the other developers) know why this works the
way it currently does.


The VM system has worked this way since (at least) 3.1. I know that I talk
about this behavior (the allocation size, file size, and valid data length)
in my file systems class.

When the VM system creates a section (and this is the Memory Manager) it
relies upon the Allocation Size to establish the size of the section. The
file size then defines the end of the defined data region (so anything
beyond that file size is zeroed before the page is mapped into an
application’s address space.) Thus, you could start out with a file that
has a LARGE “allocation size” but no data. The VM system would then allow
the application to write into the mapped region, all the way up to the full
size of the section.

The “bug” here is that you are relying upon the English meaning of the word
“Allocation Size”. As is frequently the case in programming, we choose
mnemonic names for variables but later the use of those variables becomes
inconsistent with the mnemonic name. Witness my favorite victim on this
particular point: Fast I/O (which may not be fast, and frequently has
NOTHING whatsoever to do with I/O). It is unfortunate, and it makes
programming file systems more difficult, but relying upon the name of a
variable as a form of documentation about how the variable is used is
unreliable.

Or, I suppose one could argue that your interpretation of “AllocationSize”
is incorrect - that this has to do with the amount of section space that
must be allocated to contain the file, not the file system’s allocation of
space. I think that’s a fairly poor alternative, but to call it a “bug”
simply because it doesn’t work the way you think it works is also quite
harsh.

In Windows 2000 there is an API that allows application programs to query
the amount of on-disk storage allocated for a file; this is unrelated to the
value stored in the AllocationSize field of the common header.

So, the rule I use is:

AllocationSize >= FileSize >= ValidDataLength

You can “get away” with ValidDataLength > FileSize, but it causes
unnecessary page fault behavior. You cannot, however, “get away” with
AllocationSize < FileSize. That eventually leads to problems, as you have a
section (a container of information in the VM system) that is smaller than
the amount of data stored within it (the FileSize.) It is like trying to
put 10 kilos of rocks in a 5 kilo bag…

Regards,

Tony Mason
Consulting Partner
OSR Open Systems Resources, Inc.
http://www.osr.com

I agree with your explanation but some things are then unclear to me.

Why, then, does CcInitializeCacheMap accept parameters where
AllocationSize < FileSize in the FileSizes structure?
This leads to a later page fault in MmMapViewInSystemCache in the
following scenario.

Suppose
ValidDataLength = FileSize
and
ALIGN_UP(AllocationSize, VACB_MAPPING_GRANULARITY) <
ALIGN_UP(FileSize, VACB_MAPPING_GRANULARITY)

Then if you try to access a cached region any part of which lies above
the aligned AllocationSize (a legitimate access, because the offset is
still less than or equal to FileSize and ValidDataLength, so there must
be valid data in the cache), a page fault occurs in
MmMapViewInSystemCache.

Explanation:

The cache manager tries to map the VACB for this region into the system
cache address space and thus calls MmMapViewInSystemSpace.
This routine performs some checks (but unfortunately not that
SectionOffset + ViewSize is less than SectionSize) and then tries to
reach the SUBSECTION mapping the region, starting from the
CONTROL_AREA. But the last subsection has a NULL next-subsection
pointer, so the next iteration of the search loop tries to access an
offset in the range 0 to sizeof(SUBSECTION). That address is always
below MM_LOWEST_USER_ADDRESS, thus generating a page fault.

So I must call the omission of the lines of code I showed in my
previous mail (or at least of a check that raises some invalid status)
a programmer's error, and thus a BUG.

Paul


Why are all of you who are interested in this topic still silent?

Tony is correct as usual.

The only thing I can add to this discussion is that kernel components
will frequently *not* assert that their callers are in error. To do so
would add unnecessary overhead to the retail (debugged) system, which
is why the checked system is provided. Omission of the safety net is in
no way a bug; we assume correctness. This is the kernel.

This is an excellent example of where a kernel verifier layer for Cc
would be useful.


Hi,

I also experienced this problem. My understanding is that the memory
manager allocates virtual memory in 256K pieces, so you can't map a
range of the file that crosses a 256K boundary. You have to split the
range into parts that do not cross the boundary and pin them
separately. I handle the free-space bitmap in my file system this way;
it is inconvenient, but I didn't find a better way. The FAT file system
does the same in its MoveFile IOCTL. I have never had any problems with
CcSetFileSizes for big files.

Best regards,
Alexei.



I have solved the problem; indeed, I was passing in a value for
AllocationSize that was too small. The thing that threw me was the
rounding up to 256k, which made it look as though the problem was
unrelated to the value I was passing in.

Thanks for your help.

John.



Information Technology Innovation Group
Swinburne University. Melbourne, Australia
http://uranus.it.swin.edu.au/~jn