random write causing excessive paging

Dear Developers

I have a problem in my application, which benchmarks a storage device. The
performance is measured by writing a file to the disk from the application.

Steps I follow:

  1. First I write 64 KB at the starting offset of the file.
  2. Then I calculate a random block offset beyond that initial 64 KB, set the
    file position, and write to that block (see the sketch below).

If I write the file sequentially I do not see any paged writes, but when I
write to random locations in the file I see paged writes from the start of
the file up to the point I have just written (monitored with FileMon from
Sysinternals). Is this expected behaviour? It is causing excessive delay in
my benchmarks on Windows XP.

Is there any way I can work around this problem? I tried the
FILE_NO_INTERMEDIATE_BUFFERING and FILE_RANDOM_ACCESS flags when calling
CreateFile (…), with no help.

Best Regards,

Britto.E.V (Engineer - Software)
SCM Microsystems India (P) Ltd., Chennai.

“An inconvenience is only an adventure wrongly considered; an adventure is
an inconvenience rightly considered.” - Gilbert Keith Chesterton
(1874-1936)

I take it this is an NTFS file system? NTFS has a concept called valid data length, which represents the offset of the last byte written in the file. This is distinct from end of file, which is greater than or equal to valid data length.

Basically, valid data length provides a lazy-evaluation mechanism whereby the file won't be zero-filled until absolutely necessary. If you try to read a region of the file between the valid data length and the end of file, the system will automatically zero-fill that portion of your buffer. If you try to write to a region of the file beyond the valid data length, the system will zero-fill the portion of the file between the former valid data length and the beginning of your write. (This is what causes the additional paging writes you see.)

Perhaps your benchmark program needs to initialize the file with a pattern before beginning the test? If you want to avoid warming the cache during this phase, you will need an option such as FILE_NO_INTERMEDIATE_BUFFERING.
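For what it's worth, the Win32 flag that corresponds to the native
FILE_NO_INTERMEDIATE_BUFFERING create option is FILE_FLAG_NO_BUFFERING on
CreateFile. A rough sketch of such a pre-initialization pass might look like
this (the path, file size, and block size are placeholders; unbuffered I/O
requires sector-aligned buffers, offsets, and transfer sizes):

#include <windows.h>

#define BLOCK_SIZE  (64 * 1024)
#define FILE_SIZE   (64 * 1024 * 1024)   /* placeholder 64 MB test file */

/* Pre-write the whole file once so that later random writes never land
 * beyond the highest previously written offset, and therefore never
 * trigger the zero-fill (paging) writes. FILE_FLAG_NO_BUFFERING keeps
 * this pass from warming the cache. */
BOOL PreinitializeFile(const char *path)
{
    HANDLE hFile;
    DWORD  written, offset;
    /* FILE_FLAG_NO_BUFFERING needs a sector-aligned buffer; VirtualAlloc
     * returns page-aligned memory, which is sufficient. */
    void *buffer = VirtualAlloc(NULL, BLOCK_SIZE, MEM_COMMIT, PAGE_READWRITE);

    if (buffer == NULL)
        return FALSE;
    FillMemory(buffer, BLOCK_SIZE, 0xAA);        /* write a pattern */

    hFile = CreateFile(path, GENERIC_WRITE, 0, NULL, CREATE_ALWAYS,
                       FILE_FLAG_NO_BUFFERING, NULL);
    if (hFile == INVALID_HANDLE_VALUE) {
        VirtualFree(buffer, 0, MEM_RELEASE);
        return FALSE;
    }

    for (offset = 0; offset < FILE_SIZE; offset += BLOCK_SIZE)
        WriteFile(hFile, buffer, BLOCK_SIZE, &written, NULL);

    CloseHandle(hFile);
    VirtualFree(buffer, 0, MEM_RELEASE);
    return TRUE;
}

On NTFS under Windows XP and later, SetFileValidData can also move the valid
data length forward without writing the data, though it requires the
SE_MANAGE_VOLUME_NAME privilege.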

Rob

Thank you for the reply. The file system I am using is FAT32 and the OS is
Windows XP.

I was able to find FILE_NO_INTERMEDIATE_BUFFERING only in the DDK and not in
MSDN, but I tried it with CreateFile anyway after defining the value locally.
I am still seeing the thrashing. I also used the FILE_RANDOM_ACCESS flag, to
no avail.

Initialising the file before benchmarking is a pain, as it would mean writing
the file once and then overwriting it. Is there at least some way we can
calculate the paged-write overhead?

Best Regards,

Britto.E.V (Engineer - Software)
SCM Microsystems India (P) Ltd., Chennai.

Although FAT doesn't implement the concept of valid data length, you may see similar paging behavior on FAT. In FAT's case, if you request a write that starts beyond the end-of-file marker, FAT will zero the region of the file between the old end-of-file marker and the beginning of your write.

If you perform your sequential test first, you won't have this problem. Of course there are ways to measure the paging I/O, but nothing exactly trivial.
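One rough approach, sketched below, is to sample a disk performance counter
with PDH while the test runs and compare the total bytes the system wrote to
disk against the bytes the benchmark itself issued; the difference
approximates the zero-fill/paging overhead. This assumes the English counter
path \PhysicalDisk(_Total)\Disk Write Bytes/sec resolves on the test machine
and that the benchmark is packaged as a thread routine:

#include <windows.h>
#include <pdh.h>
#pragma comment(lib, "pdh.lib")

/* Integrate "\PhysicalDisk(_Total)\Disk Write Bytes/sec" roughly once per
 * second while the benchmark thread runs.  The returned total includes the
 * lazy-writer / zero-fill traffic that never shows up in the benchmark's
 * own WriteFile calls. */
double MeasureDiskWriteBytes(LPTHREAD_START_ROUTINE benchmark)
{
    PDH_HQUERY            query;
    PDH_HCOUNTER          counter;
    PDH_FMT_COUNTERVALUE  value;
    double                totalBytes = 0.0;
    HANDLE                thread;
    DWORD                 tid;

    if (PdhOpenQuery(NULL, 0, &query) != ERROR_SUCCESS)
        return -1.0;
    PdhAddCounter(query, "\\PhysicalDisk(_Total)\\Disk Write Bytes/sec",
                  0, &counter);
    PdhCollectQueryData(query);                 /* prime the counter */

    thread = CreateThread(NULL, 0, benchmark, NULL, 0, &tid);
    if (thread == NULL) {
        PdhCloseQuery(query);
        return -1.0;
    }

    /* Sample about once per second until the benchmark thread exits. */
    while (WaitForSingleObject(thread, 1000) == WAIT_TIMEOUT) {
        PdhCollectQueryData(query);
        if (PdhGetFormattedCounterValue(counter, PDH_FMT_DOUBLE,
                                        NULL, &value) == ERROR_SUCCESS)
            totalBytes += value.doubleValue;    /* bytes/sec over ~1 s */
    }

    CloseHandle(thread);
    PdhCloseQuery(query);
    return totalBytes;
}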
