Is it possible for a user mode application to allocate space on an NTFS
volume for a file but to set its valid data length (not EOF) to something
less than the amount of storage allocated?
The purpose would be to ensure that enough space is available for eventual
writing but to be able to have the file size set correctly when/if the
system crashes.
Our user-mode server gets information from our FSD about the file that is
being written. In order to make sure that there is physically enough space
to write the file to NTFS, we allocate space on the real volume by setting
the end of file to whatever we want to allocate not the amount of data we
might write.
For example, a 1 byte write will result in a 4096 byte file on disk.
Eventually, a set end of file will reduce the file to one byte. But if the
system crashes before the set EOF, we will be left with a file that is
4096 bytes when we come back up.
TIA
GAP
You can't do this with the Win32 API. You can set the allocation size with the native APIs by calling ZwSetInformationFile with a FILE_INFORMATION_CLASS of FileAllocationInformation and FileInformation pointing to a FILE_ALLOCATION_INFORMATION structure. Another option is to call ZwCreateFile with the optional AllocationSize parameter. See your DDK documentation for more details.
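For reference, a user-mode call along those lines might look roughly like the
following. This is only an untested sketch: the MY_-prefixed declarations
simply mirror the DDK definitions (they are not in the Win32 headers), the
function and parameter names are made up, and error handling is minimal.

#include <windows.h>

/* Mirrors the DDK's FILE_ALLOCATION_INFORMATION; FileAllocationInformation
   is value 19 in the FILE_INFORMATION_CLASS enumeration. */
typedef struct _MY_FILE_ALLOCATION_INFORMATION {
    LARGE_INTEGER AllocationSize;
} MY_FILE_ALLOCATION_INFORMATION;

#define MY_FILE_ALLOCATION_INFORMATION_CLASS 19

/* Size-compatible stand-in for the DDK's IO_STATUS_BLOCK. */
typedef struct _MY_IO_STATUS_BLOCK {
    ULONG_PTR Status;
    ULONG_PTR Information;
} MY_IO_STATUS_BLOCK;

typedef LONG (NTAPI *PFN_NtSetInformationFile)(HANDLE FileHandle,
    MY_IO_STATUS_BLOCK *IoStatusBlock, PVOID FileInformation,
    ULONG Length, ULONG FileInformationClass);

/* Reserve 'allocation' bytes for an already-open handle without moving
   EOF or the valid data length. */
LONG ReserveAllocation(HANDLE hFile, LONGLONG allocation)
{
    PFN_NtSetInformationFile pNtSetInformationFile =
        (PFN_NtSetInformationFile)GetProcAddress(
            GetModuleHandleW(L"ntdll.dll"), "NtSetInformationFile");
    MY_FILE_ALLOCATION_INFORMATION fai;
    MY_IO_STATUS_BLOCK iosb;

    if (pNtSetInformationFile == NULL)
        return -1;

    fai.AllocationSize.QuadPart = allocation;
    return pNtSetInformationFile(hFile, &iosb, &fai, sizeof(fai),
                                 MY_FILE_ALLOCATION_INFORMATION_CLASS);
}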
“Greg Pearce” wrote in message news:xxxxx@ntfsd…
>
> Is it possible for a user mode application to allocate space on an NTFS
> volume for a file but to set its valid data length (not EOF) to something
> less than the amount of storage allocated?
Why? Unless it’s a compressed file, you will have at least 1 allocation
unit (defined when you formatted the volume), which is at least 512 bytes.
Even if it is compressed, I think there is a minimum allocation when you
create a file, and I have a vague recollection that there is a minimum size
below which it is stored uncompressed.
>
> The purpose would be to ensure that enough space is available for eventual
> writing but to be able to have the file size set correctly when/if the
> system crashes.
It already does this.
>
> Our user-mode server gets information from our FSD about the file that is
> being written. In order to make sure that there is physically enough space
> to write the file to NTFS, we allocate space on the real volume by setting
> the end of file to whatever we want to allocate not the amount of data we
> might write.
>
> For example, a 1 byte write will result in a 4096 byte file on disk.
> Eventually, a set end of file will reduce the file to one byte. But if the
> system crashes before the set EOF, we will be left with a file that is
> 4096 bytes when we come back up.
The space allocation will always grow in Allocation Size increments,
regardless of the change in file size. Assume an NTFS formatted volume with
default 4K allocation units. For your hypothetical 1 byte file, you have
4096 bytes reserved just for your file’s data by NTFS. If you grow the file
data to 4097 bytes, you have 8192 bytes reserved by NTFS.
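To put numbers on that rounding, a trivial sketch (the helper name is made
up):

/* Round a byte count up to the volume's cluster (allocation unit) size. */
unsigned long long RoundUpToCluster(unsigned long long bytes,
                                    unsigned long long clusterSize)
{
    if (bytes == 0)
        return 0;
    return ((bytes + clusterSize - 1) / clusterSize) * clusterSize;
}

/* RoundUpToCluster(1, 4096)    == 4096
   RoundUpToCluster(4097, 4096) == 8192 */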
Hope this helps.
Phil
–
Philip D. Barila
Seagate Technology, LLC
(720) 684-1842
On Tue, Dec 17, 2002 at 10:16:25AM -0700, Phil Barila wrote:
The space allocation will always grow in Allocation Size increments,
regardless of the change in file size. Assume an NTFS formatted volume with
default 4K allocation units. For your hypothetical 1 byte file, you have
4096 bytes reserved just for your file’s data by NTFS. If you grow the file
data to 4097 bytes, you have 8192 bytes reserved by NTFS.
I was just wondering if this is true for files whose data fits
entirely within the MFT entry for that file? I.e., does NTFS reserve a
4K block/cluster for the file *in case it might* exceed the space available
in the MFT entry, or is the block/cluster allocated *after it exceeds*
the space in the MFT entry?
Thanks,
-nick
No, Phil is simplifying for the sake of the example. There isn't a hard
file size that always fits in the file record, since the number of
attributes on a file affects the free space within it, but if you were
to guesstimate that your average 700-800 byte file would tend to fit,
you'd be right most of the time.
Allocation occurs during the operation that causes the file to grow
outside the free space available in the file record.
This posting is provided “AS IS” with no warranties, and confers no
rights
What Daniel said.
I have no idea what happens when you have a file that doesn’t fit into the
MFT entry, then shrinks until it does. I don’t know if it’s moved to the
MFT, or just left in place. I assume the latter, because the former is
space efficient, but costs you the copy. Daniel?
The point is that you will always have sufficient space unless all the
allocation units on your volume are in use. So trying to outsmart the FS
isn’t gaining you anything. If you are really paranoid, create your file,
set the end however big you want it, then write a byte at the end of it.
That will zero the whole thing, except for what you wrote. Then use your
own end of data mark anywhere inside that space. As far as the FS is
concerned, it’s still as big as you said it was.
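In Win32 terms, that suggestion would come out roughly like the following
(an untested sketch; the function name and error handling are mine):

#include <windows.h>

/* Grow the file to 'size' bytes, then touch the last byte so NTFS
   zero-fills everything before it. The caller keeps its own end-of-data
   marker somewhere inside that space. */
BOOL PreallocateAndZero(HANDLE hFile, LONGLONG size)
{
    LARGE_INTEGER pos;
    DWORD written;
    BYTE last = 0;

    pos.QuadPart = size;
    if (!SetFilePointerEx(hFile, pos, NULL, FILE_BEGIN) || !SetEndOfFile(hFile))
        return FALSE;

    pos.QuadPart = size - 1;
    if (!SetFilePointerEx(hFile, pos, NULL, FILE_BEGIN))
        return FALSE;

    return WriteFile(hFile, &last, 1, &written, NULL);
}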
Phil
Philip D. Barila
Seagate Technology, LLC
(720) 684-1842
“Daniel Lovinger” wrote in message
news:xxxxx@ntfsd…
[snip]
Allocation occurs during the operation that causes the file to grow
outside the free space available in the file record.
[snip]
Yeah, that's the tradeoff, along with the extra metadata transactions.
The flip side, of course, is that you'd get that cluster back, but files
usually don't behave in a way that would make this tremendously
useful. NTFS doesn't currently do this. If you truncate the file back
down to 1 byte, the file will stay outside the file record.
In fact, you can tell this is happening by using
FSCTL_GET_RETRIEVAL_POINTERS: it’ll return no mapping pairs if the file
is within the file record (or zero length).
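From user mode that check comes out roughly as follows (untested sketch;
the function name is mine, and a single-extent output buffer is enough
just to answer resident vs. non-resident):

#include <windows.h>
#include <winioctl.h>

/* TRUE if the stream has at least one extent outside the file record. */
BOOL HasExtents(HANDLE hFile)
{
    STARTING_VCN_INPUT_BUFFER in;
    RETRIEVAL_POINTERS_BUFFER out;
    DWORD bytes;

    in.StartingVcn.QuadPart = 0;
    if (!DeviceIoControl(hFile, FSCTL_GET_RETRIEVAL_POINTERS,
                         &in, sizeof(in), &out, sizeof(out),
                         &bytes, NULL))
    {
        /* ERROR_MORE_DATA means there are more extents than fit in 'out',
           so the stream is certainly non-resident; a resident or
           zero-length file fails without returning any mapping pairs. */
        return GetLastError() == ERROR_MORE_DATA;
    }

    return out.ExtentCount > 0;
}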
This posting is provided “AS IS” with no warranties, and confers no
rights
/* Seek to the last byte of the desired size and write a single zero;
   the file grows to YourSize and NTFS zero-fills everything before it. */
unsigned char Zero = 0;
lseek(fd, YourSize - 1, SEEK_SET);
write(fd, &Zero, 1);
This will do the trick.
Max
> Why? Unless it's a compressed file, you will have at least 1 allocation
> unit (defined when you formatted the volume), which is at least 512 bytes.
IIRC resident streams are never compressed.
Non-resident compressible streams are treated as a sequence of 64KB
"records", each compressed independently. The compression must be good
enough to save at least 1 cluster, or the run will be kept
uncompressed.
Compression/decompression is done in the low-level I/O path of NTFS, below
Cc, so Cc always holds uncompressed data.
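One way to observe this from user mode is GetCompressedFileSize, which
reports the allocated on-disk size rather than the logical size; a rough
sketch:

#include <windows.h>
#include <stdio.h>

/* Print how many bytes the file actually occupies on disk. For a
   compressed stream this is the post-compression, cluster-rounded figure;
   if compression cannot save at least a cluster per 64KB unit, it matches
   the uncompressed allocation. */
void PrintOnDiskSize(const wchar_t *path)
{
    DWORD high = 0;
    DWORD low = GetCompressedFileSizeW(path, &high);

    if (low == INVALID_FILE_SIZE && GetLastError() != NO_ERROR)
        return;  /* query failed */

    printf("%llu bytes on disk\n",
           ((unsigned long long)high << 32) | low);
}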
Max
> in the MFT entry, or is the block/cluster allocated *after it exceeds*
> the space in the MFT entry?
"After it exceeds", for sure.
Max
One would think so, but this doesn’t seem to be the case. It seems
additional file objects are created for compressed files, and compressed
data is also stored.
That's the only explanation I could find for why a typical encryption
filter doesn't work on compressed files.
> Compression/decompression is done in the low-level I/O path of NTFS,
> below Cc, so Cc always holds uncompressed data.
–
Kind regards, Dejan M. www.alfasp.com
E-mail: xxxxx@alfasp.com ICQ#: 56570367
Alfa File Monitor - File monitoring library for Win32 developers.
Alfa File Protector - File protection and hiding library for Win32
developers.
Hello All,
I wrote a filter driver. How do I find out which access rights a file is
being opened with?
For example, in user mode:
FILE *fl;
fl = fopen("data.txt", "rb");
How do I see the equivalent of "rb" in kernel mode? I receive the
IRP_MJ_CREATE request; where is the requested access stored in it?
--
Best regards,
Yury mailto:xxxxx@agtu.secna.ru