Suppose we have a volume with very fragmented free space, and we start writing a file on it. At first the file is possibly resident; later it becomes nonresident, described by group runs; later still the group runs don’t fit into one MFT record, so an attribute list attribute is created; then the attribute list itself doesn’t fit into one MFT record and is made nonresident… and one day the group runs describing the attribute list no longer fit into the MFT record.
I have a question: what do the NTFS drivers in the final builds of Windows 2000/Windows XP do in this case?
Many thanks in advance.
Perepechko Andrew.
I seem to recall NTFS will use multiple MFT records for one file in this scenario. I don’t remember where I read this. Perhaps it was Inside the Windows NT File System by Helen Custer. This was later incorporated into subsequent editions of Inside Windows NT.
-----Original Message-----
From: xxxxx@yandex.ru [mailto:xxxxx@yandex.ru]
Sent: Tuesday, August 27, 2002 6:22 AM
To: File Systems Developers
Subject: [ntfsd] Fragmented files on NTFS
>I seem to recall NTFS will use multiple MFT records for one file in this scenario. I don’t remember where I read this. Perhaps it was Inside the Windows NT File System by Helen Custer. This was later incorporated into subsequent editions of Inside Windows NT.
Hello! Thank you for answering me. I found the following in the “Inside Windows 2000” book:
[…In this case, an attribute called the attribute list is added. The attribute list attribute contains the name and type code of each of the file’s attributes and the file reference of the MFT record where the attribute is located. The attribute list attribute is provided for those cases in which a file grows so large or so fragmented that a single MFT record can’t contain the multitude of VCN-to-LCN mappings needed to find all its runs. Files with more than 200 runs typically require an attribute list.]
But there is still a problem: the attribute list, whether resident or not, must describe ALL the records that contain parts of the attributes of the (should I say so?) base MFT record. So what can be done to describe a very large and fragmented attribute list, one whose group runs do not fit in the base record?
I assume there are several possible cases:
- The NTFS driver doesn’t try to handle such cases and reports that not enough space is left on the volume (so my filter should do the same).
- The NTFS driver uses some defragmenting technique (so my filter should do nothing).
- The NTFS driver splits the attribute list and points from the part that belongs to the base record to the part that belongs to a child record, so the attribute list describes itself - I find this very strange, but it would still work.
Thank You.
Perepechko Andrew.
> Suppose we have a volume with a very fragmented free space. We start
> writing a file on this volume. First, it’s possibly resident, later it
> becomes nonresident, described by group runs, more later group runs
> don’t fit into one MFT record and attribute list attribute is created,
> later attribute list doesn’t fit into one MFT record and made
> nonresident… well, one day group runs describing attribute list don’t
> fit into MFT record.
> I have a question: what do drivers in final builds of Windows
> 2000/Windows XP do in this case?
I expect some NTFS routine to raise an exception, which will then be caught in NtfsCommonWrite or the like and will cause the write to fail.
BTW - can you estimate the probability of such an event? I think a crayfish will whistle much sooner :)
For instance, on a usual w2k SystemRoot volume, the only files that use attribute lists are the SystemRoot\SYSTEM32 and SystemRoot\INF directories.
Max
Thank you for your answer, Max. Well, I do agree it’s hardly possible; still, I made some calculations:
Let’s consider a very extreme case where every even (or odd, whichever you like) cluster is occupied and BytesPerCluster = 512. Every MFT record holds no more than 1024/3 = 341 attribute list group runs, so they describe no more than 341*512 = 174592 bytes of an attribute list; that attribute list holds no more than 174592/56 = 3117 MFT record references, and those MFT records hold no more than 3117*(1024/3) = 1062897 group runs of the data attribute. The latter limits the data attribute to 1062897*512 = 544203264 bytes, which is around 519 MB.
So, theoretically, the problem can occur on a volume of just 519*2 = 1038 MB.
I was just wondering at such a, I would say, rather strange file system design, while most other modern file systems, built on a b-tree/extent model, have no such file size restriction as long as enough free space is available.
Andrew.