Is it possible to make NTFS reuse deleted file disk space first?

When a file is deleted, NTFS marks the file’s disk clusters as
reusable. But when a new file is created and its data is written to the
disk, NTFS writes the data to new areas. Is it possible to force NTFS to
reuse the released clusters first, and if so, how?

Thanks for any input.

Shangwu

You need to either 1) write over the file before deleting it (assuming it’s not a sparse or compressed file), or 2) allocate all free space on the volume and overwrite it afterwards. For sparse or compressed files, the second will be the only reliable option.
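
For illustration, a minimal user-mode sketch of option 1 might look like
the following. It rewrites the primary data stream in place with zeroes
and then deletes the file; the helper name, the 64 KB chunk size and the
zero fill pattern are arbitrary choices for this sketch, and error
handling is trimmed.

#include <windows.h>

/* Overwrite the primary data stream of a file with zeroes, flush the
 * data to the volume, then delete the file.  Only meaningful for files
 * that are neither sparse nor compressed, per the caveat above. */
BOOL OverwriteAndDelete(LPCWSTR path)
{
    HANDLE h = CreateFileW(path, GENERIC_WRITE, 0, NULL, OPEN_EXISTING,
                           FILE_FLAG_WRITE_THROUGH, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return FALSE;

    LARGE_INTEGER size;
    if (!GetFileSizeEx(h, &size)) {
        CloseHandle(h);
        return FALSE;
    }

    static BYTE zeroes[64 * 1024];          /* fill pattern, one chunk */
    LONGLONG remaining = size.QuadPart;

    while (remaining > 0) {
        DWORD chunk = (remaining > (LONGLONG)sizeof(zeroes))
                          ? (DWORD)sizeof(zeroes)
                          : (DWORD)remaining;
        DWORD written = 0;
        if (!WriteFile(h, zeroes, chunk, &written, NULL) || written == 0) {
            CloseHandle(h);
            return FALSE;
        }
        remaining -= written;
    }

    FlushFileBuffers(h);                    /* push the zeroes to disk */
    CloseHandle(h);
    return DeleteFileW(path);               /* clusters are freed here */
}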

- S


How about encrypted files? If you write zeroes from the user’s context so
that they are encrypted, a large block of encrypted zeroes might enable
analysis of the key.


Secure disk wipe applications typically don’t just write a run of zeroes and leave things be. Most typically have random data as the final overwrite pass.
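
As a rough sketch of option 2 from the earlier post, combined with the
random-data point: grow one temporary file until the volume reports it is
full, writing CNG-generated random data as you go, then delete it. The
helper name and the temp-file path are placeholders, and cluster tips and
small MFT-resident files are not covered by this.

#include <windows.h>
#include <bcrypt.h>
#pragma comment(lib, "bcrypt.lib")

/* Consume the free space of a volume with random data by growing a
 * single temporary file until the disk is full, then delete the file.
 * tempPath names a file on the target volume, e.g. L"X:\\wipe.tmp". */
BOOL WipeFreeSpace(LPCWSTR tempPath)
{
    HANDLE h = CreateFileW(tempPath, GENERIC_WRITE, 0, NULL, CREATE_ALWAYS,
                           FILE_ATTRIBUTE_NORMAL | FILE_FLAG_WRITE_THROUGH,
                           NULL);
    if (h == INVALID_HANDLE_VALUE)
        return FALSE;

    static BYTE buffer[1024 * 1024];        /* 1 MB of random data per write */

    for (;;) {
        if (!BCRYPT_SUCCESS(BCryptGenRandom(NULL, buffer, sizeof(buffer),
                                            BCRYPT_USE_SYSTEM_PREFERRED_RNG)))
            break;

        DWORD written = 0;
        if (!WriteFile(h, buffer, sizeof(buffer), &written, NULL)) {
            if (GetLastError() == ERROR_DISK_FULL)
                break;                      /* free space is now consumed */
            CloseHandle(h);
            return FALSE;
        }
    }

    FlushFileBuffers(h);
    CloseHandle(h);
    return DeleteFileW(tempPath);           /* release the space again */
}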

- S


I had thought most used fixed alternating patterns in some passes and
random data in others, but zeroes in the final pass to allow disk image
programs to minimize storage requirements. Truly random data in the last
pass would indeed solve the key security problem, if that is what they do.


Thank you for the input.

My goal is to keep the disk image (from the first sector to the last used
sector) as small as possible.
Can we influence or force NTFS to reuse the first run in its free-space
list and split the written data into separate runs, instead of finding a
large run at the end of the free-space list?

Shangwu


Generally, you use an initialization vector to “skew” subsequent
encryptions of blocks in order to add additional entropy into the
system; this prevents multiple blocks of zeros encrypted with the same
key from producing the same output data.

That additional information can be surprisingly simple - for example, I
often see people use the offset into the object being encrypted for the
initialization vector.
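
As a concrete illustration of that offset-as-IV scheme (purely a sketch,
not a recommendation - the low entropy of the offset is exactly what gets
debated below), the IV derivation and a CBC encryption of one block with
CNG might look like this. The function names and the caller-supplied
256-bit key are invented for the example.

#include <windows.h>
#include <bcrypt.h>
#include <string.h>
#pragma comment(lib, "bcrypt.lib")

/* Build a 16-byte AES-CBC IV from the byte offset of the block inside
 * the object being encrypted.  The offset occupies only the first 8
 * bytes, which is why it carries so little entropy. */
static void IvFromOffset(ULONGLONG byteOffset, UCHAR iv[16])
{
    memset(iv, 0, 16);
    memcpy(iv, &byteOffset, sizeof(byteOffset));
}

/* Encrypt one block (a multiple of 16 bytes) in place with AES-256-CBC
 * using the offset-derived IV.  Key setup is done per call to keep the
 * sketch short; real code would cache the handles.  Passing NULL for
 * the key-object buffer relies on Windows 7+ behavior. */
NTSTATUS EncryptBlockAtOffset(const UCHAR key[32], UCHAR *block,
                              ULONG blockLen, ULONGLONG byteOffset)
{
    BCRYPT_ALG_HANDLE hAlg = NULL;
    BCRYPT_KEY_HANDLE hKey = NULL;
    UCHAR iv[16];
    ULONG cbResult = 0;
    NTSTATUS status;

    status = BCryptOpenAlgorithmProvider(&hAlg, BCRYPT_AES_ALGORITHM, NULL, 0);
    if (!BCRYPT_SUCCESS(status))
        return status;

    /* CBC is the provider default, but make it explicit. */
    status = BCryptSetProperty(hAlg, BCRYPT_CHAINING_MODE,
                               (PUCHAR)BCRYPT_CHAIN_MODE_CBC,
                               sizeof(BCRYPT_CHAIN_MODE_CBC), 0);
    if (BCRYPT_SUCCESS(status))
        status = BCryptGenerateSymmetricKey(hAlg, &hKey, NULL, 0,
                                            (PUCHAR)key, 32, 0);
    if (BCRYPT_SUCCESS(status)) {
        IvFromOffset(byteOffset, iv);
        /* No padding flag: blockLen must already be block-aligned. */
        status = BCryptEncrypt(hKey, block, blockLen, NULL, iv, sizeof(iv),
                               block, blockLen, &cbResult, 0);
    }

    if (hKey)
        BCryptDestroyKey(hKey);
    BCryptCloseAlgorithmProvider(hAlg, 0);
    return status;
}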

Tony
OSR

Secure wipe utilities do not function well in the presence of
wear-leveling disk drives, such as SSD drives or USB thumb drives,
because such drives do not behave the way that “traditional” hard disk
drives behave. It’s a common problem I see in such utilities and while
I’ve often been dismissed in the past (“nobody uses SSD drives”) that
position is increasingly difficult to maintain.

Tony
OSR

Yes, this is a common technique and has some name I can’t remember this late
at night. The main problem with using an offset into a file to act as that
value is that very few bits are set and are predictable. If that offset is
hashed that can help, but doing this correctly is for someone who
understands encryption much better than I do. I just understand enough to
know I can implement anything that someone who knows the techniques can
design, but I sure don’t want to try to do the design myself. If you are
encrypting a file that is always accessed sequentially such as an email you
can use the results of each block to ‘seed’ the next block, but it will
prevent random access.

Ramble mode on:
I remember the old technique of using zip to create an uncompressed
archive whose only purpose is to hold all the files in a single file.
That file is then passed into zip again to create a new zip that is
compressed. Compression is greatly improved compared to compressing each
file separately. This is another place where ‘random access’ is
inhibited, just as with the encryption technique above.
Ramble mode off:

“Tony Mason” wrote in message news:xxxxx@ntfsd…
Generally, you use an initialization vector to “skew” subsequent
encryptions of blocks in order to add additional entropy into the
system; this avoids multiple blocks of zeros encrypted using the same
key from generating the same output data.

That additional information can be surprisingly simple - for example, I
often see people use the offset into the object being encrypted for the
initialization vector.

Tony
OSR

Having done some work with flash memory, the most important information
required for a ‘full drive wipe’ would be knowing how many sectors are
really in the drive. Some of the early flash devices had more blocks than
were exposed via the filesystem. When each write occurred it would be
placed in the oldest free block before the other block was erased and placed
onto the free list. If each pass was designed to write that number of
blocks, you could obtain full coverage of the media. I also wonder what the
correct erase requirements of flash memory might be. With magnetic media
based upon iron oxide (maybe other forms of rust too), it is fairly well
known how to erase it correctly, but I have not seen any discussion about
flash memory and how to erase it properly. Another consideration is bad
blocks and early wear on the device if done frequently. I think in some way
I agree with the DOD about just destroying the drives when security is a
major consideration.

“Tony Mason” wrote in message news:xxxxx@ntfsd…
Secure wipe utilities do not function well in the presence of
wear-leveling disk drives, such as SSD drives or USB thumb drives,
because such drives do not behave the way that “traditional” hard disk
drives behave. It’s a common problem I see in such utilities and while
I’ve often been dismissed in the past (“nobody uses SSD drives”) that
position is increasingly difficult to maintain.

Tony
OSR

Shangwu wrote:

Thank you for the input.

My goal is to keep the disk image (from the first sector to the last used
sector) as small as possible.
Can we influence or force NTFS to reuse the first run in its free-space
list and split the written data into separate runs, instead of finding a
large run at the end of the free-space list?

There is no external way to bias NTFS allocations, with the single
exception of the “volsnap handshake” in Vista SP1.

NTFS does tend to allocate space that has been recently deleted
preferentially. This is really an implementation artifact, and was not
intended.

But it sounds to me like this problem is one encountered by “disk in a
file” drivers who misrepresent the volume size to the filesystem. If a
filesystem is told it has 60 GB to play with (when backed by a 5 GB file),
it will play with blocks within that 60 GB. If it was told it had only
5 GB to play with, its allocations would be confined to a smaller space.
This means, for example, that it will fragment allocations to fit
within the smaller space rather than using contiguous allocations and
forcing the backing file to be extended.

A different approach to the problem would be to start with a small
volume size and call FSCTL_EXTEND_VOLUME on demand to ensure the
filesystem is operating on a volume size which is not hugely dissimilar
to the size of its backing file.
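
For illustration, a minimal user-mode sketch of that second approach
might look like the following. It assumes the underlying partition has
already been grown (for example via IOCTL_DISK_GROW_PARTITION), and that
the FSCTL_EXTEND_VOLUME input buffer is, as documented, a single LONGLONG
holding the new total sector count of the volume; the helper name is
invented, and this is a sketch rather than production code.

#include <windows.h>
#include <winioctl.h>

/* Grow the file system on a volume whose underlying partition has
 * already been extended.  volumePath is e.g. \\.\D: and the call
 * requires administrative rights. */
BOOL ExtendFileSystem(LPCWSTR volumePath, LONGLONG newTotalSectors)
{
    HANDLE h = CreateFileW(volumePath, GENERIC_READ | GENERIC_WRITE,
                           FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                           OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return FALSE;

    DWORD bytesReturned = 0;
    BOOL ok = DeviceIoControl(h, FSCTL_EXTEND_VOLUME,
                              &newTotalSectors, sizeof(newTotalSectors),
                              NULL, 0, &bytesReturned, NULL);
    CloseHandle(h);
    return ok;
}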

- M


This posting is provided “AS IS” with no warranties, and confers no rights

I seem to recall that there was some interesting public documentation on how BitLocker addressed this; might be worth looking around for.

- S

-----Original Message-----
From: David Craig
Sent: Friday, February 06, 2009 00:11
To: Windows File Systems Devs Interest List
Subject: Re:[ntfsd] Is it possible to make NTFS reuse deleted file disk space first?

Yes, this is a common technique and has some name I can’t remember this late
at night. The main problem with using an offset into a file to act as that
value is that very few bits are set and are predictable. If that offset is
hashed that can help, but doing this correctly is for someone who
understands encryption much better than I do. I just understand enough to
know I can implement anything that someone who knows the techniques can
design, but I sure don’t want to try to do the design myself. If you are
encrypting a file that is always accessed sequentially such as an email you
can use the results of each block to ‘seed’ the next block, but it will
prevent random access.

Ramble mode on:
I remember the old technique of using zip to create a zip file not doing
compression on the zip but only as a single file to contain all the files.
Then that file is passed into zip again to create a new zip that is
compressed. Compression is greatly enhanced compared to where each file is
compressed separately. This is another place where ‘random access’ is
inhibited just as with the encryption technique above.
Ramble mode off:

“Tony Mason” wrote in message news:xxxxx@ntfsd…
Generally, you use an initialization vector to “skew” subsequent
encryptions of blocks in order to add additional entropy into the
system; this avoids multiple blocks of zeros encrypted using the same
key from generating the same output data.

That additional information can be surprisingly simple - for example, I
often see people use the offset into the object being encrypted for the
initialization vector.

Tony
OSR


NTFSD is sponsored by OSR

For our schedule of debugging and file system seminars
(including our new fs mini-filter seminar) visit:
http://www.osr.com/seminars

To unsubscribe, visit the List Server section of OSR Online at http://www.osronline.com/page.cfm?name=ListServer

Thanks Malcolm for your helpful posting.

Shangwu


I’m not an expert in the encryption space, either, but I’ve done a fair
amount of reading because I need to understand enough to have
intelligent conversations with customers. From what I can tell based
upon those readings, a fairly predictable IV doesn’t matter to the
security of AES, since all you are doing is changing the initial
condition and that just makes the AES dials spin differently even for
the same input data set; the IV does not increase the strength of AES,
it merely eliminates a certain class of analysis. But then again, if I
get a number theory guy that suddenly comes back and says this weakens
things in some way, there are plenty of other options (and the CBC chain
you describe is, again, from my readings, not considered to be any more
secure, although it makes working with big files very complicated.)

I’m a strong proponent in this space of not inventing your own tech.
I’ve been working with Kerberos for 20 years now, and I point to it as
an excellent example of “this stuff is very hard to get right, don’t do
it yourself and repeat 20 years of mistakes.” I start reading about
utilizing the power consumption information of the CPU to provide
analysis of AES and my own brain overheats just thinking about it (who
knew that Intel had added AES specific SIMD instructions in some of
their CPUs? I hadn’t realized it, but it helps deal with the power
analysis issue.) Earlier today I read a comment that AES is considered
strong enough for sensitive but not classified information, suggesting
there are stronger ciphers for the classified stuff. Now I see there
are IEEE (draft) variants for the block encoding using AES that further
extend it specifically for storage media. Clearly, there are people
that are worrying about this stuff. Of course, none of this does any
good if you just plug your iPod in and copy the data to it - the
uber-crypto someone used on the hard disk is pretty much useless if it
isn’t used on the iPod (or SD card - anyone else notice that the new
SDXC supports 2 TB of storage? And they used exFAT as the file system!)

My goal has been to simply implement an infrastructure in which I can
allow others to add file level encryption easily. I’ve never seen
anyone on here saying “I’m trying to build a simple encryption filter
that allows for expansion of each data block by an arbitrary amount to
account for the specialized mil-spec encryption algorithms needed by my
customers.” But I’ve been sitting around thinking about this problem
and how to make it easy to implement inside Windows for 10+ years now.
I’ve yet to see any of our customers hit this level of sophistication.
That probably just means there are still prospective customers out there
to whom I haven’t spoken yet (which is a good thing, since that’s what
it takes to keep the doors open here at OSR.)

Let’s face it - encrypting troop deployment files with AES with a weak
IV generation mechanism when you stick it on your iPod is still better
than not encrypting it at all. Users are ultimately the weakest part of
the security chain. After all, at some stage it becomes easier to
kidnap your parents/partner/kids and threaten to kill them if you don’t
give us your dongle and password than it is to try and build a
multi-billion dollar computer to brute force break your crypto…

It’d be a great business if it weren’t for the users.

Tony
OSR

> Let’s face it - encrypting troop deployment files with AES with a weak
> IV generation mechanism when you stick it on your iPod is still better
> than not encrypting it at all. Users are ultimately the weakest part
> of the security chain. After all, at some stage it becomes easier to
> kidnap your parents/partner/kids and threaten to kill them if you
> don’t give us your dongle and password than it is to try and build a
> multi-billion dollar computer to brute force break your crypto…

+1

It’s even easier to pay them, and just asking them for their password
works a lot of the time as well.

mm


From my days in the government, I know that the NSA (“no such agency”)
controls encryption used by the military and other agencies very closely.
They do not describe the implementation of their algorithms to those who
use them, or even to the repair folks. Much of it is done in hardware
available only from a few (one?) vendors, where it must be available for
more general use than the gear delivered by the NSA itself. No one who
knows anything about it will talk about it, and anyone needing access to
that material has to undergo a very intensive investigation and may even
require frequent polygraph testing. Good luck getting that level of
access; it is very limited, and if the NSA still has some of the rules I
heard about, it requires use of their hardware. I heard that hardware has
self-destruct technologies in the sealed circuits, but my knowledge is
based upon experience from quite a while ago.


Yeah? AES is a Type 1 encryption algorithm, which would indicate that it is indeed acceptable for use with classified documents. It can be used to encrypt Top Secret documents, as long as 256-bit keys are used (and other safeguards are met).

http://www.nsa.gov/ia/programs/suiteb_cryptography/index.shtml

I’m not saying there aren’t “stronger” ciphers for classified documents. Rather, I’m suggesting that AES has been certified as being acceptable when appropriate equipment and keys have been used.

Peter
OSR

> I’m not saying there aren’t “stronger” ciphers for classified documents.

There was a rumour that Serpent is stronger, but AES is rather fast and very simple (software implementation is simpler than DES’s).


Maxim S. Shatskih
Windows DDK MVP
xxxxx@storagecraft.com
http://www.storagecraft.com

Also using 256-bit AES for highly classified files (via CESG)
