How can I clear a file's information from the $LogFile after the file is
deleted? Or perhaps force an update of the LFS restart area with a new
checkpoint record?
Reason: small files (< 1KB) are written to the $LogFile with their full file data…
It is my understanding that $LogFile is a circular structure. If the $LogFile is full it will
simply wrap to the start of the file. You do not need to worry about it. However, I have been
working on getting the exact internal (on-disk) structure of $LogFile but have not been
successful yet. If you know then please let me know.
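The closest thing I have found is the partially reverse-engineered layout published by the open-source Linux-NTFS project. A sketch of the restart page header as their headers describe it (two such pages open $LogFile, ahead of the circular logging area; treat the field details as unverified):

#include <stdint.h>

/* Sketch of the $LogFile restart page header as reverse-engineered by
 * the open-source Linux-NTFS project. All fields are little-endian.
 * Details unverified against a live volume. */
#pragma pack(push, 1)
typedef struct {
    uint8_t  magic[4];            /* "RSTR" (or "CHKD" after chkdsk) */
    uint16_t usa_ofs;             /* offset to the update sequence array */
    uint16_t usa_count;           /* word count of that array */
    uint64_t chkdsk_lsn;          /* last LSN found by chkdsk (CHKD pages only) */
    uint32_t system_page_size;    /* byte size of a restart (system) page */
    uint32_t log_page_size;       /* byte size of a log record page */
    uint16_t restart_area_offset; /* offset to the restart area proper */
    int16_t  minor_ver;           /* log file version */
    int16_t  major_ver;
} RESTART_PAGE_HEADER;
#pragma pack(pop)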
Shabbir
----- Original Message -----
From: "Eugene Lomovsky"
Newsgroups: ntfsd
To: "Windows File Systems Devs Interest List"
Sent: Monday, November 24, 2003 9:49 AM
Subject: [ntfsd] $LogFile
Kind of sounds like an attempt to provide security beyond the normal access control level: I doubt that just anyone can scavenge information from the log file.
If that’s the goal, you’ll need to cover a lot of other cases as well, especially including data left on the disk (in now-allocatable space, but that won’t help if it hasn’t yet been over-written) by file truncation, deletion, or defragmentation activity. IOW, you won’t depend upon the OS’s initialize-on-allocate facilities but will instead wipe space (multiple times, with different patterns, if you’re truly paranoid) whenever it’s deallocated.
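To make that concrete, here is a rough user-mode sketch of such a multi-pass wipe using plain Win32 calls. The pattern set, buffer size, and error handling are illustrative, and note that it deliberately does nothing about MFT-resident copies or the $LogFile records under discussion:

#include <windows.h>
#include <string.h>

/* Overwrite a file's allocated space in place with several patterns,
 * then delete it. WRITE_THROUGH/NO_BUFFERING push each pass to the
 * media instead of leaving it in the cache. */
static const BYTE patterns[] = { 0x00, 0xFF, 0xAA };

BOOL WipeAndDelete(LPCWSTR path)
{
    HANDLE h = CreateFileW(path, GENERIC_WRITE, 0, NULL, OPEN_EXISTING,
                           FILE_FLAG_WRITE_THROUGH | FILE_FLAG_NO_BUFFERING,
                           NULL);
    if (h == INVALID_HANDLE_VALUE)
        return FALSE;

    LARGE_INTEGER size;
    if (!GetFileSizeEx(h, &size)) {
        CloseHandle(h);
        return FALSE;
    }

    /* NO_BUFFERING needs sector-aligned buffers and sizes; VirtualAlloc
     * returns page-aligned memory, and 64KB is a sector-size multiple. */
    BYTE *buf = VirtualAlloc(NULL, 65536, MEM_COMMIT | MEM_RESERVE,
                             PAGE_READWRITE);
    if (!buf) {
        CloseHandle(h);
        return FALSE;
    }

    for (size_t p = 0; p < sizeof(patterns); p++) {
        memset(buf, patterns[p], 65536);
        SetFilePointer(h, 0, NULL, FILE_BEGIN);
        for (LONGLONG off = 0; off < size.QuadPart; off += 65536) {
            DWORD written;
            WriteFile(h, buf, 65536, &written, NULL); /* also covers tail slack */
        }
        FlushFileBuffers(h);
    }

    VirtualFree(buf, 0, MEM_RELEASE);
    CloseHandle(h);
    return DeleteFileW(path);
}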
Now, if you’re already doing all that (though it’s not immediately clear how you would be) and the only remaining problem is the possibility that the log file might contain sensitive data, the answer may be that NTFS simply isn’t set up to meet your requirements. This is a common potential drawback with any file system that doesn’t strictly limit itself to in-place updates (especially including log-structured storage, but the performance optimizations afforded by journaling open up similar holes in miniature). The performance overhead of multiply-wiping deallocated space is bad enough with an in-place update mechanism, but it becomes ridiculous when data may migrate on every update; at that point, approaches involving encryption or physically securing storage (behind dependable authentication/authorization OS facilities) become preferable.
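As a minimal illustration of the encryption route, putting a file under EFS from user mode is a single call (assuming an NTFS volume with EFS available; whether that actually closes the resident-small-file hole in the log is something you would have to verify):

#include <windows.h>
#include <stdio.h>

/* Minimal EFS example: ask NTFS to store the file encrypted.
 * The path is hypothetical; link against Advapi32.lib. */
int main(void)
{
    if (!EncryptFileW(L"C:\\secret\\report.txt"))
        fprintf(stderr, "EncryptFile failed: error %lu\n", GetLastError());
    return 0;
}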
Hi, Shabbir!
You wrote on Mon, 24 Nov 2003 10:37:29 -0800 (PST):
SS> It is my understanding that $LogFile is a circular structure. If
SS> the $LogFile is full it will simply wrap to the start of the file.
SS> You do not need to worry about it. However, I have been working on
SS> getting the exact internal (on-disk) structure of $LogFile but have
SS> not been successful yet. If you know then please let me know.
Hi, Bill!
You wrote on Mon, 24 Nov 2003 19:40:50 -0500:
>> Hello, All!
>>
>> How can I clear a file's information from the $LogFile after the file
>> is deleted? Or perhaps force an update of the LFS restart area with a
>> new checkpoint record?
>>
>> Reason: small files (< 1KB) are written to the $LogFile with their
>> full file data…
BT> Kind of sounds like an attempt to provide security beyond the normal
BT> access control level: I doubt that just anyone can scavenge
BT> information from the log file.
Exactly.
BT> If that’s the goal, you’ll need to cover a lot of other cases as
BT> well, especially including data left on the disk (in now-allocatable
BT> space, but that won’t help if it hasn’t yet been over-written) by
BT> file truncation, deletion, or defragmentation activity. IOW, you
BT> won’t depend upon the OS’s initialize-on-allocate facilities but
BT> will instead wipe space (multiple times, with different patterns, if
BT> you’re *truly* paranoid) whenever it’s deallocated.
Not me! It is our clients. %)
BT> Now, if you’re already doing all that (though it’s not immediately
BT> clear how you would be) and the only remaining problem is the
BT> possibility that the log file might contain sensitive data, the
BT> answer may be that NTFS simply isn’t set up to meet your
BT> requirements. This is a common potential drawback with any file
BT> system that doesn’t strictly limit itself to in-place updates
BT> (especially including log-*structured* storage, but the performance
BT> optimizations afforded by journaling open up similar holes in
BT> miniature). The performance overhead of multiply-wiping deallocated
BT> space is bad enough with an in-place update mechanism, but it
BT> becomes ridiculous when data may migrate on *every* update; at that
BT> point, approaches involving encryption or physically securing
BT> storage (behind dependable authentication/authorization OS
BT> facilities) become preferable.
“The log file service (LFS) is a series of kernel-mode routines inside the
NTFS driver that NTFS uses to access the log file. Although originally
designed to provide logging and recovery services for more than one client,
the LFS is used only by NTFS.”
Does this mean that a (more or less) safe way to use the LFS exists?
> Does this mean that a (more or less) safe way to use the LFS exists?
Well, if you’ve written your own file system to take care of all the other possible exposures I mentioned, there might be: you just wouldn’t ever expose sensitive information in the material you logged, just pointers to it. But if you’ve gone that far, writing your own log manager isn’t all that much additional work (it does require some significant research to create a bullet-proof design, but the implementation isn’t hard). You’d also end up with a fully-portable product and avoid any dependence on an internal interface that Microsoft might feel free to change without notice - though you wouldn’t be able to leverage the ability of a single system log (and in particular log ‘forces’) to combine support for multiple file systems on the platform, if that’s of any significance to your client.
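As a toy illustration of that "log pointers, not payload" idea, a home-grown log manager's append path might look something like this (the record format and names are invented for the example; this is not NTFS's format):

#include <stdint.h>
#include <stdio.h>

/* Toy write-ahead log record: it names the object and range that
 * changed, plus an LSN, but deliberately carries no user data, so a
 * scavenged log reveals layout rather than content. */
typedef struct {
    uint64_t lsn;           /* monotonically increasing log sequence number */
    uint64_t file_ref;      /* which file/object the update touched */
    uint64_t start_vcn;     /* first cluster affected */
    uint32_t cluster_count; /* how many clusters */
    uint32_t op;            /* opcode: allocate, free, truncate, ... */
} LOG_RECORD;

static uint64_t next_lsn = 1;

/* Append a record and force it out before the data write it protects
 * is allowed to proceed (the write-ahead rule). */
int log_append(FILE *log, uint64_t file_ref, uint64_t vcn,
               uint32_t count, uint32_t op)
{
    LOG_RECORD rec = { next_lsn++, file_ref, vcn, count, op };
    if (fwrite(&rec, sizeof rec, 1, log) != 1)
        return -1;
    return fflush(log); /* stand-in for a real force to stable storage */
}

A real log manager would also have to handle wrap-around and torn writes in the circular file, but the point stands: the records locate the change, they don't contain it.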
> "The log file service (LFS) is a series of kernel-mode routines inside the
NTFS driver that NTFS uses to access the log file. Although originally
designed to provide logging and recovery services for more than one client,
the LFS is used only by NTFS."
Does it means that exist safe way (more or less) to use LFS?