“Joseph M. Newcomer” wrote in message news:xxxxx@windbg…
Largely, the answer to the question about piecing together the file is Not
A Chance. You might find random text left in memory, but the relationship
of that random text to the disk has been obliterated.
I’ll preface this by saying I have no idea what raj is up to or what his
requirements are (though I suspect that they’re interesting/insane :)), but
wanted to make a general comment here.
Sometimes just finding pieces of data can be enough. A text file is really
just a simple example of a memory mapped file because most people read their
text files with Notepad (which happens to use memory mapping). Imagine,
though, that you wanted to see which executables were mapped in the imaged
system: seeing the bits of them that are in memory might tell you a whole
lot about the state of that system. Also, in this case you don’t really care
about the data on disk, because what you really want to know is what was
executing in memory.
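To make the memory-mapped-file point concrete, here is a minimal sketch (in
Python, with a made-up file name) of mapping a file into a process’s address
space. Once mapped, the file’s contents live in physical memory pages, which
is exactly what an imaging tool would see:

```python
import mmap
import os

# Create a small text file to map (hypothetical example file).
with open("notes.txt", "wb") as f:
    f.write(b"hello from a mapped file\n")

# Map the file into the process's address space. Reads go through the
# page cache, so the file's pages end up resident in physical memory,
# which is what a physical-memory image would capture.
with open("notes.txt", "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        first_line = mm.readline()

print(first_line)

os.remove("notes.txt")
```

Notepad-style readers get this behavior for free; the point is just that
“reading a text file” and “having its pages in RAM” are the same thing here.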
But in general, you can assume that memory is random garbage and if you
should be so fortunate as to recognize ANYTHING in the memory dump, you
treat that as a low-probability event.
If you’re just scanning the raw contents of the dump for patterns then,
sure, who knows what that junk is. However, if you follow the O/S data
structures and happen upon the contents then you can have a high level of
confidence that what you are looking at is what was being executed on the
system.
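The “raw pattern scan” approach contrasted here with walking the O/S data
structures can be sketched as follows. This is a toy Python example over a
fabricated buffer; a real dump scanner would validate far more than a
two-byte magic before trusting a hit:

```python
# Hypothetical raw "dump" buffer with an MZ/PE-like fragment planted in it.
dump = bytearray(4096)
dump[1000:1002] = b"MZ"          # DOS header magic
dump[1060:1064] = b"PE\x00\x00"  # PE signature somewhere after it

def find_mz_offsets(buf):
    """Return the offset of every 'MZ' signature in the buffer."""
    hits, start = [], 0
    while True:
        i = buf.find(b"MZ", start)
        if i < 0:
            return hits
        hits.append(i)
        start = i + 1

offsets = find_mz_offsets(bytes(dump))
print(offsets)  # raw pattern hits only; no guarantee they are real headers
```

A hit found this way is exactly the low-confidence case: any two bytes can
spell “MZ”. Following the O/S structures to a mapped section and then reading
its pages is what raises the confidence.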
As usual, the answer to the question really depends upon what the intention
is.
-scott
–
Scott Noone
Consulting Associate and Chief System Problem Analyst
OSR Open Systems Resources, Inc.
http://www.osronline.com
It seems things have not changed much. Back in the 1960s, I had to locate
missing records in ISAM databases by reading the hexadecimal dumps from
OS/360. The idea was that if we knew what records were left behind, we
might have some idea of how the users were triggering errors like access
faults.
And in the 1970s, one of our support people was an expert at patching disks
back together after a crash garbaged the root directory (we didn’t have the
notion of transacted files or directories in those days).
In the late 1970s, MS-DOS trashed my hard drive hours before I was scheduled
to go to a client site, and although I did daily backups, I was about to
lose a whole day of intensive editing. I pieced together the files from the
traces left on the disk. Not one of my more fun evenings.
Largely, the answer to the question about piecing together the file is Not
A Chance. You might find random text left in memory, but the relationship
of that random text to the disk has been obliterated. You could find pages
that have already been committed to disk but not reused, pieces that have
not yet been committed, etc., and note that the “file” on the disk might
have reverted to the pre-modification copy rather than the modified copy
because the “transaction” that updated the directory to reflect the new file
has been aborted due to the reboot.
Also, you have no idea what “generation” the random data is from: is it from
the most recent reboot, or six reboots back? At what point are you seeing it
(after the reboot completes, or by getting a boot-time driver to run early in
the boot process, which is still fairly late as far as preserving memory
contents goes)? This doesn’t even begin to address questions such as whether
the file was compressed or encrypted (or both) and what you are seeing is the
plaintext (hint: this is actually a security hole, which many hyperaware
security experts already know about).
Bottom line: if you care about the file contents, deal with them in some
other way. For example, turn off file buffering for that file, write it out
in 512-byte chunks aligned on 512-byte boundaries, open the file right
before starting the write, and close it immediately after. These steps make
logging a bit more robust, with a serious performance penalty. But in
general, you can assume that memory is random garbage and if you should be
so fortunate as to recognize ANYTHING in the memory dump, you treat that as
a low-probability event.
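A rough sketch of that logging pattern, in Python with a made-up file name.
On Windows the real mechanism would be CreateFile with FILE_FLAG_NO_BUFFERING
and FILE_FLAG_WRITE_THROUGH, so treat this portable version as an
approximation of the pattern, not the exact API:

```python
import os

SECTOR = 512  # unbuffered I/O must be done in sector-sized, aligned units

def robust_log(path, record: bytes):
    """Pad the record to a sector multiple, then open-write-flush-close,
    so the data spends as little time as possible in volatile buffers.
    (Portable approximation; the Windows version would use CreateFile
    with FILE_FLAG_NO_BUFFERING | FILE_FLAG_WRITE_THROUGH.)"""
    padded = record + b"\x00" * (-len(record) % SECTOR)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND)
    try:
        os.write(fd, padded)
        os.fsync(fd)  # push the data through the OS cache to the device
    finally:
        os.close(fd)
    return len(padded)

n = robust_log("app.log", b"engine started")
print(n)
os.remove("app.log")
```

The open/close per record and the fsync are what buy the robustness; they are
also exactly where the performance penalty comes from.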
One of the exercises worth thinking about is how to implement a robust
transacted file system in an environment which does not already have a
robust transacted file system (these ideas date back to the 1970s, when I
first encountered them). Hoping to re-create anything from whatever might
be left in RAM after a reboot is essentially a pointless exercise.
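As a hint at that exercise, one classic building block of a transacted file
update (write a shadow copy, flush it, then atomically rename it over the
original) can be sketched in Python; the file names here are made up for
illustration:

```python
import os
import tempfile

def atomic_write(path, data: bytes):
    """Write data to a temp file in the same directory, flush it to disk,
    then atomically rename it over the target. A reader sees either the
    old contents or the new, never a torn mixture, which is the essence
    of a transacted file update."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        os.write(fd, data)
        os.fsync(fd)  # make the shadow copy durable before committing
    finally:
        os.close(fd)
    os.replace(tmp, path)  # the atomic commit point

atomic_write("state.cfg", b"version=2\n")
with open("state.cfg", "rb") as f:
    contents = f.read()
print(contents)
os.remove("state.cfg")
```

If the system crashes before the rename, the old file is intact; after it,
the new one is. Everything else in a transacted file system is bookkeeping
layered on this idea.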
joe
-----Original Message-----
From: xxxxx@lists.osr.com
[mailto:xxxxx@lists.osr.com] On Behalf Of Tim Roberts
Sent: Wednesday, August 03, 2011 12:39 PM
To: Kernel Debugging Interest List
Subject: Re: [windbg] Re:dumping contents of .txt file
raj wrote:
So can we now find the contents of some obscure file still lingering in
memory after several reboots?
Can we piece together the whole file??
I see this dumped file lingering in memory?? hyperspace?? matrix
It’s not impossible. As long as the RAM has not lost power, the
previous contents will be retained.
so how can I go about piecing together the whole file?
You can’t, without scanning the dump by hand and looking for related
text strings. In a 4GB dump file, that’s going to be a hell of a job.
Dumping with !dc Address L<length> doesn’t seem to fetch contiguous pages
after one section, i.e., the PE header.
Of course not. !dc dumps physical pages. After your system has run for
a few minutes, it is extremely rare to find two consecutive virtual
pages that reside on consecutive physical pages.
so how to follow the trail ??
There is no “trail”. There’s just a few bytes of memory here and there.
–
Tim Roberts, xxxxx@probo.com
Providenza & Boekelheide, Inc.
WINDBG is sponsored by OSR
For our schedule of WDF, WDM, debugging and other seminars visit:
http://www.osr.com/seminars
To unsubscribe, visit the List Server section of OSR Online at
http://www.osronline.com/page.cfm?name=ListServer