An important first step in considering security is to establish a
“threat model”; in other words, what do you consider to be the threats
against which you are trying to protect?
Start off with the fundamental premise that “there are no totally secure
systems”. Thus, we know that we can only establish “relatively secure”
systems. But relatively secure against what? Physical threat (which is
the point of securing the disk - it protects against the hard disk
being removed from the machine)? Loss of the data device (e.g., the
laptop that “disappears” while going through the security checkpoint at
the airport because someone knows you have valuable information on it -
a real-world scenario)? Are you trying to protect against an
intelligence service of a foreign country with cryptanalytic experts?
Gary’s point earlier (about traffic analysis) is a fascinating one -
intelligence services, for example, often monitor the RATE of traffic
rather than relying solely on its CONTENT. This is because decoding
content is expensive and time-consuming, even when the content is
protected by simplistic algorithms. But when we observe a sudden spike
in communications, for example, we can theorize that it indicates some
imminent action - even if we don’t know what that action is (that is one
source of heightened terror alerts in the US, for example). The
counter-measure to this (by the way) is to establish a CONSTANT rate of
communications - and inside the (encrypted) data stream you indicate
which pieces are bandwidth-filling fluff and which are not. Of course,
if your threat model is that people are going to be watching bus-level
traffic between your computer system and your hard disk (and somehow you
mysteriously don’t notice them), then encryption at the disk level
probably is not going to be sufficient.
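The fluff-marking idea above can be sketched in a few lines; the frame size, flag byte, and XOR keystream here are illustrative assumptions, not a real protocol:

```python
import os

FRAME_SIZE = 64  # fixed payload size per frame (an assumed value)

def make_frame(payload=None):
    # First byte is a flag visible only after decryption:
    # 0x01 = real data, 0x00 = bandwidth-filling fluff.
    if payload is None:
        return b"\x00" + os.urandom(FRAME_SIZE)          # fluff frame
    assert len(payload) <= FRAME_SIZE
    return b"\x01" + payload.ljust(FRAME_SIZE, b"\x00")  # real frame

def encrypt(frame, keystream):
    # Stand-in for a real cipher: XOR with a keystream of equal length.
    # (A real system would never reuse a keystream; this is only a sketch.)
    return bytes(a ^ b for a, b in zip(frame, keystream))

# The sender emits one frame per tick whether or not it has data, so an
# observer sees a constant rate of identically sized ciphertext frames.
keystream = os.urandom(FRAME_SIZE + 1)
real = encrypt(make_frame(b"attack at dawn"), keystream)
fluff = encrypt(make_frame(None), keystream)
assert len(real) == len(fluff)  # indistinguishable by size on the wire
```

Only the receiver, after decrypting, inspects the flag byte and discards the fluff; the rate and frame sizes on the wire never change.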
But maybe your environment is one where storage is on a NETWORK (a SAN),
in which case encrypting the data on the computer probably works well,
whereas encrypting on the disk drive doesn’t (since anyone can observe
the data in flight on the SAN).
Another example: if you REALLY want a good secure hard disk, use two.
On the first hard disk, collect random chatter (radioactive decay
products are good for this, for example, as is sampling white noise in
the radio spectrum, observing quantum excitation, etc.). When that hard
disk is full, you now have your key. When you write data to the second
hard disk, read the corresponding data from the key disk, XOR it with
the data, and write the result to the second hard disk; the best case
here is one where you never reuse the same block on the disk - think
WORM media in that instance.
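The two-disk scheme is just a one-time pad. A minimal sketch, with `os.urandom` standing in for the radioactive-decay or white-noise entropy source and byte strings standing in for disk blocks:

```python
import os

# Disk 1, the "key disk": pre-filled with truly random bits.
key_disk = os.urandom(1024)

def otp_write(block, offset):
    """XOR a data block against the corresponding, never-reused key
    bytes; the result is what actually lands on the second disk."""
    key = key_disk[offset:offset + len(block)]
    assert len(key) == len(block)  # never run past the key material
    return bytes(d ^ k for d, k in zip(block, key))

def otp_read(cipher, offset):
    # XOR is its own inverse, so reading applies the same operation.
    return otp_write(cipher, offset)

secret = b"the NOC list"
stored = otp_write(secret, 0)         # ciphertext on disk 2
assert otp_read(stored, 0) == secret  # recoverable only with disk 1
assert stored != secret
```

As the text says, the security collapses if any key block is ever reused, which is why append-only (WORM-style) allocation of offsets matters.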
When you go home at night, take the first hard disk with you. The
second hard disk, having absolutely no special intelligence, firmware,
or software, is immune from all known cryptanalytic attacks. BUT - you
can’t use the same key to encrypt a second hard disk, nor can you leave
your key lying about. Thus your security relies upon ensuring the
security of your key.
The harshest reality in security is that once you really DO lock down
the system, the biggest single threat remains the people using the
computer system. Want a REALLY secure system? Turn it off and encase it
in concrete. That is quite secure: no EM tracing techniques, no network
attacks, no removable media, nothing. Not very useful, either.
So - before talking about what is secure, it would be best to describe
the class of attacks against which you are trying to defend. Generally,
your goal is going to be to make it SO expensive to break your security
that attackers won’t do it, because the benefit is not worth the cost.
If you have TRULY sensitive information (like the NOC list for your
intelligence organization), I’d strongly encourage you to look at using
“write once” pads. I know someone around here who has been working in
the area of quantum cryptography as a means of generating one-time pads
(see US Patent #6,868,495, which covers their “One-time pad Encryption
key Distribution” method). FYI: the cryptography of one-time pads is
well established to be secure - it is the “gold standard” of data
encryption security. Whether or not the distribution technique described
in that patent will withstand the scrutiny of the security community has
yet to be determined.
Regards,
Tony
Tony Mason
Consulting Partner
OSR Open Systems Resources, Inc.
http://www.osr.com
-----Original Message-----
From: xxxxx@lists.osr.com
[mailto:xxxxx@lists.osr.com] On Behalf Of Prokash Sinha
Sent: Monday, June 13, 2005 3:24 PM
To: ntdev redirect
Subject: Re: [ntdev] Re:disk encryption
First, that is the way a lot of implementations work.

When using a password, use it for what? There are schemes that apply
some transformation to the user name and password and use the result as
the encryption key. So there is a bootstrap - and what is it? The
user name and password. And where do we keep those? How large is the
exposure window from the moment a user types them (assuming no one is
around and no trojan is playing its role) until they actually reach the
firmware level, or are hashed? SURE, THOSE ARE THE DETAILS…

The best we can do is apply a one-way hash, where the probability of a
collision is almost zero. But in order to use that, somewhere someone
has to supply the user name and password, or some combination of them…
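The one-way-hash step can be sketched as follows; the salting layout and the choice of SHA-256 are illustrative assumptions:

```python
import hashlib
import os

def hash_credentials(user, password, salt):
    # One-way: the digest cannot be inverted, and with SHA-256 the
    # probability of an accidental collision is negligible.
    data = salt + user.encode() + b"\x00" + password.encode()
    return hashlib.sha256(data).digest()

salt = os.urandom(16)
h1 = hash_credentials("alice", "hunter2", salt)
h2 = hash_credentials("alice", "hunter2", salt)
h3 = hash_credentials("alice", "hunter3", salt)
assert h1 == h2  # deterministic for the same usr/psw
assert h1 != h3  # any change to the password changes the digest
```

Note that the user still has to type the password at some point, which is exactly the exposure window described above; the hash only limits what an attacker can recover afterwards.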
IN ESSENCE, WE ARE INDIRECTING from one level to another. But in most
cases something, somewhere, has to bootstrap the whole implementation.
Nothing is really totally secure when the total implementation is under
analysis!

My point was that with a firmware implementation, some of the bits
stored for different things are almost impossible to retrieve, or at
least not so easily. On the other hand, a software-based implementation
has a window that recurs; a firmware-based one can avoid that.

Note that a one-way hash being non-reversible does not mean there is no
probability that one can get to the input (the collision effect).

Yes, in theory security by obscurity is no security. In practice you
really want to minimize the window (in time as well as in space).

-pro
----- Original Message -----
From: “Michal Vodicka”
To: “Windows System Software Devs Interest List”
Sent: Monday, June 13, 2005 11:47 AM
Subject: RE: [ntdev] Re:disk encryption
> ----------
> From: xxxxx@lists.osr.com [SMTP:xxxxx@lists.osr.com] on behalf of Prokash Sinha [SMTP:xxxxx@garlic.com]
> Reply To: Windows System Software Devs Interest List
> Sent: Monday, June 13, 2005 8:36 PM
> To: Windows System Software Devs Interest List
> Subject: Re: [ntdev] Re:disk encryption
>
> Firmware based implementation try to hide the key in some places
> ( … ) that only firmware has access of. Also user/psw. Detail should
> be avoided here !!!
>
Nope. A proper implementation must not store the encryption key
anywhere. If it does, it is broken, and hiding the details is just
security by obscurity. That’s why I’m asking.

The encryption key can be derived from data only the user knows, for
example from a password. Then it is necessary to check whether the key
is correct; for example, decrypt some part of the encrypted data,
generate a hash, and compare it with a stored one. But there must be no
way to get the key without the user’s data. If firmware can do it, a
hacker can, too.
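The scheme described here - derive the key from the password and store only enough to verify it - might look like this sketch (PBKDF2, the XOR stand-in cipher, and the known-plaintext block are illustrative assumptions, not any particular firmware’s method):

```python
import hashlib
import hmac
import os

def derive_key(password, salt):
    # The key exists only transiently in memory; nothing that can
    # reproduce it without the password is ever written to disk.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def xor(data, key):
    # Stand-in for a real cipher, just to make the sketch self-contained.
    return bytes(a ^ b for a, b in zip(data, key))

# --- setup time ---------------------------------------------------------
salt = os.urandom(16)
key = derive_key("correct horse", salt)
known_block = b"KNOWN-PLAINTEXT-MARKER[32bytes]!"   # 32-byte marker
stored_cipher = xor(known_block, key)               # stored on disk
stored_hash = hashlib.sha256(known_block).digest()  # stored on disk

# --- unlock time: only the password can reproduce the key ---------------
def key_is_correct(password):
    candidate = derive_key(password, salt)
    plain = xor(stored_cipher, candidate)
    return hmac.compare_digest(hashlib.sha256(plain).digest(), stored_hash)

assert key_is_correct("correct horse")
assert not key_is_correct("wrong password")
```

Only the salt, the ciphertext, and a hash are stored; none of them yields the key without the password, which is the property being argued for here.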
Best regards,
Michal Vodicka
UPEK, Inc.
[xxxxx@upek.com, http://www.upek.com]
—
Questions? First check the Kernel Driver FAQ at
http://www.osronline.com/article.cfm?id=256
You are currently subscribed to ntdev.
To unsubscribe send a blank email to xxxxx@lists.osr.com