Security vs. obscurity (Was: Re: Regmon(a new puzzle))

> ----------
> From: xxxxx@dchbk.us [SMTP:xxxxx@dchbk.us]
> Reply To: xxxxx@lists.osr.com
> Sent: Friday, August 08, 2003 10:57 PM
> To: xxxxx@lists.osr.com
> Subject: [ntdev] Re: Regmon(a new puzzle)
>
> Responsible companies don't try to change fundamental design decisions of
> the Windows security model, and don't try to make money by claiming to do
> the impossible.
>
> There's only about 20 years of literature on this topic, the Orange Book
> being a noteworthy example.
>
> A system is secure against an attack when (give or take bugs) it is
> impossible to perform the attack. Not hard. Not 'impossible except on
> Wednesdays.' In some cases, there's value in making an attack merely
> extremely difficult. There's no value in taking an attack that is already
> difficult and making it slightly more difficult.

Sure, but we aren't living in a black-and-white world. OS bugs play a very
important role. This week a worm exploiting an RPC buffer overflow was able
to run code with local system privileges and installed itself using the Run
registry key. If there were a driver protecting this key, as Antony wants,
it would stop the worm from spreading. Now the question is whether it makes
sense to write such software. From a security standpoint the answer is
clear: no. Code running with local system privileges can do virtually
anything, it can always be targeted against such a protection, and the
overall security isn't improved at all. On the other hand, the same is true
for any existing antivirus. For privileged code there is always a way to
avoid them. Does that mean running antivirus or any kind of local
protection doesn't make sense? I don't think so. Most worms, trojans and
viruses are written by clueless script kiddies who modify already existing
virus code or exploits. A system protected this way is always vulnerable to
a targeted, sophisticated attack but can stop most of the existing fauna
and new mutations. Yes, it can create a false sense of security, and the
probability of a targeted attack rises with the number of users of a
product. That means: more products, less chance of attack. It is similar
with browsers and mailers: most users use IE and Outlook / OE, and most
HTML exploits are targeted against this software. With Mozilla there is a
much better chance the user won't be infected. Not because Mozilla is
better but because it isn't so widely used. So if there is a lot of
different "security" software which doesn't improve real security and only
makes attacks harder, it can be an improvement for the whole 'Net. MS
succeeded in creating a Windows monoculture, and monocultures are
susceptible to epidemics. Products of this type can add the necessary
differences. Developers and users just have to understand the limitations
and not speak about security when they only make things more obscure.
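For concreteness, here is a minimal sketch of what such a guard driver
could look like, assuming the CmRegisterCallback registry-callback
interface that Windows XP introduced. The names (RegGuardCallback, IsRunKey)
are illustrative, the path check is deliberately crude, and unload logic and
error handling are omitted; treat it as a sketch of the technique, not a
finished filter:

```c
#include <ntddk.h>

static LARGE_INTEGER g_Cookie;

/* Crude check: does the key's full name end in "\Run"? A real filter
 * would match the complete ...\Windows\CurrentVersion\Run path. */
static BOOLEAN IsRunKey(PVOID KeyObject)
{
    UCHAR buf[512];
    POBJECT_NAME_INFORMATION name = (POBJECT_NAME_INFORMATION)buf;
    ULONG len, chars;

    if (!NT_SUCCESS(ObQueryNameString(KeyObject, name, sizeof(buf), &len)))
        return FALSE;
    chars = name->Name.Length / sizeof(WCHAR);
    if (chars < 4)
        return FALSE;
    return _wcsnicmp(name->Name.Buffer + chars - 4, L"\\Run", 4) == 0;
}

/* Called by the configuration manager before registry operations. */
NTSTATUS RegGuardCallback(PVOID Context, PVOID Argument1, PVOID Argument2)
{
    REG_NOTIFY_CLASS op = (REG_NOTIFY_CLASS)(ULONG_PTR)Argument1;

    if (op == RegNtPreSetValueKey) {
        PREG_SET_VALUE_KEY_INFORMATION info =
            (PREG_SET_VALUE_KEY_INFORMATION)Argument2;
        if (IsRunKey(info->Object))
            return STATUS_ACCESS_DENIED;   /* veto the write */
    }
    return STATUS_SUCCESS;                 /* allow everything else */
}

NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
{
    UNREFERENCED_PARAMETER(DriverObject);
    UNREFERENCED_PARAMETER(RegistryPath);
    /* A real driver would also call CmUnRegisterCallback(g_Cookie) on unload. */
    return CmRegisterCallback(RegGuardCallback, NULL, &g_Cookie);
}
```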

The other problem is when an inexperienced developer tries to write
something like this and his driver makes the system unstable. Then all the
possible positive effects are destroyed.

Best regards,

Michal Vodicka
STMicroelectronics Design and Application s.r.o.
[michal.vodicka@st.com, http://www.st.com]

I can't resist. Reason is no reason! It's just late Friday evening, and I'm
worn out like hell!!!
Sorry to type "I" so many times, but I will have to do it.

Couple things first ----

Just go visit a big company, where there are possibly hundreds of thousands
of Windows users, look at their installation, and see how many are using
antivirus. Most of them are trying to protect themselves. Why, and what is
the reason? Are they all stupid, wasting their money? Then there are
personal firewalls installed, sometimes as an office suite. I could easily
name a few companies or orgs where such a large install base exists.

There are companies, I happen to know, that get calls from even the
Pentagon when a new virus has brought down several of their networks. Yes,
they are also crazy, and those companies are irresponsible…

Back in history, when DOS was prevalent, TCP/IP, games, screen savers, RAM
disks - all of them used TSRs. What a dangerous thing, totally against the
principles of software engineering, but it was done.

No one really forces anyone to buy or install this or that, but I would ask
some of the people who are so much against the benefit of this security
software to just uninstall it, get your system online, and there might be a
lot of websites happy to offer a challenge; then you will see what the
value of this software is. Better yet, open an ISP shop without a firewall
or antivirus, run any OSes (Linux, Windows server), etc., and watch the
hell that comes to your door. But if that is not an exciting enough
endeavor, just download the Ethereal and WinPcap open source, grab a couple
of good books, set aside about a year's worth of time, understand TCP/IP,
and watch the nonsense trying to get under your skin. Then one might
understand the might of this software.

To me obscurity is the first principle of security. Everyone does it,
willingly or unwillingly, and we all know that; we just sometimes hesitate
to bend our egos to it.

A friend and coworker of mine, a systems software guy with more than 30
years of solid programming experience, once told me: until one gets to the
laundry room, everything is very clean. And quite frankly, I have been to
too many laundry rooms too.

BTW, none of those companies are paying me for this. I just happened to work
for them in the past.

-prokash


It is perhaps worth keeping in mind that the first principle of military
security thinking is “need to know”.

This is, quite simply, security by obscurity. If you don’t know it, you
*probably* can’t give it away when asked. Of course, you always might
invent some totally fanciful story to tell your quizzer, and since you don’t
know the real answer, your fake one might just happen to be the same.

The military seems to consider that a smaller danger than telling everyone
the real story so they can be sure to not tell others.

On the inside, you want to do everything possible to make the firewalls rock
solid so nobody can get past them.

On the outside, you want to keep your mouth shut about how things work, just
in case the bad guy happens to be smarter than you are and can see the hole
you missed.

Keeping your mouth shut won’t prevent someone getting thru a hole you missed
in the protection. But it sure ups the ante in the game.

Loren

Loren Wilton wrote:

> On the outside, you want to keep your mouth shut about how things work,
> just in case the bad guy happens to be smarter than you are and can see
> the hole you missed.

Hear, hear! I think it’s also true that the more places a given tidbit
appears, the more likely it becomes that evil people can find it with
simple web searches. And don’t forget that online conversations are
archived – they don’t just evaporate into the air when we finish having
them.

I keep getting shouted down every time I suggest in one of these forums
that people not publicize exploits or specific techniques that can be
used to create them. One of the answers I keep getting is along the
lines of, “it’s been well known for years how to X, so it does no harm
to talk about it.” If “X” is how to make a stink bomb in your basement
chem lab, I guess that’s fine and dandy. If it’s about how to make a
thermonuclear bomb, it’s definitely not. [Hint to any evil person who
may be reading this: start by pulverizing your plutonium, then take
several deep breaths.]

Another answer I frequently get is, “we need to publicize security holes
so they’ll get fixed.” I find this idea to be incredibly arrogant:
publicizing an exploit forces the hand of whoever might already be
working on a fix, or whoever might be working on more important things.
It’s equivalent to saying, “I know what should be done first better than
you do, and you haven’t done it fast enough to suit me.” The responsible
thing to do is to privately communicate with the owner of the problem
AND THEN KEEP QUIET.

Look, we all have a natural desire to show off. One of the ways to show
off is to prove to our peers that we know secrets. The point is, *our*
secrets as driver developers are more dangerous than most peoples’.


Walter Oney, Consulting and Training
Basic and Advanced Driver Programming Seminars
Check out our schedule at http://www.oneysoft.com

Very thoughtful, and much appreciated…

It might be like a booby trap that I understood long before.

Thanx to both of you.

-prokash


> Another answer I frequently get is, "we need to publicize security holes
> so they'll get fixed." I find this idea to be incredibly arrogant:
> publicizing an exploit forces the hand of whoever might already be
> working on a fix, or whoever might be working on more important things.
> It's equivalent to saying, "I know what should be done first better than
> you do, and you haven't done it fast enough to suit me." The responsible
> thing to do is to privately communicate with the owner of the problem
> AND THEN KEEP QUIET.

Yes, but if they then don't fix the exploit in a reasonable amount of
time - as has been the case with numerous software companies in the past -
then the responsible thing to do IS to force their hand. Have you been
reading the news stories recently about the voting machine software
company that's been shipping their software with horrendous security
holes in it for years and has done nothing about it until a group of
university researchers called them to task publicly?


Nick Ryan (MVP for DDK)

Yes, there are cases like this ---

Some are just greedy. Look at how many companies went public, who the
underwriters were, and how they went belly up!!!

And there are situations like the recent surgery in Singapore: experts
fought day and night, and the end result is known to all of us. Same with
some of the viruses …

Reasonable time (hard to quantify)? Think about the recent power outage.

IF EVERYTHING WERE SOLVABLE EASILY, THE WORLD WOULD BECOME A VERY VERY DULL
PLACE TO LIVE!! IT IS THE UNKNOWNS THAT KEEP US WONDERING, MAKE US CURIOUS,
AND HOPEFUL. BUT IT IS DIFFERENT TO HAVE A WILD GUESS VERSUS AN INTELLIGENT
EFFORT TOWARD A NEAR-CLOSE SOLUTION.

-prokash


> Have you been
> reading the news stories recently about the voting machine software
> company that's been shipping their software with horrendous security
> holes in it for years and has done nothing about it until a group of
> university researchers called them to task publicly?

Well yes. But from the descriptions I saw (admittedly filtered through
clueless news wonks), I didn't see a whole lot of particularly bad
security holes, just some rather odd coding choices. Which might not have
been odd if the researchers had known the entire design of the product,
rather than just poking at a few parts of it to see how those parts worked.

  1. An obvious basic design assumption of the voting software was that it
    operated in a secure environment, and idiots didn't have access to the
    computing hardware. That happens to be a basic contract spec item for all
    voting software, and has been for as long as computers have been used.
    Getting near the mainframe that is reading punched cards and counting
    votes is about as easy as getting into an active missile silo in most
    jurisdictions. Presumably the contract for PC-based counting software was
    written with similar assumptions.

Whether this is a correct or incorrect assumption to make in a contract is a
completely separate argument. In most places counting votes, it is a
perfectly reasonable assumption with a large machine, and should be a
reasonable assumption with a small machine; but may not be.

  2. Having a replicated database is a basic and desirable design criterion,
    especially if the database is supplied by a third party and thus inherently
    untrustworthy. Again, this was likely a contract line item. Which database
    is used for any ad-hoc report is immaterial, so long as the databases are
    validated against each other at some reasonable points.

The news reports said that it was a “flaw” that report A came from one
database and report B came from another database, because you could muck
with the second database to get different results. There was no indication
that they actually tried this. Likely the software would have detected this
db corruption and repaired it from the other two databases before doing the
main tally report, even if it didn’t do it while running the ad-hoc report.

  3. The basis of the report seemed to be the concept that if you left these
    vote counting machines on your desk at home, your 13-year-old could easily
    go in with Access and change the vote counts. See item 1 above. Only a
    clueless college researcher would be likely to be dumb enough to consider
    doing this in the first place.

Oh, a clueless runner of vote counting in some district with 75 voters might
also consider it. But if that runner of vote counting actually read and was
guided by the district laws on how ballots are to be protected, then this
wouldn’t and couldn’t happen. You see, rigged elections have happened in
the past using paper ballots (or colored stones, for that matter) and all
jurisdictions have rules for care and feeding of ballots between voting,
collection, storage, and counting, that are designed to eliminate both the
chance that the Smith Gang can swap a box of ballots for the ones they made,
and also designed to see that no one person has control of the ballots
before they are counted, and can thus make similar modifications out of zeal
to see that his or her party wins the election.

Those same rules would suggest that it would be a really bad idea to store
the electronic votes in the mayor’s son’s PC in the living room connected to
the internet with no virus software before being counted. The “researchers”
seemed (from the news reports, at least) to be unaware of this concept.

Now, there may well be problems with vote counting software. But given that
vote counting software is a) written to contract, b) validated in source
by an external audit group, c) tested with random known votes and
spot-checked for giving the correct known results, and d) operated in a
physically secure environment with multiple cleared people present ("guard
against the guards"), I think that in all probability those "researchers"
were pretty clueless about how the machines worked. If you eliminate any of
those constraints (most especially the physical access controls) then all
bets are off. But people are, or at least should be, aware of this, because
the voting laws most everywhere ARE aware of this, and specify physical
access controls and protocols.

Rant off.

Loren

>>Look, we all have a natural desire to show off. One of the ways to show
off is to prove to our peers that we know secrets. The point is, *our*
secrets as driver developers are more dangerous than most peoples’.<<

“Do not adjust your emailer… We are in control… Welcome to the Taliban
Zone…”

I can see Walter walking out to the podium at one of his lectures:

“Oney, Walter Oney” with the song “Secret Agent Man” playing in the
background. Sorry, I could not resist :-)

Jamey Kirby, Windows DDK MVP
StorageCraft Inc.
xxxxx@storagecraft.com
http://www.storagecraft.com


Did you write the software for the voting machines in question?

Jamey Kirby, Windows DDK MVP
StorageCraft Inc.
xxxxx@storagecraft.com
http://www.storagecraft.com


I'll nominate Loren to head the blue-ribbon commission that will
inevitably be formed to investigate the scandal. :-)

Jamey Kirby wrote:

> Did you write the software for the voting machines in question?



Nick Ryan (MVP for DDK)

Folks,

If you start getting worried about enabling bad behavior with this mailing
list, or ntfsd, you might as well go home. The whole thing is one enormous
crib-sheet for the nefarious.

Just getting a driver to work at all is, from the looks of the traffic here,
very challenging. If you ‘don’t reveal exploits’, you may, indeed, be
accomplishing a bit of useful ‘security by obscurity.’ On the other hand,
while you are carefully avoiding revealing the details of the warheads,
these lists are cookbooks for the delivery systems.

This chain of email grew out of a discussion of whether there was any value
in a device that could intercept access to certain registry keys, from a
chain of possibilities that went from:

  1. set the ACL so they can't be fiddled (a user-mode sketch of this
    option follows the list)
  2. run a service that monitors them in case they get fiddled anyway
  3. run code in the kernel to intercept the system calls that could fiddle
    them.
  4. run code in the kernel to intercept code in the kernel that would
    circumvent #3.
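
For concreteness, a minimal user-mode sketch of option 1: hardening the
DACL of the Run key with documented Win32 security APIs. Error handling is
trimmed, and HardenRunKey is an illustrative name, nothing canonical:

```c
#include <windows.h>
#include <aclapi.h>

DWORD HardenRunKey(void)
{
    PACL pOldDacl = NULL, pNewDacl = NULL;
    PSECURITY_DESCRIPTOR pSD = NULL;
    EXPLICIT_ACCESS ea;
    LPTSTR keyPath =
        TEXT("MACHINE\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Run");
    DWORD rc;

    /* Read the existing DACL. */
    rc = GetNamedSecurityInfo(keyPath, SE_REGISTRY_KEY,
                              DACL_SECURITY_INFORMATION,
                              NULL, NULL, &pOldDacl, NULL, &pSD);
    if (rc != ERROR_SUCCESS)
        return rc;

    /* Deny KEY_SET_VALUE to Everyone; deny ACEs are evaluated first.
     * ("Everyone" resolves on English systems; real code would build a
     * well-known SID instead of using a localized name.) */
    ZeroMemory(&ea, sizeof(ea));
    ea.grfAccessPermissions = KEY_SET_VALUE;
    ea.grfAccessMode = DENY_ACCESS;
    ea.grfInheritance = NO_INHERITANCE;
    ea.Trustee.TrusteeForm = TRUSTEE_IS_NAME;
    ea.Trustee.ptstrName = TEXT("Everyone");

    rc = SetEntriesInAcl(1, &ea, pOldDacl, &pNewDacl);
    if (rc == ERROR_SUCCESS)
        rc = SetNamedSecurityInfo(keyPath, SE_REGISTRY_KEY,
                                  DACL_SECURITY_INFORMATION,
                                  NULL, NULL, pNewDacl, NULL);

    if (pNewDacl) LocalFree(pNewDacl);
    if (pSD) LocalFree(pSD);
    return rc;
}
```

Of course, anything already running as LocalSystem can simply put the old
DACL back, which is exactly Michal's earlier point about such measures not
improving real security against privileged code.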

And on from there, seriatim. I think that Prokash and I disagree on where
you would draw the line between the useful and the pointless, but that's
just life.

This whole discussion was quite informative, I bet, to anyone contemplating
an exploit involving sticking things into the run keys.

There are, no doubt, script-kiddies who wouldn’t have known about ACL
protection on registry keys unless told, and we go on from there.

This list could be run as a closed guild. New members could be required to
provide some real identity and credentials. (I’m NOT PROPOSING THIS.) It
isn’t. Anyone can sign up. So, a modicum of common sense would seem to
apply. However, any discussion amongst those trying to maintain security
will inevitably contain highly valuable nuggets to those attempting to break
it.

Yep,

Opinions differ, since we don't necessarily see the same thing the same
way!! So I'm not particularly worried about it. And a lot of the time we
may all be wrong, right, or any such combination, and that too depends on
the notion of right and wrong.

Probably I should have kept my mouth shut!

At least I tried to be as indirect as possible, but then again, who knows
who is out there. Once, I came across the line "Hackers die at 30"
(hackers here in the negative sense). The meaning is that they lose
interest, but that may no longer be true, because of the wider impact.
Another line is "trusting trust", though it was coined in a different
context (meaning don't trust someone else's code). AND I TRIED TO APPLY
THOSE IN MY notes, so that only a pile of unstructured, hard-to-digest
information goes out, but then again I might be wrong. On the other hand,
there are some people out there trying to solve problems, so why shouldn't
they get some hints and alternatives to help them achieve their goal? Boy,
it is really hard to draw a line!

Can someone tell me a credible 7-Eleven store where I have a fair (no, no,
not fair; fair is 50-50), a very good chance to win the lotto? Otherwise, I
know a whole lot of people here on this list (to name a few, in no
particular order: Peter, Tony, Walter, ThomasFDivine, Nick, Max, JameyK,
Craig, Benson, Sirin, Loren, Bill Mcenjee, Gary Little, Alberto (if you
don't know Alberto, you probably don't belong on this list), etc., etc.)
whom I am asking to send a signal when something is probably too dangerous
to be brought up here to begin with.

I will definitely be more careful; then again, please shut me off whenever
appropriate!!!

-prokash

When Fred Cohen first bull-in-the-china-closetted himself into the security
scene, it was very much this argument about viruses that won him fame and
fortune. Since viruses steal the privileges of users, you can have all the
'security model' features you want, and still be toast.

Now that you remind me of this, I’m inclined to deliver a big, fat,
(partial) retraction to much of my pontification on this topic. Given
viruses and worms, I agree that it makes sense to add code that catches,
even with incomplete security, unusual events that might result from an
attack.

However, in this particular case, I think that your last comment is most
apposite. The incremental benefit of a complex, fragile, kernel mechanism
for this purpose over a simple user-mode service is small, and the risks
involved are much higher.
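
For reference, a minimal sketch of what that simple user-mode service
amounts to, assuming a console host: watch the Run key with
RegNotifyChangeKeyValue and react when it changes. The diff-and-revert
logic is left as a comment; WatchRunKey is an illustrative name:

```c
#include <windows.h>
#include <stdio.h>

int WatchRunKey(void)
{
    HKEY hKey;
    /* Manual-reset event, signaled by the registry notification. */
    HANDLE hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);

    if (hEvent == NULL)
        return 1;
    if (RegOpenKeyEx(HKEY_LOCAL_MACHINE,
                     TEXT("SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Run"),
                     0, KEY_NOTIFY | KEY_READ, &hKey) != ERROR_SUCCESS)
        return 1;

    for (;;) {
        /* Ask for a one-shot notification on value changes, then wait. */
        if (RegNotifyChangeKeyValue(hKey, FALSE,
                                    REG_NOTIFY_CHANGE_LAST_SET,
                                    hEvent, TRUE) != ERROR_SUCCESS)
            break;
        WaitForSingleObject(hEvent, INFINITE);
        ResetEvent(hEvent);
        /* Something touched the Run key: enumerate its values here,
         * diff against a known-good snapshot, then alert or revert. */
        printf("Run key changed\n");
    }
    RegCloseKey(hKey);
    CloseHandle(hEvent);
    return 0;
}
```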

Benson,

You and the other proponents of using user mode might be right.

But let me reiterate a bit ---

  1. I mentioned canonical; the reason was that an integral component can
    be used for different things in the space we are talking about. And I was
    only saying how to do something that was asked for. I did not care much
    about how it would be used, what the immediate need is, etc. Is it safe
    to do? Is it a standard practice? Should it be allowed? Are there any
    reliability analyses for it? These really bring us to a lot of different
    topics that may very well encompass philosophy, psychology, engineering,
    math and stats, computer science, etc., etc.

  2. Also, random sampling (in this case not so random) led me to believe
    that some such things are necessary. But again, anything that works with
    less complexity is better.

  3. Finally, the field (computer security) is quite vast, and a lot of
    stuff is being implemented in user space too. Heuristics and AI are being
    considered for worms, viruses, etc. There is computer virology, biometric
    computing, and other things being considered - a lot of it in user space,
    of course. But they are not totally devoid of kernel components. In most
    cases, it is a combination.
    And a much, much harder nut to crack is a totally different approach.

So how could I say I am totally right?

-prokash


Prokash,

I’m really not on a campaign to prove you wrong about this. I promise to
post no more on this topic after the following thought, leaving you free to
post the last word:

If you are concerned about a determined, kernel-savvy opponent, I’m not
persuaded that there’s anything you can do. If you are concerned about an
opportunistic, ‘just call RegOpenKeyEx …’ opponent, then I wonder what the
kernel version of the protection buys you.

All of this is just me, a cranky, middle-aged traditional security geek.
Your Mileage May Vary.

Benson,

I did send you the response.

-prokash

On Fri, 2003-08-15 at 22:07, Michal Vodicka wrote:

> MS succeeded in creating a Windows monoculture, and monocultures are
> susceptible to epidemics. Products of this type can add the necessary
> differences. Developers and users just have to understand the limitations
> and not speak about security when they only make things more obscure.

It used to be commonly accepted that you didn’t build a network with all
cisco routers, just in case IOS had a bug. We run OpenBSD on our
bastion host and Windows and Linux on our servers. Variety is the spice
of life. :-)

I completely agree with Michal’s take on this. Just because security is
impossible in certain contexts is no excuse for not trying.

-sd

Michal Vodicka wrote:

> Our proverb says "every coin has two sides". If we lived in an ideal
> world where every company was responsible, you couldn't be more right.
> Unfortunately, there are companies who ignore security holes until they
> are published. I read NTBUGTRAQ for years and there were a lot of
> examples, including MS. I agree a company should get a chance, i.e. a
> reasonable amount of time to respond and then time to make a fix.
> Opinions about what is a reasonable time differ. IIRC the consensus was
> a week for a response and a month for a fix.

And who are those folks to be deciding what’s reasonable? That’s what I
mean by arrogance. I repeat my belief that it’s morally indefensible to
publicize a security hole for this purpose.


Walter Oney, Consulting and Training
Basic and Advanced Driver Programming Seminars
Check out our schedule at http://www.oneysoft.com

Well, as a counterexample, Unisys has a policy that a Priority A UCF has to
be fixed within, I forget, 10 or 15 days from the day it is reported by the
customer. Not from the day the CSE rep comes back from vacation 2 weeks
after the bug is reported. Not 2 weeks after he finally decides to transfer
it to Engineering for a fix. From the time of the report. The Pri A UCF
category includes, among other things, system crashes and/or data
corruption.

Now, if people bothered writing viruses for our systems, you can bet that a
virus report would be a Pri A bug. And you can bet that in that case, if
that UCF went out of policy, the CEO of the company would be asking why and
who is responsible. So someone would need a pretty darn good excuse for the
hole still being open after 2 weeks rather than having had the fix deployed
to all of our customers worldwide.

Now, MS, SCO, or East Taiwan Engineering aren’t Unisys, and they can and
doubtless do have different policies. But if I’m running a bank DP center
and some hacker keeps stealing credit card info with his exploit every day,
I think I might get a little upset if the company getting hacked didn’t come
up with at least an acknowledgement of the problem within 2 or 3 weeks.

Loren
