It might interest you to know that the ARM9 processor (or some variant of it)
is an entirely asynchronous design. I don’t know how far along it is,
release-wise, but it’s more than a whiteboard idea. I think the main design
goal is very low power consumption when idle, but it also has some
interesting properties, like a more diffuse EM signature.
http://www.eetimes.com/news/latest/showArticle.jhtml?articleID=179101800
Also, I don’t buy your argument about Ta, Tb, etc. There aren’t two distinct,
and therefore probably unequal, time intervals. There’s one, and the analog
state of a particular synchronous bit converges to a known value by the time
the clock rises/falls. When the input changes near that time, the result is
essentially arbitrary, but that doesn’t matter: it’s going to be one value or
the other.
A lot of hardware wouldn’t work, at all, if this weren’t true.
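For what it’s worth, this is also quantifiable. The usual textbook estimate
for a synchronizer’s mean time between failures (the symbols here are the
standard ones, not anything from this thread) is

    MTBF = e^(tr/tau) / (Tw * fclk * fdata)

where tau is the flip-flop’s regeneration time constant, Tw is the width of
the vulnerable window around the clock edge, tr is the settling time allowed
before the sampled value is used, and fclk and fdata are the clock rate and
the rate of asynchronous transitions. The exponential numerator is the whole
story: every extra increment of settling time multiplies the MTBF by another
factor of e, which is why real designs get failure intervals measured in
centuries.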
From: xxxxx@lists.osr.com [xxxxx@lists.osr.com] On Behalf Of Mike Kemp [xxxxx@sintefex.com]
Sent: Saturday, January 06, 2007 6:52 AM
To: Windows System Software Devs Interest List
Subject: Re: [ntdev] confused with DDK docs
Hi Bob
Thanks for coming back on this. I’d intended this as a “food for thought”
item, raising what I thought was a well known (though mostly theoretical)
issue with asynchronous events.
I’d not intended to waste too much time on this, so to all those of us who
are snowed under with development work, please excuse this post, and stop
reading now.
I agree with you that in all practical senses, good design can place any
theoretical error beyond the scope of practical worries. We all hope for
good design but of course many of us here spend a lot of time programming
around problems that slipped through the net into production :-)!
I also agree that with fully synchronous multiple CPUs, the lock signal is
synchronous (btw I never said it was asynchronous in this case, sorry if I
misled you somewhere).
The original thread, however, was stressing some timing issues that made me
suspect it may relate to a situation where the CPUs were asynchronous,
as otherwise I, like you, cannot see any issue to talk about. Of course with
synchronous CPUs, simultaneous events are common and easy to cope with.
The only place I draw a line is with the assertion that asynchronous signals
can be handled with 100% foolproof design, full stop, end of discussion.
I have to say, I’d like to believe it, but simply asserting it is not going
to persuade me, or hopefully, anyone else.
I ran the following short argument (intended to prove my claim that
asynchronous events exist that will break any logic system) past my
hardware designer, and it raised a question I’ll mention below. First, here
is the argument:
Suppose we have a synchronous logic system with an asynchronous input, and an
asynchronous event occurs. The logic system is required to enter one of two
states A or B depending on whether the event occurs before or after the
system clock in question.
Assume that relative to a specific system clock, there is a latest time Ta
at which the asynchronous event will be recognised and outcome A results.
Assume there is also an earliest later time Tb at which the event is not
recognised, and outcome B results.
Clearly Ta != Tb, as both results cannot occur at the same instant.
Therefore there exists a time Tx such that Ta < Tx < Tb at which neither
outcome would result, otherwise this would invalidate our assumption that Ta
was already the latest time for outcome A and Tb was already the earliest
time for outcome B.
You then have a system that has broken, as the asynchronous event has caused
an outcome that is neither A nor B.
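To make this concrete, here is a toy numerical sketch (my own illustration,
not anyone’s real latch model) that treats the latch as the textbook bistable
circuit dv/dt = (v - v^3)/tau. An event arriving close to the clock edge
leaves the storage node at some small offset from the unstable balance point
v = 0, and the smaller the offset, the longer the settling takes:

    #include <stdio.h>
    #include <math.h>

    /* Toy model of a latch settling after the clock edge:
       dv/dt = (v - v^3)/tau. v = +1 and v = -1 are the stable
       states (outcomes A and B); v = 0 is the unstable balance
       point, i.e. the time Tx above. Measure how long the node
       takes to reach a valid level from various initial offsets. */
    int main(void)
    {
        const double tau = 1.0;   /* regeneration time constant */
        const double dt  = 1e-4;  /* integration step           */

        for (int k = 1; k <= 9; k++) {
            double v = pow(10.0, -k);   /* initial offset */
            double t = 0.0;
            while (fabs(v) < 0.9) {     /* not yet a valid level */
                v += dt * (v - v * v * v) / tau;
                t += dt;
            }
            printf("offset 1e-%d -> settles in %5.2f tau\n", k, t);
        }
        return 0;
    }

The settling time grows without bound as the offset shrinks, but only
logarithmically: the undecided state between Ta and Tb really exists, yet
landing on it exactly requires an ever more improbable coincidence.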
My hardware designer said that all that would happen is that in the
intervening period, event A or B would occur at random because of noise. (We
can eliminate component tolerance as this is a “specific” system).
However, does this mean that the only reason we can make a system 100%
reliable is the existence of random noise? A perfect system
without noise is surely proved to be at risk by the argument above.
Also, even in the presence of noise, the above argument applies, as all it
does is create a whole bunch of Ta and Tb pairs. All we have to do is
examine the first such pair, apply the above argument, and sure enough there
exists a time when the logic system fails to make the decision.
Someone has suggested this is just a glitch situation: add a clock delay and
latch it again. However, all this does is make a new logic system, slightly
different to the preceding one. The argument above applies to ALL logic
systems, however many double checks are made. I fear this also includes
adding a software workaround, as this is also just fiddling within the black
box.
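To be fair to that suggestion, the standard arithmetic behind it (with
illustrative numbers of my own choosing, not any real part’s data sheet) says
that each added latch multiplies the failure rate by another factor of
e^(-T/tau), where T is the clock period:

    #include <stdio.h>
    #include <math.h>

    /* Standard synchronizer MTBF arithmetic with invented but
       plausible numbers. A failure means the sampled bit is
       still undecided after n clock periods of settling time. */
    int main(void)
    {
        const double f_clk  = 100e6;   /* sampling clock, Hz         */
        const double f_data = 1e6;     /* async transitions per sec  */
        const double tau    = 0.5e-9;  /* regeneration constant, s   */
        const double t_w    = 0.1e-9;  /* vulnerable window width, s */
        const double T      = 1.0 / f_clk;

        /* events per second that land in the window at all */
        const double hit_rate = t_w * f_clk * f_data;

        for (int n = 1; n <= 3; n++) {
            double fail_rate = hit_rate * exp(-n * T / tau);
            printf("%d latch stage(s): MTBF = %.2g s (%.2g years)\n",
                   n, 1.0 / fail_rate, 1.0 / fail_rate / 3.16e7);
        }
        return 0;
    }

So the failure probability never reaches zero, which is my point, but it
falls by a factor of nearly half a billion per added stage, which is theirs.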
I’d truly like to understand where the argument above falls down, if it
does. Regrettably my old university lecturers are mostly gone now so I can’t
go back to them. If anyone knows of a web forum for this I’d like to dip in
as time allows.
Anyway, please take this as intended, food for thought, not a criticism of
designers and hopefully not the ramblings of an incompetent!
Best regards
Mike
----- Original Message -----
From: Bob Kjelgaard
To: Windows System Software Devs Interest List
Sent: Friday, January 05, 2007 4:17 PM
Subject: RE: [ntdev] confused with DDK docs
Well, in ten years [the first 10 of my career] of designing, testing, and
debugging hardware I’ve looked at plenty of 'scopes, and this is simply not
true. Yes, there is a period where the signal is switching and is specified
as “indeterminate” because it is between the switching thresholds of its
receivers, and yes, any designer who isn’t a total idiot understands rise
times, fall times, threshold levels, fan-in, fan-out, even wire delays,
transmission line effects, noise suppression and all the rest of the
wonderful world of practical electronic design.
But it is simple arithmetic to add up all those delays along each path and
guarantee a signal is where it needs to be before the minimum setup time of
the receiving latch with respect to the clock, that the signal also meets
minimum hold times, and that the clock pulse width is within specifications. If
even that is a challenge, then there are simulation and design checking /
analysis programs to pick up your slack [and more].
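The arithmetic in question is nothing deeper than this (every number below is
invented purely for illustration):

    #include <stdio.h>

    /* Worst-case setup check for one path - the "simple
       arithmetic" above. All numbers are invented. */
    int main(void)
    {
        const double t_clk_to_q = 1.2;  /* ns, source flop clock-to-Q */
        const double t_logic    = 4.5;  /* ns, combinational logic    */
        const double t_wire     = 0.8;  /* ns, routing delay          */
        const double t_setup    = 0.9;  /* ns, receiving latch setup  */
        const double t_period   = 10.0; /* ns, 100 MHz clock          */

        double arrival = t_clk_to_q + t_logic + t_wire;
        double slack   = t_period - t_setup - arrival;

        printf("data arrives %.1f ns after the edge; slack %.1f ns (%s)\n",
               arrival, slack, slack >= 0 ? "meets timing" : "FAILS");
        return 0;
    }

Run that check over every path, keep every slack non-negative across
worst-case process, voltage, and temperature, and the receiving latch never
sees a changing signal inside its setup/hold window.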
The probability of the lock instruction failing is not 0, but not because of
this issue. It’s not 0 because hardware fails: parts can go out of spec or
even fail catastrophically as they age. It’s also not 0 because even fresh
parts may be out of spec (test processes are not 100%, ever). It’s even not
0 because this stuff is made by human beings, who are notoriously
fallible. But all of those probabilities are about as low as they can be
made. Systems can run reliably for years because of this [and additional
work done on fault tolerant design].
But the circuit designs simply do not have this flaw. If they do, then they
are bad designs. I’ve had to deal with such designs, some of them my own,
of course. But to say you can’t properly design a synchronous state machine
to deal with asynchronous events is just not true.
Finally, you keep calling the lock signal asynchronous. It can’t be- it has
to come out of the same synchronous state machine that eventually decides
it’s been granted the lock.
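The software-visible counterpart of that guarantee is what makes a spin lock
work at all. A minimal sketch using the Win32 interlocked API (my
illustration, not anything from the DDK docs):

    #include <windows.h>

    /* Minimal spin lock on top of InterlockedExchange - a sketch
       for illustration, not production code. The hardware point
       above is that the locked exchange always returns exactly one
       of two answers; software never sees a "neither" state. */
    static volatile LONG g_lock = 0;

    void AcquireSpinLockSketch(void)
    {
        /* Atomically store 1 and get the previous value back;
           we own the lock iff the old value was 0. */
        while (InterlockedExchange(&g_lock, 1) != 0)
            YieldProcessor();   /* spin politely */
    }

    void ReleaseSpinLockSketch(void)
    {
        InterlockedExchange(&g_lock, 0);
    }

Exactly one contending CPU sees the old value 0 and proceeds; everyone else
sees 1 and spins. There is no third outcome for the exchange to report.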