From the documentation on Queued Spin Locks:
“Drivers should normally allocate the structure on the stack each time
they acquire the lock.”
While this doesn’t explain WHY, that’s not unusual for the DDK
documentation (the MS doc style: tell you WHAT to do, not why…). Your
implementation did not conform to the documented pattern, and it
broke.
The REAL point here is, as Doron pointed out, that these locks are
“fair” because they create a queue of requestors, so there is a fair
hand-off. The older implementation turns out to be UN-fair, giving
preference to some processors over others (because of the way memory
bus access is implemented in hardware).
Thus, using the same queuing structure on different processors defeats
the queue mechanism. (That’s the WHY behind the WHAT.)
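To make that concrete, here is a minimal sketch of the documented
pattern (MYLOCK is from the thread; DoWorkUnderLock is an illustrative
name, not the poster’s code):

typedef struct _MYLOCK {
    KSPIN_LOCK Lock;   // shared lock word; initialize once with KeInitializeSpinLock
} MYLOCK;

// Each acquisition gets its own KLOCK_QUEUE_HANDLE in the caller's
// stack frame, so every waiter occupies a distinct node in the queue.
VOID DoWorkUnderLock(MYLOCK* MyLock)
{
    KLOCK_QUEUE_HANDLE lockHandle;   // per-acquire, never shared

    KeAcquireInStackQueuedSpinLock(&MyLock->Lock, &lockHandle);
    // ... touch the protected state ...
    KeReleaseInStackQueuedSpinLock(&lockHandle);
}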
Regards,
Tony
Tony Mason
Consulting Partner
OSR Open Systems Resources, Inc.
http://www.osr.com
Looking forward to seeing you at the next OSR File Systems class in
Boston, MA April 18-21, 2006.
-----Original Message-----
From: xxxxx@lists.osr.com
[mailto:xxxxx@lists.osr.com] On Behalf Of Alon
Sent: Saturday, January 21, 2006 10:31 AM
To: ntdev redirect
Subject: Re:[ntdev] Multi CPU Problem with
KeAcquireInStackQueuedSpinLock/KeReleaseInStackQueuedSpinLock
Hi,
My previous struct was really:

struct MYLOCK { KSPIN_LOCK Lock; KIRQL LockIrql; };

VOID AcquireLock(MYLOCK* Lock) {
    KeAcquireSpinLock(&Lock->Lock, &Lock->LockIrql);
}

and the new one is:

struct MYLOCK { KSPIN_LOCK Lock; KLOCK_QUEUE_HANDLE hLockHandle; };

I understand from your answer that it won’t work for that case.
I just wonder where in the documentation this restriction appears?
Alon
“Doron Holan” wrote in
message news:xxxxx@ntdev…
Unlike the old spinlock implementation, with in-stack queued spinlocks
you cannot share the field used to acquire the lock among many
acquires. (Sharing the KIRQL field that way is, I think, not correct
even for the old locks, but it works by circumstance.) Something like
this is what I am referring to:

struct MYLOCK { KSPIN_LOCK Lock; KIRQL LockIrql; };

VOID AcquireLock(MYLOCK* Lock) {
    KeAcquireSpinLock(&Lock->Lock, &Lock->LockIrql);
}

This pattern does not work for in-stack queued spinlocks. If you are
not following this pattern, please ignore this message.
For in-stack queued spinlocks, the KLOCK_QUEUE_HANDLE has to be unique
to each thread that is trying to acquire the lock. You cannot put it
into the MYLOCK structure; it has to be on the stack (hence the name).
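If you want to keep an AcquireLock wrapper, one sketch that still
follows this rule (the two-argument wrapper below is an illustration,
not from this thread) is to have the caller pass in a handle from its
own stack frame:

VOID AcquireLock(MYLOCK* Lock, KLOCK_QUEUE_HANDLE* LockHandle)
{
    // LockHandle points into the calling thread's stack frame
    KeAcquireInStackQueuedSpinLock(&Lock->Lock, LockHandle);
}

VOID ReleaseLock(KLOCK_QUEUE_HANDLE* LockHandle)
{
    KeReleaseInStackQueuedSpinLock(LockHandle);
}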
D
-----Original Message-----
From: xxxxx@lists.osr.com
[mailto:xxxxx@lists.osr.com] On Behalf
Of Alon
Sent: Friday, January 20, 2006 11:36 AM
To: Windows System Software Devs Interest List
Subject: [ntdev] Multi CPU Problem with
KeAcquireInStackQueuedSpinLock/KeReleaseInStackQueuedSpinLock
Hi,
Lately I decided to replace the use of
KeAcquireSpinLock/Release with the newer, more efficient
Windows XP KeAcquireInStackQueuedSpinLock/Release.
I ran many tests with this change on a multi-CPU
machine while boot.ini contained the “/OneCpu” flag.
The first time I removed the flag and really ran with
multiple CPUs, my machine hung after a few seconds.
1. I ran with Driver Verifier, which doesn’t bug check
for a deadlock.
2. I generated a “crash on demand” and I cannot see a
deadlock (for example, there is a running thread whose
stack is currently at
hal!KeAcquireInStackQueuedSpinLock+0x50).
I really cannot think of a reason for this behavior
when even Driver Verifier doesn’t bug check AND in the
dump I cannot recognize a hang.
Is anyone aware of a problem with these functions on
multiple CPUs?
Thanks in advance
Alon