InStackQueuedSpinLock vs reader/writer spinlock: which is faster?

tanda996 Member Posts: 17
edited November 2020 in NTDEV

I'm developing a WFP callout driver, and most of the work happens at DISPATCH_LEVEL (in the callout routine). There are many spin lock variants, and I'm confused about which to use. I know they're intended for different situations, but if both are used mostly under read contention, which will run faster? Thanks.


  • Tim_Roberts Member - All Emails Posts: 13,939

    Speed is not really the issue with locking; locks are very rarely a bottleneck. You choose the lock that fits your needs and your restrictions.

    Tim Roberts, [email protected]
    Providenza & Boekelheide, Inc.

  • Jan_Bottorff Member - All Emails Posts: 472

    Something not so apparent about read/write spinlocks is that if you have a lot of cores, you can get lock contention on RELEASING the read lock, because releasing requires an interlocked decrement of the reader count. With a simple spinlock, if you own the lock, release is a plain write, so it can never cause release contention. I had to do some head scratching the first time I saw a performance profile showing many cores spending significant time in the read/write lock release function.
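    The asymmetry Jan describes can be seen in a minimal user-mode sketch (C11 atomics). This is a deliberately simplified hypothetical lock pair, not the kernel's implementation (a real reader/writer spinlock also tracks a writer bit): the read-lock release is an interlocked decrement on a shared counter, while the simple spinlock's release is a plain store by the single owner.

    ```c
    #include <stdatomic.h>

    typedef struct { atomic_int readers; } rw_spinlock;     /* simplified: no writer bit */
    typedef struct { atomic_int held;    } simple_spinlock;

    void rw_acquire_read(rw_spinlock *l) { atomic_fetch_add(&l->readers, 1); }

    void rw_release_read(rw_spinlock *l)
    {
        /* Interlocked decrement: every releasing reader fights for the
           cache line holding the shared reader count. */
        atomic_fetch_sub(&l->readers, 1);
    }

    void simple_acquire(simple_spinlock *l)
    {
        int expected = 0;
        while (!atomic_compare_exchange_weak(&l->held, &expected, 1))
            expected = 0;                       /* spin until we take ownership */
    }

    void simple_release(simple_spinlock *l)
    {
        /* Plain store: there is exactly one owner, so release itself
           can never contend with another releaser. */
        atomic_store(&l->held, 0);
    }
    ```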

    Lock performance degrades badly as the contending cores get farther away from each other. A few years ago on a Xeon, I measured an interlocked increment at a dozen or two clock cycles when confined to the same core. That degraded roughly 10x, to a hundred or two hundred clocks, when competing with other cores on the same chip (L1/L2 cache thrashing), and another 10x when competing with cores on a different socket/NUMA node (inter-core cache line ownership transfer has to cross an interprocessor bus, which is far slower than on-chip buses). If an operation takes 1000 cycles on average, that's about 300 ns at 3 GHz, or only about 3M operations/sec. This becomes a real problem if you want to push 5M IOPS through a single queue. NUMA-aware data structures can perform better on typical servers. Some many-core processors appear as multiple NUMA nodes in a single socket, so latency can degrade more than you'd expect for a single physical socket.
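    A user-mode sketch of the kind of measurement described: several threads doing interlocked increments on one shared cache line. Wrap the loop in a timer (omitted here) and pin the threads to different cores/sockets to see the per-increment cost climb as in Jan's numbers. The thread and iteration counts are arbitrary choices for illustration.

    ```c
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stddef.h>

    enum { NTHREADS = 4, ITERS = 100000 };

    static atomic_long g_counter;           /* one shared, contended cache line */

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < ITERS; i++)
            atomic_fetch_add(&g_counter, 1);    /* interlocked increment */
        return NULL;
    }

    long run_contended_increments(void)
    {
        pthread_t t[NTHREADS];

        atomic_store(&g_counter, 0);
        for (int i = 0; i < NTHREADS; i++)
            pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(t[i], NULL);
        return atomic_load(&g_counter);
    }
    ```

    The atomicity guarantees that no increments are lost, but that correctness is exactly what forces the cache line to bounce between cores.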

    Also note that the application architecture matters a lot too. For example, if you have 64 cores running 64 threads doing I/O in the same process, there is an OS process address-space lock that has to be acquired/released whenever an I/O locks memory pages for DMA. Having 64 processes with one thread per core spreads this address-space locking across 64 locks. In a disk performance test I did last year using diskspd, 64 threads in a single process could only achieve about 1M IOPS. Running 64 diskspd processes, each with one thread per core, pushed I/O performance to over 3M IOPS, with no difference in kernel code or hardware configuration.

    On ARM64 systems, not all read/write locks use the ARMv8.1-A atomic instructions; many currently still use a load-exclusive/store-exclusive pair, which can suffer even more contention on release because there is a longer window during which the memory location must remain unchanged. Load exclusive does not prevent another core from writing the location; it just lets the current core's subsequent store atomically succeed or fail depending on whether any other core has changed it. Load exclusive should really be called "load with awareness of change" for a specified memory location. Intel has interlocked compare-exchange, which can't detect whether another core updated the value between the current core's read and the compare-exchange; it can only detect whether the value is currently the same, not whether it temporarily held other values in between. The plus on Intel is that it has had interlocked memory instructions for a long time, so OS code can assume they are present. If you really need a read followed by a compare-exchange, I like the ARM64 design better, because you can tell whether the value has changed at all since the read.
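    The compare-exchange limitation Jan describes (the classic ABA pattern) can be shown with C11 atomics in a single-threaded sketch, with the "other core's" intervening writes simulated inline: CAS compares only the current value, so it succeeds even though the value changed from A to B and back to A after the read. An LL/SC pair would have failed the store.

    ```c
    #include <stdatomic.h>
    #include <stdbool.h>

    bool aba_goes_unnoticed(void)
    {
        atomic_int v = 100;
        int observed = atomic_load(&v);     /* we read value A (100) */

        /* Simulate another core changing A -> B -> A behind our back. */
        atomic_store(&v, 200);
        atomic_store(&v, 100);

        /* CAS checks only the *current* value against what we observed,
           so it succeeds despite the intervening changes. */
        return atomic_compare_exchange_strong(&v, &observed, 300);
    }
    ```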

    For optimal performance, avoid any shared writable memory; RCU is often a good strategy. Locks often work OK if you have a few cores, but with 256 cores they become a lot less attractive. Getting high performance on many cores is a far more complex problem than which lock runs faster.

  • Peter_Viscarola_(OSR) Administrator Posts: 8,448

    As usual, Mr. Bottorff has taken the time to provide us with some great wisdom. There are many interesting and important points in his post... any one of which could be expanded to article length.

    First, let me point you to what I at least consider the definitive article on synchronization in Windows drivers, written by Mr. Ionescu for The NT Insider some years ago. This should answer any questions you have.

    Next, let me try to make things simpler for you: if you need a spin lock, and you need read/write (that is, shared/exclusive) types of access... you need reader/writer spin locks. If you don't need read/write semantics, in a modern driver you probably want in-stack queued spin locks, because they scale better across multiple CPUs.
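    For reference, the two choices above map to documented WDK APIs: KeAcquireInStackQueuedSpinLock/KeReleaseInStackQueuedSpinLock for in-stack queued spin locks, and ExAcquireSpinLockShared/Exclusive on an EX_SPIN_LOCK for reader/writer spin locks. A minimal kernel-mode sketch (illustrative only; variable names are made up, and initialization/IRQL handling is elided):

    ```c
    #include <wdm.h>

    KSPIN_LOCK   g_QueuedLock;   /* initialized once with KeInitializeSpinLock */
    EX_SPIN_LOCK g_RwLock;       /* zero-initialized reader/writer spin lock   */

    VOID UpdateWithQueuedLock(VOID)
    {
        KLOCK_QUEUE_HANDLE handle;   /* per-acquire queue node lives on the stack */

        KeAcquireInStackQueuedSpinLock(&g_QueuedLock, &handle);
        /* ... exclusive access to shared state ... */
        KeReleaseInStackQueuedSpinLock(&handle);
    }

    VOID ReadWithSharedLock(VOID)
    {
        KIRQL oldIrql = ExAcquireSpinLockShared(&g_RwLock);
        /* ... read-only access; other readers may hold the lock concurrently ... */
        ExReleaseSpinLockShared(&g_RwLock, oldIrql);
    }
    ```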


    Peter Viscarola
