I'm using a spin lock to protect a very small critical section. Contention happens very rarely so a spin lock is more appropriate than a regular mutex.

My current code is as follows, and assumes x86 and GCC:

volatile int exclusion = 0;

void lock() {
    while (__sync_lock_test_and_set(&exclusion, 1)) {
        // Do nothing. This GCC builtin acts as
        // a memory barrier (an acquire barrier).
    }
}

void unlock() {
    __sync_synchronize(); // Memory barrier.
    exclusion = 0;
}

So I'm wondering:

  • Is this code correct? Does it correctly ensure mutual exclusion?
  • Does it work on all x86 operating systems?
  • Does it work on x86_64 too? On all operating systems?
  • Is it optimal?
    • I've seen spin lock implementations using compare-and-swap but I'm not sure which is better.
    • According to the GCC atomic builtins documentation (http://gcc.gnu.org/onlinedocs/gcc-4.1.2/gcc/Atomic-Builtins.html) there's also __sync_lock_release. I'm not an expert on memory barriers so I'm not sure whether it's okay for me to use this instead of __sync_synchronize.
    • I'm optimizing for the case in which there's no contention.

I do not care at all about contention. There may be 1, maybe 2 other threads trying to lock the spin lock once every few days.

+3  A: 

In response to your questions:

  1. Looks OK to me.
  2. Assuming the OS supports GCC (and GCC has the functions implemented), this should work on all x86 operating systems. The GCC documentation suggests that a warning will be produced if they are not supported on a given platform.
  3. There's nothing x86-64-specific here, so I don't see why not. This can be expanded to cover any architecture that GCC supports, although there may be more optimal ways of achieving this on non-x86 architectures.
  4. You might be slightly better off using __sync_lock_release() in the unlock() case, as it clears the lock word and provides the memory barrier in a single operation (see the sketch below). However, assuming your assertion that contention will be rare holds, it looks good to me.
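
For illustration, a minimal sketch of what that unlock() could look like (my addition, not part of the original answer), reusing the question's exclusion variable; on x86 this should compile down to a plain store:

void unlock() {
    // Writes 0 to the lock word and acts as a release
    // barrier, replacing __sync_synchronize() + store.
    __sync_lock_release(&exclusion);
}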
Dave Rigby
+1  A: 

I can't comment on correctness, but the title of your question raised a red flag before I even read the question body. Synchronization primitives are devilishly hard to get right... if at all possible, you're better off using a well-designed/maintained library, perhaps pthreads or boost::thread.

Jason S
I have a very good reason to not use pthreads or boost::threads in this specific case.
Hongli
A: 

One improvement to suggest is using TATAS (test-and-test-and-set): interlocked CAS-style operations are quite expensive for the processor, so it's better to avoid them where possible. Another thing: make sure you won't suffer from priority inversion (what if a high-priority thread tries to acquire the lock while a low-priority thread is trying to free it? On Windows, for example, this issue will ultimately be solved by the scheduler using a priority boost). You can also explicitly give up your thread's time slice if you didn't succeed in acquiring the lock in your last 20 tries (for example), as sketched below.
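
As a rough sketch of that back-off idea, assuming POSIX sched_yield() (the yield call and the count of 20 are illustrative choices, not prescribed above):

#include <sched.h>  // sched_yield()

void lock_with_yield(volatile int *exclusion) {
    int tries = 0;
    while (__sync_lock_test_and_set(exclusion, 1)) {
        while (*exclusion) {           // TATAS: spin read-only first
            if (++tries == 20) {       // too many failed tries:
                sched_yield();         // give up our time slice
                tries = 0;
            }
        }
    }
}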

I'm not sure this is an improvement, given the OP's stated assumption that contention is *extremely* rare. In TATAS, the first test is to cheaply check if the lock is held, and to spin in the cheap non-interlocked code until the lock looks free. Only then does it advance to the expensive interlocked test-and-set. In the OP's case, the lock almost always *is* free, so this just adds another test that 99.99999% of the time immediately falls through to the interlocked test.
Paul McGuire
+5  A: 

Looks fine to me. Btw, here is the textbook implementation that is more efficient even in the contended case.

void lock(volatile int *exclusion)
{
    // Atomic exchange; also an acquire barrier.
    while (__sync_lock_test_and_set(exclusion, 1))
        // Spin read-only (no bus locking) while the
        // lock is held: test-and-test-and-set.
        while (*exclusion)
            ;
}
sigjuice
A: 

Your unlock procedure doesn't need the memory barrier; the assignment to exclusion is atomic as long as it is dword-aligned on x86.

Ira Baxter
The memory barrier isn't there to ensure an atomic write to the lock.
Logan Capaldo
That's right. It doesn't have anything to do with the atomicity of the write. That's my point; it doesn't add anything at all.
Ira Baxter
Yes it does. See http://www.cs.umd.edu/~pugh/java/memoryModel/DoubleCheckedLocking.html
Ken
@Ken: The memory write to location "exclusion" on free will be ordered after the write that locks it if they both execute on the same CPU. If CPU 1 does the lock and CPU 2 does the unlock, then a memory-ordering problem might occur; but the only way for this to occur is for CPU 2 to decide it has the lock, which it cannot reasonably do without attempting to acquire the lock (which encounters the membar), or reading the "exclusion" location, which it can't reasonably see as 1 until after CPU 1 actually issues the write. The example discussed in Pugh is a double-checked lock. How is that relevant?
Ira Baxter
+1  A: 

If you're on a recent version of Linux, you may be able to use a futex -- a "fast userspace mutex":

A properly programmed futex-based lock will not use system calls except when the lock is contended

In the uncontested case, which you're trying to optimize for with your spinlock, the futex will behave just like a spinlock, without requiring a kernel syscall. If the lock is contested, the waiting takes place in the kernel without busy-waiting.
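
A minimal sketch of such a lock, following Drepper's "Futexes Are Tricky" paper, assuming Linux, GCC, and x86 (where __sync_lock_test_and_set() compiles to a full xchg and can store values other than 1):

#include <linux/futex.h>   // FUTEX_WAIT, FUTEX_WAKE
#include <sys/syscall.h>   // SYS_futex
#include <unistd.h>        // syscall()

static int futex_word = 0;  // 0 = free, 1 = locked, 2 = locked, waiters exist

void futex_lock(void) {
    int c = __sync_val_compare_and_swap(&futex_word, 0, 1);
    if (c == 0)
        return;  // fast path: uncontended, no syscall
    if (c != 2)
        c = __sync_lock_test_and_set(&futex_word, 2);
    while (c != 0) {
        // Sleep in the kernel until the word changes from 2.
        syscall(SYS_futex, &futex_word, FUTEX_WAIT, 2, NULL, NULL, 0);
        c = __sync_lock_test_and_set(&futex_word, 2);
    }
}

void futex_unlock(void) {
    if (__sync_fetch_and_sub(&futex_word, 1) != 1) {
        futex_word = 0;  // waiters exist: clear, then wake one
        syscall(SYS_futex, &futex_word, FUTEX_WAKE, 1, NULL, NULL, 0);
    }
}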

Commodore Jaeger
+1  A: 

So I'm wondering:

* Is it correct?

In the context mentioned, I would say yes.

* Is it optimal?

That's a loaded question. By reinventing the wheel you are also reinventing a lot of problems that have been solved by other implementations:

  • I'd expect a waste loop on failure where you aren't trying to access the lock word.

  • The full barrier in the unlock is stronger than needed; the unlock only needs release semantics (that's why you'd use __sync_lock_release, so that you'd get st1.rel on Itanium instead of mf, or an lwsync on PowerPC, ...). If you really only care about x86 or x86_64, the types of barriers used here don't matter as much (but if you were to make the jump to Intel's Itanium for an HP-IPF port, you wouldn't want this).

  • You don't have the pause instruction that you'd normally put in your waste loop (see the sketch after this list).

  • When there is contention you want something that blocks: semop, or even a dumb sleep in desperation. If you really need the performance that this buys you, then the futex suggestion is probably a good one. If you need the performance this buys you badly enough to maintain this code, you have a lot of research to do.
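
As a sketch, the question's lock() with a waste loop and the pause hint added via inline asm (my illustration, not code from this answer; older GCC has no portable builtin for pause):

void lock(volatile int *exclusion) {
    while (__sync_lock_test_and_set(exclusion, 1)) {
        while (*exclusion)
            __asm__ __volatile__("pause");  // x86 spin-wait hint
    }
}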

Note that there was a comment saying that the release barrier wasn't required. That isn't true even on x86, because the release barrier also serves as an instruction to the compiler not to shuffle other memory accesses around the "barrier", very much like what you'd get if you used asm("" ::: "memory").

* on compare and swap

On x86, __sync_lock_test_and_set will map to an xchg instruction, which has an implied lock prefix. This is definitely the most compact generated code (especially if you use a byte for the "lock word" instead of an int), but it is no less correct than if you used LOCK CMPXCHG. Compare-and-swap can be used for fancier algorithms (like putting a non-zero pointer to metadata for the first "waiter" into the lock word on failure); see the sketch below.
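
A hedged sketch of the plain compare-and-swap variant using the GCC builtin; for a simple lock it behaves the same as the xchg version:

void cas_lock(volatile int *exclusion) {
    // Try to move the lock word 0 -> 1; on failure,
    // spin read-only until the lock looks free.
    while (!__sync_bool_compare_and_swap(exclusion, 0, 1)) {
        while (*exclusion)
            ;
    }
}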

Peeter Joot