views: 527
answers: 4
+3  Q: 

Why do locks work?

If the locks make sure only one thread accesses the locked data at a time, then what controls access to the locking functions?

I thought that boost::mutex::scoped_lock should go at the beginning of each of my functions so the local variables don't get modified unexpectedly by another thread. Is that correct? What if two threads try to acquire the lock at almost the same time? Won't the lock's own internal variables be corrupted by the other thread?

My question is not boost-specific but I'll probably be using that unless you recommend another.

+10  A: 

You only need exclusive access to shared data. Local variables inside functions, unless they're static or refer to heap data, have a separate instance for each thread, so there's no need to worry about them. But shared data (anything reached through a pointer, for example) should be locked before it's touched.

As for how locks work, they're carefully designed to prevent race conditions and often have hardware-level support to guarantee atomicity. That is, there are machine-language instructions that are guaranteed to execute atomically, and semaphores (and mutexes) can be implemented on top of them.
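To make the shared-versus-local distinction concrete, here is a minimal sketch assuming Boost.Thread (since the question mentions boost::mutex::scoped_lock); the counter, mutex, and function names are made up for the example:

#include <boost/thread/thread.hpp>
#include <boost/thread/mutex.hpp>

int shared_counter = 0;        // visible to every thread -> needs the lock
boost::mutex counter_mutex;    // protects shared_counter

void worker()
{
    int local = 0;             // each thread gets its own copy; no lock needed

    for (int i = 0; i < 1000; ++i) {
        ++local;               // always safe without locking

        // scoped_lock acquires the mutex here and releases it automatically
        // when it goes out of scope at the end of each loop iteration
        boost::mutex::scoped_lock lock(counter_mutex);
        ++shared_counter;      // safe only while the mutex is held
    }
}

int main()
{
    boost::thread t1(worker);
    boost::thread t2(worker);
    t1.join();
    t2.join();
    // shared_counter is 2000 here; without the lock it could end up smaller
}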

Sydius
Thanks, I was trying to find out whether a function's local variables have a separate instance for each thread. Could you provide a link to a source for this info?
Tim Matthews
I don't have a source for function variables, sorry. I wish I did. The rest is from the semaphore Wikipedia article.
Sydius
Never mind, a search for "thread separate stack" found this: http://msdn.microsoft.com/en-us/library/ms686774(VS.85).aspx
Tim Matthews
+8  A: 

You're right, when implementing locks you need some way of guaranteeing that two processes don't get the lock at the same time. To do this, you need to use an atomic instruction - one that's guaranteed to complete without interruption. One such instruction is test-and-set, an operation that will get the state of a boolean variable, set it to true, and return the previously retrieved state.

This allows you to write code that continually tests whether it can get the lock. Assume x is a variable shared between threads:

while(testandset(x)); // spin until test-and-set returns false, i.e. we were the one to set it
// ...
// critical section
// this code can only be executed by one thread at a time
// ...
x = 0; // set x back to 0, allowing another thread into the critical section

Since the other threads continually test the lock until they're let into the critical section, this is a very inefficient way of guaranteeing mutual exclusion. However, using this simple concept, you can build more complicated control structures like semaphores that are much more efficient (because the waiting threads aren't looping, they're sleeping).
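For a runnable version of the same idea, here is a sketch assuming C++11's std::atomic_flag, whose test_and_set() is the standard library's test-and-set operation (the function and variable names are made up for the example):

#include <atomic>
#include <thread>

std::atomic_flag lock_flag = ATOMIC_FLAG_INIT;   // the shared "x" from above
int shared_value = 0;                            // data the lock protects

void acquire()
{
    // test_and_set() atomically sets the flag and returns its previous value;
    // keep spinning until it returns false, meaning we were the one to set it
    while (lock_flag.test_and_set(std::memory_order_acquire)) {
        // busy-wait: this is the inefficiency mentioned above
    }
}

void release()
{
    lock_flag.clear(std::memory_order_release);  // allow another thread in
}

void increment()
{
    for (int i = 0; i < 100000; ++i) {
        acquire();
        ++shared_value;    // critical section: one thread at a time
        release();
    }
}

int main()
{
    std::thread a(increment), b(increment);
    a.join();
    b.join();
    // shared_value ends up as 200000; without the lock it could be less
}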

Kyle Cronin
Thanks, very interesting read.
Tim Matthews
Another such atomic instruction (and probably the most important) is compare-and-swap: http://en.wikipedia.org/wiki/Compare_and_swap.
J S
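For reference, a short sketch of how compare-and-swap can serve the same purpose, assuming C++11's std::atomic (the names are made up for the example):

#include <atomic>

std::atomic<int> lock_word(0);   // 0 = free, 1 = held

void cas_acquire()
{
    int expected = 0;
    // compare_exchange_strong atomically checks whether lock_word still equals
    // expected (0) and, if so, sets it to 1 and returns true; on failure it
    // writes the current value into expected and returns false, so reset it.
    while (!lock_word.compare_exchange_strong(expected, 1)) {
        expected = 0;
    }
}

void cas_release()
{
    lock_word.store(0);          // mark the lock as free again
}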
Note that the answer talks about spinlocks, which can be implemented in user code. Locks that put blocked threads to sleep are usually (always?) implemented using operating system primitives; Boost just provides a unified interface to those.
mghie
You're right, I should have made that distinction. You can have OS-level spinlocks too, but they're not a good idea.
Kyle Cronin
+3  A: 

The simplest explanation is that locks, way down underneath, are based on a hardware instruction that is guaranteed to be atomic and can't clash between threads.

Ordinary local variables in a function are already specific to an individual thread. It's only statics, globals, and other data that can be accessed simultaneously by multiple threads that need locks protecting them.
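As a small illustration of the first point, here is a sketch (assuming C++11's std::thread; the names are made up for the example) showing that each thread's local variable lives at a different address, i.e. on that thread's own stack:

#include <iostream>
#include <thread>

void show_local()
{
    int local = 0;   // lives on this thread's own stack
    std::cout << "local at " << &local
              << " in thread " << std::this_thread::get_id() << std::endl;
}

int main()
{
    std::thread a(show_local);
    std::thread b(show_local);
    a.join();
    b.join();
    // The two printed addresses differ: each thread has its own stack, so
    // 'local' never needs a lock. (The output lines themselves may interleave,
    // because std::cout is shared state that this sketch doesn't lock.)
}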

Larry Gritz
A: 

The mechanism that operates the lock controls access to it.

Any locking primitive needs to be able to communicate changes between processors, so it's usually implemented on top of bus operations, i.e., reading and writing memory. It also needs to be structured so that two threads attempting to claim it won't corrupt its state. That's not easy, but you can usually trust that any OS-implemented lock will not get corrupted by multiple threads.

MSN
