views: 164

answers: 3

This may sound like a stupid question, but if one locks a resource in a multi-threaded app, is the operation that happens on that resource done atomically?

I.e.: can the processor be interrupted, or can a context switch occur, while that resource has a lock on it? If it does, then nothing else can access this resource until the locking thread is scheduled back in to finish off its work. Sounds like an expensive operation.

+12  A: 

The processor can very definitely still switch to another thread, yes. Indeed, in most modern computers there can be multiple threads running simultaneously anyway. The locking just makes sure that no other thread can acquire the same lock, so you can make sure that an operation on that resource is atomic in terms of that resource. Code using other resources can operate completely independently.

You should usually lock for short operations wherever possible. You can also choose the granularity of locks... for example, if you have two independent variables in a shared object, you could use two separate locks to protect access to those variables. That will potentially provide better concurrency - but at the same time, more locks means more complexity and more potential for deadlock. There's always a balancing act when it comes to concurrency.
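
As a rough illustration of the two-lock idea, here is a minimal Java sketch (the class and field names are invented for this example). Each field is guarded by its own lock object, so threads that only touch foo never contend with threads that only touch bar:

public class Counters {
    // One dedicated lock object per independent variable.
    private final Object fooLock = new Object();
    private final Object barLock = new Object();

    private int foo;
    private int bar;

    public void incrementFoo() {
        synchronized (fooLock) {   // only foo-updaters contend here
            foo++;
        }
    }

    public void incrementBar() {
        synchronized (barLock) {   // independent of fooLock
            bar++;
        }
    }
}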

Jon Skeet
so then if another thread is waiting for that resource, it just has to keep waiting?
Tony
@Tony - yep, it will block waiting to acquire the lock until it is released by the first thread
Paolo
Well, of course. That's what one wants from a lock.
botismarius
If another thread is waiting for the resource by waiting on the lock, then yes, it has to keep waiting. Of course, it's still possible to break your code by using the object that should be protected by the lock in an unprotected manner (e.g., if the same protected object is used in two places, potentially simultaneously, but different locks are acquired to protect the object.)
Greg D
+1 However, "more locks means more complexity" also means a higher cost of overall locking and synchronisation.
mloskot
@mloskot: No, more locks usually mean a *lower* cost of locking/synchronization, because you can be more fine-grained. Imagine a program which only had a single lock to cover all shared state - there'd be a *vast* amount of waste there, compared with using lots of locks, where different threads can each "own" a different lock at a different time. "More locks" doesn't mean "more locking."
Jon Skeet
@Jon Skeet - I agree about the fine-grained property, but the "usually" is a bit disturbing. Certainly, a single lock is a degenerate idea as well. I was referring more to a lock in its most approachable form, the mutex. Locking/unlocking a mutex may be expensive, whereas atomic operations are much cheaper. Spreading a huge number of mutexes around may decrease performance significantly. So a detailed analysis of the problem is necessary; no single solution fits all problems well.
mloskot
@mloskot: The "usually" was in the context of comparing "chunky" locks with "fine" locks - *not* with lock-free atomic changes. Note that obtaining an uncontested lock can be very quick indeed (e.g. in Java or .NET) - and of course you're more likely to be uncontested when the locks are fine-grained.
Jon Skeet
@Jon Skeet +1 and understood, thanks for clarification.
mloskot
+7  A: 

You're exactly right. That's one reason why it's so important to lock for short periods of time. However, this isn't as bad as it sounds, because no other thread that's waiting on the lock will get scheduled until the thread holding the lock releases it.
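
To make that concrete, here is a small runnable Java sketch (thread names and timings are invented for illustration). The waiter thread blocks on the lock and makes no progress until the holder releases it, while a thread that does not use the lock keeps running the whole time:

public class BlockingDemo {
    private static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Thread holder = new Thread(() -> {
            synchronized (lock) {          // hold the lock for ~2 seconds
                sleepQuietly(2000);
            }
        });

        Thread waiter = new Thread(() -> {
            synchronized (lock) {          // blocks until holder releases the lock
                System.out.println("waiter finally got the lock");
            }
        });

        Thread unrelated = new Thread(() ->
            System.out.println("unrelated thread runs immediately"));

        holder.start();
        sleepQuietly(100);                 // let the holder acquire the lock first
        waiter.start();
        unrelated.start();

        holder.join();
        waiter.join();
        unrelated.join();
    }

    private static void sleepQuietly(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}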

dsimcha
"That's one reason why it's so important to lock for short period of time" ?????? BUT a locked section of a thread CAN BE DEFINITELY interrupted by another thread. To be precise, by any thread that does not use the same lock.
ulrichb
+2  A: 

Yes, a context switch can definitely occur. This is exactly why, when accessing a shared resource, it is important to lock it from every thread that uses it. When thread A holds the lock, thread B cannot enter the code protected by that lock.

For example if two threads run the following code:

1. lock(l);
2. -- change shared resource S here --
3. unlock(l);

A context switch can occur after step 1, but the other thread cannot acquire the lock at that time, and therefore cannot change the shared resource. If one of the threads accesses the shared resource without taking the lock, bad things can happen!
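
For reference, a minimal runnable Java version of that pattern might look like the sketch below (the names are invented for illustration); unlocking in a finally block makes sure step 3 happens even if step 2 throws:

import java.util.concurrent.locks.ReentrantLock;

public class SharedResource {
    private final ReentrantLock l = new ReentrantLock();
    private int s;                  // the shared resource S

    public void update() {
        l.lock();                   // step 1: acquire the lock
        try {
            s++;                    // step 2: change the shared resource S
        } finally {
            l.unlock();             // step 3: release the lock, even on exceptions
        }
    }
}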

Regarding the wastefulness: yes, locking can be a wasteful method. This is why there are techniques that try to avoid locks altogether. These techniques are called lock-free, and many of them are built on atomic hardware primitives such as CAS (compare-and-swap).
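
As a rough sketch of the lock-free idea in Java (the class name is invented for illustration), an increment can be built on compareAndSet, which retries instead of ever blocking:

import java.util.concurrent.atomic.AtomicInteger;

public class LockFreeCounter {
    private final AtomicInteger value = new AtomicInteger();

    public int increment() {
        while (true) {
            int current = value.get();
            int next = current + 1;
            // CAS succeeds only if no other thread changed the value in
            // the meantime; otherwise we loop and try again.
            if (value.compareAndSet(current, next)) {
                return next;
            }
        }
    }
}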

Anna