views: 58
answers: 3

I am having a problem with mutexes (pthread_mutex on Linux): if a thread locks a mutex again right after unlocking it, another thread waiting on that mutex has very little chance of getting the lock. I've attached test code in which one mutex is created, along with two threads that, in an endless loop, lock the mutex, sleep for a while, and unlock it again.

The output I expect is "alive" messages alternating between the two threads (e.g. 121212121212). However, what I actually get is one thread obtaining the vast majority of locks (e.g. 111111222222222111111111, or just 1111111111111...).

If I add a usleep(1) after the unlock, everything works as expected. Apparently when the running thread sleeps, the other thread gets its lock. However, this is not what I expected, as the other thread has already called pthread_mutex_lock. I suspect this is simply how it is implemented, in that the active thread has priority, but it causes problems in this particular test case. Is there any way to prevent it (short of adding a deliberately large delay, or some kind of signaling), or where is my error in understanding?

#include <pthread.h>
#include <stdio.h>

#include <string.h>
#include <sys/time.h>
#include <unistd.h>

pthread_mutex_t mutex;

void* threadFunction(void *id) {
 int count = 0;

 while (1) {
  pthread_mutex_lock(&mutex);
  usleep(50 * 1000);          /* hold the lock for 50 ms */
  pthread_mutex_unlock(&mutex);
  // usleep(1);               /* uncommenting this makes the threads alternate */

  ++count;
  if (count % 10 == 0) {
   printf("Thread %d alive\n", *(int*)id);
   count = 0;
  }
 }

 return NULL;
}

int main() {
 // create one mutex (default attributes)
 pthread_mutexattr_t attr;
 pthread_mutexattr_init(&attr);
 pthread_mutex_init(&mutex, &attr);
 pthread_mutexattr_destroy(&attr);
 // create two threads
 pthread_t thread1;
 pthread_t thread2;

 pthread_attr_t attributes;
 pthread_attr_init(&attributes);

 int id1 = 1, id2 = 2;
 pthread_create(&thread1, &attributes, &threadFunction, &id1);
 pthread_create(&thread2, &attributes, &threadFunction, &id2);

 pthread_attr_destroy(&attributes);

 sleep(1000);
 return 0;
}
+1  A: 

You misunderstand the way mutexes work (at least under your particular implementation). Releasing a mutex does not automatically hand control to another thread that is waiting for it.

Generally, threads keep running until either they have to wait for a resource or they use up their quantum (time slice).

Where there is no resource contention and all threads have the same priority, the fairest scheduling algorithm is to give each an equal time slice before swapping. That's because the swap operation itself takes some time, so you don't want to be doing it too often (relative to the real work being done by the threads).

If you want to alternate between threads, you need something more deterministic than mutexes, such as a set of condition variables:

paxdiablo
I was hoping the other thread's lock call would have been registered, causing the currently running thread's re-lock to fail (i.e. queued mutexes). Since this is a contrived test case and not actually happening in my app, I will leave it at that, but I'll keep your answer in mind should this issue come up again.
Daniel
A: 

This isn't deadlock, and it isn't even livelock. It's merely a lack of fairness. If fairness is critical for you, you should use primitives that guarantee non-starvation, e.g. a queueing mutex.

Kilian Foth
A: 

When the first thread unlocks the mutex, there is of course some delay before the waiting thread can observe the change and be scheduled to run. That delay is likely longer than it takes the first thread to re-lock the mutex, since the first thread is already running and doesn't have to wait this time.

DeadMG