Hi,

Assume that the following code is being executed by 10 threads.

pthread_mutex_lock(&lock)
Some trivial code
pthread_mutex_unlock(&lock)

For the purpose of explanation, let's say the threads are T1, T2, T3, ..., T10. My requirement is that as long as any of T1, T2 or T3 is waiting to acquire the lock, the other threads (i.e. T4, T5, ..., T10) should not be able to acquire it. In other words, T1, T2 and T3 should have precedence over the other threads in acquiring the lock.

I guess it could be done by increasing the priority of threads T1, T2 and T3, i.e. here is the pseudo-code:

if this thread is T1 or T2 or T3
    increase its priority
pthread_mutex_lock(&lock)
Some trivial code
pthread_mutex_unlock(&lock)
if this thread is T1 or T2 or T3
    decrease its priority back to normal

Please note that I want a solution that works on Linux and uses pthreads. I don't really care about any other platform.

Also note that I don't really want to make these 3 threads realtime; I want them to exhibit their default behaviour (scheduling and priority), except that in the small piece of code above I want them to always have precedence in acquiring the lock.

I have read some man pages about scheduling policies and scheduling priorities on Linux but can't really make sense of them :(

Will this work? Can you help me with the exact pthread API required to accomplish the above task?

Regards lali

+2  A: 

Alternatively, you may just introduce another lock for the higher priority threads. Consider the following pseudo-code (I am not familiar with the pthread semantics, but I believe it is not hard to map the code to the needed calls).

EDIT (thanks JosephH)

Introducing the exec semaphore, initialized to 3 (the number of high-prio threads). Note that pend(exec,3) means this pend will sleep until all 3 slots are available, and will then consume them all:



//init
exec = semaphore(3,3);

//========================

if this is NOT thread (t1,t2,t3)
    lock(low_prio);
    sem_pend(exec,3);
else
    sem_pend(exec,1);
lock(high_prio);
//...
unlock(high_prio);
if this is NOT thread (t1,t2,t3)
    sem_release(exec,3);
    sleep(0); //yield();  //ensures that sem_pend(exec,1) is executed
    unlock(low_prio);
else
    sem_release(exec,1);
ULysses
Sorry, but I really don't see how that works. If both t1 and t4 try to take the lock whilst t2 has it, then t1 and t4 will both end up waiting at lock(high_prio), and it is not guaranteed which one will wake first.
JosephH
Yepp, you are right, this I have missed. Still, I am sure there is a way; consider the edit.
ULysses
I don't think this would work either; the two-lock approach could work, but you would have to give the 'high priority' threads a chance to overtake the 'low priority' ones (so it should be implemented as a two-step lock).
Unreason
@Unreason have you considered the latest edit?
ULysses
@JosephH do you still think this code is invalid?
ULysses
I don't think using a sleep() is a good idea; it's probably not guaranteed to do what you want. Also, being rather picky, semaphore() is not part of pthreads, and the original poster explicitly said he wanted a pthread solution. (I've not had time to fully think through your solution yet.)
JosephH
@JosephH, yield or sleep is a precautionary measure. But I do agree that there is no `sem_wait()` implementation in `pthreads` that takes an extra parameter telling it how many slots to acquire. In this sense my answer is not really useful.
ULysses
+1  A: 

As I understand it, the only way you can truly guarantee this would be to write a lock that works like that yourself.

You will need condition variables, and counts of the number of waiting low / high priority threads.

In terms of the concepts and APIs you'll need, it is relatively similar to implementing a read/write lock (the semantics you need are completely different, obviously, but if you understand how the r/w lock works, you'll understand how to implement what you want).

You can see an implementation of a read write lock here:

http://ptgmedia.pearsoncmg.com/images/0201633922/sourcecode/rwlock.c

In the lower priority threads, you'd need to wait for high priority threads to finish, in the same way readers wait for writers to finish.

(The book the above code is taken from is also a great POSIX threads book, btw: http://www.informit.com/store/product.aspx?isbn=0201633922 )

JosephH
A: 

(The first two attempts had bugs; please jump to EDIT2)

Maybe this would work?

if NOT this thread is T1 or T2 or T3
    pthread_mutex_lock(&lock1) // see note below
    pthread_mutex_lock(&lock2)
    Some trivial code
    pthread_mutex_unlock(&lock2)
    pthread_mutex_unlock(&lock1)
else
    pthread_mutex_lock(&lock2)
    Some trivial code
    pthread_mutex_unlock(&lock2)        
end if

Reasoning: some threads will compete for two locks and therefore will have lower priority, while some threads will compete for only one lock and therefore will have higher priority. Still, the difference might be marginal; the resolution then would be to introduce some lag between acquiring the first lock and attempting the second lock, during which the higher priority threads would be given a chance to get lock2.
(disclaimer: I am a newbie when it comes to this)

EDIT: Another attempt/approach

if NOT (this thread is T1 or T2 or T3)  
    pthread_mutex_lock(&lock1)
    if pthread_mutex_trylock(&lock2) == 0  // low priority threads will not get queued
        Some trivial code
        pthread_mutex_unlock(&lock2)
    end if
    pthread_mutex_unlock(&lock1)
else 
    if (this thread is T1 or T2 or T3)
        pthread_mutex_lock(&lock2)
        Some trivial code
        pthread_mutex_unlock(&lock2)        
    end if
end if

EDIT2: Another attempt (trying to learn something here)

if NOT (this thread is T1 or T2 or T3)  
    pthread_mutex_lock(&lock1)
    while !(pthread_mutex_trylock(&lock2) == 0)
        pthread_yield()
    Some trivial code
    pthread_mutex_unlock(&lock2)
    pthread_mutex_unlock(&lock1)
else 
    if (this thread is T1 or T2 or T3)
        pthread_mutex_lock(&lock2)
        Some trivial code
        pthread_mutex_unlock(&lock2)        
    end if
end if
Unreason
I suspect that you wanted to test if a thread `is NOT t1 or t2 or t3`, i.e. low priority. This approach has the same type of flaw my first version had: if an hp thread is executing, then there might be two other threads, one hp and one lp, waiting on lock2, and you cannot tell which will acquire it first.
ULysses
@ULysses, yes, NOT was intended (edited it). As for several threads waiting on lock2, I think you are right. I mention introducing a lag, but it seems that is not enough to guarantee the priority, and if I understand correctly your semaphore approach would guarantee it. I'll put in another edit...
Unreason
Now what you have is an lp thread falling through in case an hp thread is executing.
ULysses
@ULysses, huh, right... :)
Unreason
Unreason
Nope, this way you have trouble with lock1 on the second iteration of the loop: you either deadlock on it, or always get an error, or increment the lock count, or, even worse, have undefined behavior. See http://opengroup.org/onlinepubs/007908775/xsh/pthread_mutex_lock.html
ULysses
@ULysses, does the same comment apply to the EDIT2?
Unreason
OK, it looks better now. I am not sure, but in such cases I believe there is a chance that the try_lock succeeds before an hp thread acquires lock2: consider one hp thread executing, another hp thread waiting on lock2 and an lp thread in the loop; now the first hp thread releases lock2, and at the same moment the lp thread executes try_lock and succeeds before the second hp thread. So I'd still refine the code, but I have no experience in pthreads; maybe it handles such situations correctly.
ULysses
The way I understand it (limited! :)), the lp thread in the loop would not get the lock. I assume try_lock will only get the lock if there are no threads blocked on the mutex. From opengroup.org: *'If there are threads blocked on the mutex object referenced by mutex when pthread_mutex_unlock() is called, resulting in the mutex becoming available, the scheduling policy is used to determine which thread shall acquire the mutex.'* I assume this means that, for threads of the same priority, a thread cannot get to it out of order; so if there were an hp thread blocked on the mutex, it would run before the lp thread.
Unreason
Even your last method still busy-waits when the low priority thread is locked, which is not a good design.
caf
@caf, I see; basically it is not necessary to loop mindlessly, because you can use signals to avoid looping.
Unreason
@Unreason: any MT code using any form of yield is IMO broken. caf posted probably the simplest implementation for the case of two priorities; it is correct and is pretty much how this is supposed to be done. Do not try to reinvent wheels, square ones at that. (I posted a somewhat overkill solution for N priorities.)
Dummy00001
A: 

To implement that with pthreads you would need N lists, one per thread priority. The lists would contain pointers to the threads' pthread_cond_t variables.

Schematic untested meta-code:

/* the main lock */
pthread_mutex_t TheLock = PTHREAD_MUTEX_INITIALIZER;

/* service structures: prio lists and the lock for them */
pthread_mutex_t prio_list_guard = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t *prio_lists[MY_MAX_PRIO][MY_MAX_THREAD]; /* 0 == highest prio */

/* lock */
void
prio_lock(int myprio)
{
    pthread_cond_t x;

    pthread_mutex_lock( &prio_list_guard );

    /* fast path: nobody holds the lock */
    if (0 == pthread_mutex_trylock( &TheLock )) {
        pthread_mutex_unlock( &prio_list_guard );
        return;
    }

    pthread_cond_init( &x, 0 );
    LIST_ADD( prio_lists[myprio], &x );

    while(1)    /* handle spurious wake-ups */
    {
        pthread_cond_wait( &x, &prio_list_guard );
        if (0 == pthread_mutex_trylock( &TheLock ))
        {
            LIST_REMOVE( prio_lists[myprio], &x );
            pthread_mutex_unlock( &prio_list_guard );
            pthread_cond_destroy( &x );
            return;
        }
    }
}

/* unlock */
void
prio_unlock()
{
    int i;
    pthread_cond_t *p;

    pthread_mutex_lock( &prio_list_guard );

    for (i=0; i<MY_MAX_PRIO; i++)
    {
        if ((p = LIST_GETFIRST( prio_lists[i] )))
        {
            pthread_cond_signal( p );
            break;
        }
    }

    pthread_mutex_unlock( &TheLock );

    pthread_mutex_unlock( &prio_list_guard );
}

The code also handles spurious wake-ups from pthread_cond_wait(), but frankly I have never seen that happen.

Edit1. Note that prio_lists above is a primitive form of a priority queue.

Dummy00001
Meta code shouldn't mean dirty code. In `prio_lock()` you lock the `prio_list_guard` twice, and in `prio_unlock()` you should first unlock `TheLock` and then signal the list, otherwise you will always have a 'spurious' wake-up in a cycle.
ULysses
@ULysses: fixed. As to "first unlock TheLock": you can do it, but this way it is IMO cleaner, and it is a matter of taste. See `man pthread_cond_wait` for the implied locking of the condition variable's mutex: the locking thread will wake up only after the unlocking thread drops the prio_list_guard lock.
Dummy00001
@Dummy, OK, had no experience with the pthread lib. This code now looks fine to me.
ULysses
@ULysses: thanks for the review ;) This is very close to (and reuses ideas from) what I do in the production code.
Dummy00001
There's no need to use a condition variable per process - one per priority level is sufficient.
caf
@caf: thanks, interesting idea; I'll try to test it. The code in my case also attempts to provide FIFO guarantees (and in my case that is desired); that's why there is a queue with one pthread_cond_t per thread. Also, my idea is to wake up only the first thread, to avoid the avalanche (thundering herd) effect, where many threads are made runnable but only one can really proceed. I will try to see how it works with one pthread_cond_t per priority.
Dummy00001
Take a look at my answer - I use `pthread_cond_signal()` to only wake one thread (though I don't provide FIFO guarantees, since the OP didn't ask for it). It's also possible to provide FIFO with only one condition variable, but you then have to use `pthread_cond_broadcast()`, and so it *can* be susceptible to the "thundering herd" problem (I've written an example here: http://stackoverflow.com/questions/3050083/linux-synchronization-with-fifo-waiting-queue/3050871#3050871 )
caf
+2  A: 

Here's my implementation. Low priority threads use prio_lock_low() and prio_unlock_low() to lock and unlock, high priority threads use prio_lock_high() and prio_unlock_high().

The design is quite simple. High priority threads are held at the critical section mutex ->cs_mutex; low priority threads are held at the condition variable. The condition variable mutex is only held around updates to the shared variable and signalling of the condition variable.

#include <pthread.h>

typedef struct prio_lock {
    pthread_cond_t cond;
    pthread_mutex_t cv_mutex; /* Condition variable mutex */
    pthread_mutex_t cs_mutex; /* Critical section mutex */
    unsigned long high_waiters;
} prio_lock_t;

#define PRIO_LOCK_INITIALIZER { PTHREAD_COND_INITIALIZER, PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER, 0 }

void prio_lock_low(prio_lock_t *prio_lock)
{
    pthread_mutex_lock(&prio_lock->cv_mutex);
    while (prio_lock->high_waiters || pthread_mutex_trylock(&prio_lock->cs_mutex))
    {
        pthread_cond_wait(&prio_lock->cond, &prio_lock->cv_mutex);
    }
    pthread_mutex_unlock(&prio_lock->cv_mutex);
}

void prio_unlock_low(prio_lock_t *prio_lock)
{
    pthread_mutex_unlock(&prio_lock->cs_mutex);

    pthread_mutex_lock(&prio_lock->cv_mutex);
    if (!prio_lock->high_waiters)
        pthread_cond_signal(&prio_lock->cond);
    pthread_mutex_unlock(&prio_lock->cv_mutex);
}

void prio_lock_high(prio_lock_t *prio_lock)
{
    pthread_mutex_lock(&prio_lock->cv_mutex);
    prio_lock->high_waiters++;
    pthread_mutex_unlock(&prio_lock->cv_mutex);

    pthread_mutex_lock(&prio_lock->cs_mutex);
}

void prio_unlock_high(prio_lock_t *prio_lock)
{
    pthread_mutex_unlock(&prio_lock->cs_mutex);

    pthread_mutex_lock(&prio_lock->cv_mutex);
    prio_lock->high_waiters--;
    if (!prio_lock->high_waiters)
        pthread_cond_signal(&prio_lock->cond);
    pthread_mutex_unlock(&prio_lock->cv_mutex);
}
caf
This should work, but it is very verbose compared to the same logic achieved with a semaphore.
ULysses
@ULysses: I suppose, but at least it just uses pthreads primitives and it'd be hidden in an implementation module - the code using it would simply do `prio_lock_*( /* ... */ prio_unlock_*(` around its critical section.
caf