I'm writing a latency-sensitive app that in effect wants to wait on multiple condition variables at once. I've read of several ways to get this functionality on Linux (apparently it's built in on Windows), but none of them seem suitable for my app. The methods I know of are:

  1. Have one thread wait on each of the condition variables you want to wait on, which when woken will signal a single condition variable which you wait on instead.

  2. Cycling through multiple condition variables with a timed wait.

  3. Writing dummy bytes to files or pipes instead, and polling on those.

#1 & #2 are unsuitable because they cause unnecessary sleeping. With #1, you have to wait for the dummy thread to wake up, then signal the real thread, then for the real thread to wake up, instead of the real thread just waking up to begin with -- the extra scheduler quantum spent on this actually matters for my app, and I'd prefer not to have to use a full-fledged RTOS. #2 is even worse: you potentially spend N * timeout time asleep, or your timeout will be 0, in which case you never sleep (endlessly burning CPU and starving other threads is also bad).

For #3, pipes are problematic because if the thread being 'signaled' is busy or even crashes (I'm in fact dealing with separate processes rather than threads -- the mutexes and conditions would be stored in shared memory), then the writing thread will get stuck once the pipe's buffer fills, as will any other clients. Files are problematic because the file would grow endlessly the longer the app ran.
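For reference, a minimal sketch of the pipe scheme (#3), assuming two events each backed by its own pipe -- names are illustrative and error handling is omitted:

#include <poll.h>
#include <stdio.h>
#include <unistd.h>

/* event_fd[i][0] is the read end, event_fd[i][1] the write end,
   both created with pipe() at startup. */
static int event_fd[2][2];

void signal_event(int ev)
{
    char dummy = 0;
    /* Blocks once the pipe's buffer fills -- the problem described above. */
    write(event_fd[ev][1], &dummy, 1);
}

void wait_for_events(void)
{
    struct pollfd fds[2] = {
        { .fd = event_fd[0][0], .events = POLLIN },
        { .fd = event_fd[1][0], .events = POLLIN },
    };
    poll(fds, 2, -1); /* Sleep until either pipe has data. */

    for (int i = 0; i < 2; i++) {
        if (fds[i].revents & POLLIN) {
            char buf[64];
            ssize_t n = read(event_fd[i][0], buf, sizeof buf);
            printf("event %d was signaled %zd time(s)\n", i, n);
        }
    }
}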

Is there a better way to do this? I'm also curious about answers appropriate for Solaris.

+2  A: 

If you are talking about POSIX threads, I'd recommend using a single condition variable plus a set of event flags or something similar. The idea is to use the condition variable's own mutex to guard the event notifications -- you have to re-check for events after pthread_cond_wait() returns anyway. Here is some old code of mine from a training session to illustrate this (yes, I checked that it runs, but please note it was prepared some time ago, and in a hurry, for newcomers).

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_cond_t var;
static pthread_mutex_t mtx;

unsigned event_flags = 0;
#define FLAG_EVENT_1    1
#define FLAG_EVENT_2    2

void signal_1()
{
    pthread_mutex_lock(&mtx);
    event_flags |= FLAG_EVENT_1;
    pthread_cond_signal(&var);
    pthread_mutex_unlock(&mtx);
}

void signal_2()
{
    pthread_mutex_lock(&mtx);
    event_flags |= FLAG_EVENT_2;
    pthread_cond_signal(&var);
    pthread_mutex_unlock(&mtx);
}

void* handler(void* arg)
{
    (void)arg;

    // The mutex is held except while we wait or process received events.
    pthread_mutex_lock(&mtx);

    while (1)
    {
        if (event_flags)
        {
            // Snapshot and clear the flags while the mutex is still held,
            // then release it so signalers are not blocked while we
            // process the events.
            unsigned copy = event_flags;
            event_flags = 0;
            pthread_mutex_unlock(&mtx);

            if (copy & FLAG_EVENT_1)
            {
                printf("EVENT 1\n");
            }

            if (copy & FLAG_EVENT_2)
            {
                printf("EVENT 2\n");

                // Treat EVENT 2 as the request to shut down.
                return NULL;
            }

            // Reacquire the mutex before checking the flags again.
            pthread_mutex_lock(&mtx);
        }
        else
        {
            // pthread_cond_wait() atomically releases the mutex while
            // waiting and reacquires it before returning, so the flag
            // check above always happens under the lock (this also covers
            // events signaled before this thread started).
            pthread_cond_wait(&var, &mtx);
        }
    }
}

int main()
{
    pthread_mutex_init(&mtx, NULL);
    pthread_cond_init(&var, NULL);

    pthread_t id;
    pthread_create(&id, NULL, handler, NULL);
    sleep(1);

    signal_1();
    sleep(1);
    signal_1();
    sleep(1);
    signal_2();
    sleep(1);

    pthread_join(id, NULL);
    return 0;
}
Roman Nikitchenko
This is a sensible answer but unfortunately the semantics are different. If I poll on a file, for example, and 10 bytes were written to it before I wake up, then when I wake up I discover that 10 bytes were written. Under this scheme, if an event happens ten times before I wake up, I only learn that it happened, not how many times.
Joseph Garvin
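One way to keep the counts under the same scheme (a sketch reusing the mtx/var from the answer above; the names event_counts, N_EVENTS, signal_event and drain_events are hypothetical) is to replace each flag with a per-event counter that the handler snapshots and zeroes under the mutex:

#include <pthread.h>
#include <string.h>

#define N_EVENTS 2
static unsigned event_counts[N_EVENTS];

void signal_event(int ev)
{
    pthread_mutex_lock(&mtx);
    event_counts[ev]++;             // Every occurrence is recorded.
    pthread_cond_signal(&var);
    pthread_mutex_unlock(&mtx);
}

void drain_events(unsigned copy[N_EVENTS])
{
    // Call with mtx held: snapshot and zero the counters, just like
    // the flag copy in the handler above, then unlock and process.
    memcpy(copy, event_counts, sizeof event_counts);
    memset(event_counts, 0, sizeof event_counts);
}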
You could try to extend the scheme so that instead of event flags there is a list of events, and the reader thread keeps track of where it is in the list, but that doesn't scale to multiple threads -- how do you know when you can delete elements of the list? To make it fast you now need a lock-free, reference-counted linked list implementation. Probably still faster than waiting a scheduler quantum, but far from ideal...
Joseph Garvin
Instead of copying the event flags you can 'pop' from an event list. The list cannot change while you hold mtx (provided every modification is done under the same mutex) -- that's one of the biggest advantages of this scheme. You can, for example, use an event queue, and yes, that queue is protected: while the 'reader' checks it and 'pops', anybody wanting to 'push' waits only a short time. But please note you shall NOT process events under the lock, only 'extract' them.
Roman Nikitchenko
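A minimal sketch of that queue pattern (hypothetical names, again reusing the mtx/var from the answer): the reader swaps the whole list out under the lock and processes it after unlocking, so signalers are blocked only for the O(1) swap.

#include <pthread.h>
#include <stddef.h>

struct event {
    int           type;
    struct event *next;
};

static struct event *queue_head;    // Guarded by mtx.

void push_event(struct event *ev)
{
    pthread_mutex_lock(&mtx);
    ev->next = queue_head;          // O(1) push; the reader can reverse
    queue_head = ev;                // the list to restore arrival order.
    pthread_cond_signal(&var);
    pthread_mutex_unlock(&mtx);
}

struct event* pop_all(void)
{
    pthread_mutex_lock(&mtx);
    while (queue_head == NULL)
        pthread_cond_wait(&var, &mtx);
    struct event *all = queue_head; // Extract under the lock ...
    queue_head = NULL;
    pthread_mutex_unlock(&mtx);
    return all;                     // ... process (and free) outside it.
}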