views: 2911
answers: 3

A: 

I'd go for multiple single-writer queues (one per writer thread). Then you can check this for how to get the single reader to read the various queues.

Bahbar
Unfortunately, that's not really a possibility, since many threads are spawning, completing their tasks, and then writing the results to a queue, processed by a single database worker thread. Thus, each queue would only get written to once, defeating the whole purpose.
Edward Amsden
OK. So significant work per thread, with a single write to the queue at the end? Then the queue is not really a performance concern. How about simply protecting the push method with a mutex?
Bahbar
+1  A: 

If you don't need a lock-free queue, then you could just wrap an existing queue with a lock.

#include <pthread.h>

pthread_mutex_t myQueueLock = PTHREAD_MUTEX_INITIALIZER;
Queue myQueue;  /* your existing single-threaded queue */

void mtQueuePush(int value)
{
    pthread_mutex_lock(&myQueueLock);
    queuePush(myQueue, value);
    pthread_mutex_unlock(&myQueueLock);
}

int mtQueueNext(void)
{
    pthread_mutex_lock(&myQueueLock);
    int value = queueFront(myQueue);
    queuePop(myQueue);
    pthread_mutex_unlock(&myQueueLock);
    return value;
}

The only thing left after that is to add some sort of handling for mtQueueNext when the queue is empty.
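
One possible shape for that handling, staying with the same wrapper (queueEmpty is a hypothetical helper on the underlying queue, and you need <stdbool.h> for bool): a non-blocking variant that reports whether a value was actually available.

bool mtQueueTryNext(int *out)
{
    pthread_mutex_lock(&myQueueLock);
    bool gotOne = !queueEmpty(myQueue);   /* nothing to pop? report it instead of blocking */
    if (gotOne) {
        *out = queueFront(myQueue);
        queuePop(myQueue);
    }
    pthread_mutex_unlock(&myQueueLock);
    return gotOne;
}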

EDIT: If you have a single reader, single writer lockless queue, you only need to have a lock around mtQueuePush, to prevent multiple simultaneous writers.

There are a number of single-reader/single-writer lockless queues around; however, most of them are implemented as C++ template classes. Do a Google search, and if need be work out how to rewrite one in plain C.
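
As a rough sketch of that single-reader/single-writer idea in plain C (this particular ring buffer, its names, and its fixed size are assumptions for illustration, and it needs C11 <stdatomic.h>): the single reader never takes a lock, and only the writers serialise among themselves on the push side.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

#define RING_SIZE 1024                       /* must be a power of two */

typedef struct {
    int             items[RING_SIZE];
    atomic_size_t   head;                    /* next slot to read; written only by the reader  */
    atomic_size_t   tail;                    /* next slot to write; written only by the writers */
    pthread_mutex_t pushLock;                /* serialises the multiple writer threads;
                                                initialise with pthread_mutex_init             */
} Ring;

static bool ringPushOne(Ring *r, int value)  /* the single-producer core */
{
    size_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    size_t head = atomic_load_explicit(&r->head, memory_order_acquire);
    if (tail - head == RING_SIZE)
        return false;                        /* full */
    r->items[tail % RING_SIZE] = value;
    atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
    return true;
}

bool mtRingPush(Ring *r, int value)          /* many writers: lock only the push side */
{
    pthread_mutex_lock(&r->pushLock);
    bool ok = ringPushOne(r, value);
    pthread_mutex_unlock(&r->pushLock);
    return ok;
}

bool ringPop(Ring *r, int *out)              /* single reader: no lock at all */
{
    size_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
    size_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (head == tail)
        return false;                        /* empty */
    *out = r->items[head % RING_SIZE];
    atomic_store_explicit(&r->head, head + 1, memory_order_release);
    return true;
}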

Fire Lancer
That is a possibility, though I'd like it if, in the case where multiple items are in the queue, the reader could pop() the front while a writer push()-es on the back. There shouldn't be a need for mutual exclusion in that case.
Edward Amsden
Well, there are a number of lockless queues around; I'll see if I can find one...
Fire Lancer
Which is to say that, while I'm not against using a mutex in the design, I'd rather not wrap the whole damn thing in one.
Edward Amsden
+2  A: 

Sure, there are lockless queues. Based on what you've said in comments, though, performance here is not at all critical, since you're creating a thread per write anyway.

So, this is a standard use case for a condition variable. Make yourself a struct containing a mutex, a condition variable, a linked list (or circular buffer if you like), and a cancel flag:

write:
    lock the mutex
    (optionally - check the cancel flag to prevent leaks of stuff on the list)
    add the event to the list
    signal the condition variable
    unlock the mutex

read:
   lock the mutex
   while (list is empty AND cancel is false):
       wait on the condition variable with the mutex
   if cancel is false:  // or "if list non-empty", depending on cancel semantics
       remove an event from the list
   unlock the mutex
   return event if we have one, else NULL meaning "cancelled"

cancel:
   lock the mutex
   set the cancel flag
   (optionally - dispose of anything on the list, since the reader will quit)
   signal the condition variable
   unlock the mutex

If you're using a list with external nodes, then you might want to allocate the memory outside the mutex lock, just to reduce the time it's held for. But if you design the events with an intrusive list node, that's probably easiest.

Edit: you can also support multiple readers (with no portable guarantees for which one gets a given event) if in cancel you change the "signal" to "broadcast". Although you don't need it, it doesn't really cost anything either.
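
For reference, here is one way the above might look in plain C with POSIX threads. The Event type with its intrusive next pointer is a made-up example, error checking on the pthread calls is omitted, and it uses the "drain what's left after cancel" variant plus the broadcast mentioned in the edit.

#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct Event {
    struct Event *next;            /* intrusive list link */
    int           payload;         /* whatever the worker thread produced */
} Event;

typedef struct {
    pthread_mutex_t mutex;
    pthread_cond_t  cond;
    Event          *head, *tail;   /* FIFO linked list */
    bool            cancel;
} EventQueue;

bool queueWrite(EventQueue *q, Event *e)    /* false => cancelled; caller frees e */
{
    pthread_mutex_lock(&q->mutex);
    bool accepted = !q->cancel;             /* don't leak events onto a dead queue */
    if (accepted) {
        e->next = NULL;
        if (q->tail) q->tail->next = e; else q->head = e;
        q->tail = e;
        pthread_cond_signal(&q->cond);
    }
    pthread_mutex_unlock(&q->mutex);
    return accepted;
}

Event *queueRead(EventQueue *q)             /* NULL means "cancelled and drained" */
{
    pthread_mutex_lock(&q->mutex);
    while (q->head == NULL && !q->cancel)
        pthread_cond_wait(&q->cond, &q->mutex);
    Event *e = q->head;
    if (e) {
        q->head = e->next;
        if (q->head == NULL) q->tail = NULL;
    }
    pthread_mutex_unlock(&q->mutex);
    return e;
}

void queueCancel(EventQueue *q)
{
    pthread_mutex_lock(&q->mutex);
    q->cancel = true;
    pthread_cond_broadcast(&q->cond);       /* wake every waiting reader */
    pthread_mutex_unlock(&q->mutex);
}

Initialise the struct with PTHREAD_MUTEX_INITIALIZER / PTHREAD_COND_INITIALIZER (or pthread_mutex_init / pthread_cond_init), and have the database thread loop on queueRead until it returns NULL.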

Steve Jessop
One problem I can see with that: when I decide to shut down the application, I'm going to need to stop the reader thread that is waiting on the condition variable. How would I go about doing that?
Edward Amsden
I'll update to show you.
Steve Jessop