views:

77

answers:

2

I have two threads in a producer-consumer pattern. The code works for a while, but then the consumer thread gets starved, and later the producer thread gets starved.

When working, program outputs:

Send Data...semValue = 1
Recv Data...semValue = 0
Send Data...semValue = 1
Recv Data...semValue = 0
Send Data...semValue = 1
Recv Data...semValue = 0

Then something changes and threads get starved, program outputs:

Send Data...semValue = 1
Send Data...semValue = 2
Send Data...semValue = 3
...
Send Data...semValue = 256
Send Data...semValue = 257
Send Data...semValue = 258
Recv Data...semValue = 257
Recv Data...semValue = 256
Recv Data...semValue = 255
...
Recv Data...semValue = 0
Send Data...semValue = 1
Recv Data...semValue = 0
Send Data...semValue = 1
Recv Data...semValue = 0

I know threads are scheduled by the OS and can run at different rates and in random order. My question: when I call YieldThread() (which calls pthread_yield), shouldn't the Talker give the Listener a chance to run? Why am I getting this bizarre scheduling?

A snippet of the code is below. The Thread and Semaphore classes are abstraction classes. I stripped out the queue used for passing data between the threads so I could eliminate that variable.

const int LOOP_FOREVER = 1;

class Listener : public Thread
{
   public:
      Listener(Semaphore* dataReadySemaphorePtr)
         : Thread("Listener"),
           dataReadySemaphorePtr(dataReadySemaphorePtr)
      {
         //Intentionally left blank.
      }

   private:
      void ThreadTask(void)
      {
         while(LOOP_FOREVER)
         {
            this->dataReadySemaphorePtr->Wait();
            printf("Recv Data...");
            YieldThread();
         }
      }

      Semaphore*  dataReadySemaphorePtr;
};


class Talker : public Thread
{
   public:
      Talker(Semaphore* dataReadySemaphorePtr)
         : Thread("Talker"),
           dataReadySemaphorePtr(dataReadySemaphorePtr)
      {
         //Intentionally left blank
      }

   private:
      void ThreadTask(void)
      {
         while(LOOP_FOREVER)
         {
            printf("Send Data...");
            this->dataReadySemaphorePtr->Post();
            YieldThread();
         }
      }

      Semaphore*  dataReadySemaphorePtr;
};


int main()
{
   Semaphore  dataReadySemaphore(0);

   Listener   listener(&dataReadySemaphore);
   Talker     talker(&dataReadySemaphore);

   listener.StartThread();
   talker.StartThread();

   while (LOOP_FOREVER); //Wait here so threads can run
}
+1  A: 

No. Unless you are using a lock to prevent it, even if one thread yields its quantum, there's no requirement that the other thread receives the next quantum.

In a multithreaded environment, you can never ever ever make assumptions about how processor time is going to be scheduled; if you need to enforce correct behavior, use a lock.
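A minimal sketch of what the lock-based approach might look like, using standard C++ primitives in place of the question's Thread/Semaphore wrappers (all names here are hypothetical, not from the original code). A condition variable lets the consumer sleep until data actually exists, rather than relying on yields to arrange the schedule:

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

std::queue<int> dataQueue;         // shared data, guarded by queueMutex
std::mutex queueMutex;
std::condition_variable dataReady; // signals "the queue is non-empty"

void Produce(int value)
{
   {
      std::lock_guard<std::mutex> lock(queueMutex);
      dataQueue.push(value);
   }                          // release the lock before notifying
   dataReady.notify_one();    // wake the consumer if it is waiting
}

int Consume()
{
   std::unique_lock<std::mutex> lock(queueMutex);
   // Sleeps (no busy-spin, no yield) until the predicate holds.
   dataReady.wait(lock, [] { return !dataQueue.empty(); });
   int value = dataQueue.front();
   dataQueue.pop();
   return value;
}
```

The key difference from the yield-based version: the consumer blocks inside the kernel until the producer signals, so correctness no longer depends on how the scheduler hands out quanta.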

Billy ONeal
What if I have ping-pong style memory, where I want the Talker to write to Memory A while the Listener reads from Memory B at the same time? How can I use a lock to enforce that behavior? Every lock I can think of will stop one thread from running.
Dennis Miller
It's my understanding that if you invoke pthread_yield(), the thread gives up the CPU and places itself at the end of the "ready to run" queue. So if the OTHER thread is on that queue, pthread_yield() would effectively give it a chance to run before this thread runs again.
John Rocha
@Dennis: Yes, you are going to have to stop readers while writers are working on the same piece of memory. While the memory is being written to, it is in an inconsistent state -- it's possible for readers to see a half-updated chunk of memory. @John: That may be the case. But even if it is, it is an implementation detail. One cannot rely on that for correct program behavior.
Billy ONeal
I'm not reading and writing from the same piece of memory. I essentially have two memory spots: one thread reads from Memory A while the other thread writes to Memory B. After one cycle the roles reverse. This is a simplification, because there are more than two spots in memory; it's a queue of memory spots.
Dennis Miller
@Dennis: If you are implementing a specific data structure, such as a queue, there *are* lock-free implementations available. But such cases require far more care than most queue implementations. You would probably need two queues, one A->B and one B->A. I still think a lock is the right solution here -- thread A can insert a few items into the queue, release the lock, and keep going. When thread B has finished, it can release the lock, and thread A can write in anything it finished, etc. Batching it up will allow things to be effectively concurrent even with locks.
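A rough sketch of the batching idea described above (hypothetical names, not the poster's actual queue): the lock is acquired once per batch rather than once per item, so lock traffic stays low even though a lock sits in the data path:

```cpp
#include <cassert>
#include <deque>
#include <mutex>
#include <vector>

std::deque<int> sharedQueue; // shared between producer and consumer
std::mutex queueMutex;

// Producer: one lock acquisition covers the whole batch.
void ProduceBatch(const std::vector<int>& batch)
{
   std::lock_guard<std::mutex> lock(queueMutex);
   sharedQueue.insert(sharedQueue.end(), batch.begin(), batch.end());
}

// Consumer: drain everything available in one lock acquisition,
// then process the local copy with no lock held.
std::vector<int> ConsumeAll()
{
   std::lock_guard<std::mutex> lock(queueMutex);
   std::vector<int> items(sharedQueue.begin(), sharedQueue.end());
   sharedQueue.clear();
   return items;
}
```

Because each thread holds the lock only briefly per batch, the two threads spend most of their time running concurrently on their own local data.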
Billy ONeal
You were right in the end; it was a lock that fixed the problems. I experimented with changing the thread priorities and scheduling policy. The policy and priorities helped a lot, but it was the lock that guaranteed success.
Dennis Miller
+1  A: 

Believe it or not, it runs that way because it's more efficient. Every time the processor switches between threads, it performs a context switch that wastes a certain amount of time. My advice is to let it go unless you have another requirement, like a maximum latency or queue size, in which case you need a second semaphore for "ready for more data" in addition to your "data ready for listening" one.

Karl Bielefeldt
I probably should have given a little background on my problem. I do have latency and queue-size constraints. The talker in the actual code is hooked up to a sensor that spews data continuously, and the listener takes that data and sends it to another piece of equipment after injecting a few other bits of data. I can't drop bytes once the sensor starts sending data. That's why I was trying to have a queue, so the sensor could write to one part of memory while the listener reads from another part.
Dennis Miller
Ah, that completely changes the question, and threads aren't really the answer. Usually that sort of scenario is handled by an interrupt. The listener routine gets all the CPU time except for short bursts when there is actually data available from the sensor, then the talker takes over, quickly reads it in, and returns control to the listener. If you don't have a physical interrupt pin available to trigger an ISR, you can use a timer to the same effect.
Karl Bielefeldt
Yep, I have an interrupt, and it triggers when there are enough bytes in the buffer to DMA across. The problem is that when I call thread_yield, the Talker never executes before the Listener triggers again. Yet sometimes they perform perfectly in sync. I even tried messing with thread scheduling: each thread using the SCHED_FIFO policy, with the Talker thread at a higher priority, and still the Talker thread seems to get starved of CPU. I even tried taking all operations out of the Talker thread so it basically does nothing, and I still get the same results.
Dennis Miller
I'm confused. Are you using the Talker thread to do the DMA, or to do something after the DMA completes? The point is you don't want the Talker function in a thread at all. You want it directly called by the interrupt, or by the DMA completing.
Karl Bielefeldt
The talker does something after the DMA completes. The driver handles the interrupt to start the DMA operation. I say DMA, but it's not a true DMA; it only DMAs from sensor memory to main system memory (my memory). Update: after playing around with the priorities, it's starting to work now. I had the priorities reversed.
Dennis Miller