Hello,

I'm working on an image processing application where I have two threads on top of my main thread:

1 - CameraThread that captures images from the webcam and writes them into a buffer

2 - ImageProcessingThread that takes the latest image from that buffer for filtering.

The reason this is multithreaded is that speed is critical: I need CameraThread to keep grabbing pictures and making the latest capture ready for ImageProcessingThread to pick up while it's still processing the previous image.

My problem is finding a fast and thread-safe way to access that common buffer. I've figured that, ideally, it should be a triple buffer (image[3]) so that if ImageProcessingThread is slow, CameraThread can keep writing to the two other images, and vice versa.

What sort of locking mechanism would be the most appropriate to make this thread-safe?

I looked at the lock statement, but it seems it would make one thread block while waiting for the other to finish, which would defeat the point of triple buffering.

Thanks in advance for any idea or advice.

J.

+1  A: 

You will have to run some performance metrics, but take a look at lock free queues.

See this question and its associated answers, for example.

In your particular application, though, your processor is only really interested in the most recent image. In effect this means you only really want to maintain a queue of two items (the newest item and the previous one) so that there is no contention between reading and writing. You could, for example, have your producer remove old entries from the queue once a new one is written.
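A minimal sketch of that "only the newest matters" idea in C# (the class and member names here are illustrative, not from any library): a single latest-frame slot where the producer overwrites the previous entry, so the lock is only ever held for a reference swap.

```csharp
// Sketch: single-slot holder for the most recent frame.
public class LatestFrameSlot<T> where T : class
{
    private readonly object _sync = new object();
    private T _latest;

    // Producer (CameraThread): overwrite whatever was there before.
    public void Publish(T frame)
    {
        lock (_sync) { _latest = frame; }
    }

    // Consumer (ImageProcessingThread): take the newest frame,
    // or null if nothing new arrived since the last take.
    public T TakeLatest()
    {
        lock (_sync)
        {
            T frame = _latest;
            _latest = null;
            return frame;
        }
    }
}
```

Since the critical section is just a reference assignment, contention should be negligible even at camera frame rates.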

Edit: having said all this, I think there is a lot of merit in what is said in Mitchel Sellers's answer.

Paul Ruane
+1: immutable queues are very easy to write and ideal for these types of problems :)
Juliet
What type of queues are you referring to? System.Collections.Generic.Queue?
Jelly Amma
@Jelly: __No__, System.Collections.Generic.Queue is _not_ thread-safe.
Henk Holterman
@Jelly: lock-free queues, as discussed in the linked question and answers.
Paul Ruane
+2  A: 

Personally, I think you might want to look at a different approach here. Rather than writing to a centralized "buffer" whose access you have to manage, could you switch to an approach that uses events? Once the camera thread has received an image, it could raise an event that passes the image data off to the process that actually handles the image processing.

An alternative would be to use a Queue, which is a FIFO (First In, First Out) data structure. It is not thread-safe, so you would have to lock it, but the lock only needs to be held for the brief moment it takes to put an item in the queue. There are also thread-safe queue classes out there that you could use.
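For example, a minimal locked wrapper over Queue&lt;T&gt; (a sketch, not a production class) keeps the critical section down to a single Enqueue or Dequeue call:

```csharp
using System.Collections.Generic;

public class SynchronizedQueue<T>
{
    private readonly object _sync = new object();
    private readonly Queue<T> _queue = new Queue<T>();

    public void Enqueue(T item)
    {
        lock (_sync) { _queue.Enqueue(item); }  // lock held only for the insert
    }

    public bool TryDequeue(out T item)
    {
        lock (_sync)
        {
            if (_queue.Count > 0)
            {
                item = _queue.Dequeue();
                return true;
            }
            item = default(T);
            return false;
        }
    }
}
```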

With your approach there are a number of issues you would have to contend with: blocking while accessing the array, deciding what happens when you run out of available array slots, and so on.

Mitchel Sellers
A System.Collections.Generic.Queue<T> is not inherently thread-safe. If you have more than one thread accessing it, you still need to lock it.
JMarsch
@JMarsch - Thanks for catching that, I was thinking of a thread-safe queue object that I use all the time. I updated the posting.
Mitchel Sellers
@Mitchel Sellers: No worries. The .net 4 stuff is cool!
JMarsch
A: 

I would look at using a ReaderWriterLockSlim, which allows fast reads and upgradeable locks for writes.
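A sketch of how that might look here, assuming the shared state is just a reference to the latest frame (the FrameHolder class name is illustrative):

```csharp
using System.Threading;

class FrameHolder
{
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
    private byte[] _latestFrame;

    // CameraThread
    public void Write(byte[] frame)
    {
        _lock.EnterWriteLock();
        try { _latestFrame = frame; }
        finally { _lock.ExitWriteLock(); }
    }

    // ImageProcessingThread: multiple readers may hold the lock at once.
    public byte[] Read()
    {
        _lock.EnterReadLock();
        try { return _latestFrame; }
        finally { _lock.ExitReadLock(); }
    }
}
```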

Firestrand
It depends upon usage. Lock acquisition is slow compared to a regular Monitor lock, so it will only be a benefit if the writes are less frequent than the reads or if the reads take a bit of time (i.e. the parallelisation of the reads makes up for the acquisition cost). If a write takes a while, the readers will still be blocked for that period.
Paul Ruane
@Paul Ruane, true, probably not the best solution for this problem.
Firestrand
+5  A: 

This could be a textbook example of the Producer-Consumer Pattern.

If you're going to be working in .NET 4, you can use the IProducerConsumerCollection<T> and associated concrete classes to provide your functionality.

If not, have a read of this article for more information on the pattern, and this question for guidance in writing your own thread-safe implementation of a blocking First-In First-Out structure.
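In .NET 4 this boils down to a few lines with BlockingCollection&lt;T&gt;, which wraps an IProducerConsumerCollection&lt;T&gt; (ConcurrentQueue&lt;T&gt; by default); a bounded capacity of 3 gives behaviour close to the triple buffer. A sketch, where camera, Capture, Process and the done flag stand in for the poster's own code:

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks;

var frames = new BlockingCollection<byte[]>(boundedCapacity: 3);

// Producer (CameraThread): Add blocks only when the consumer is 3 frames behind.
Task.Factory.StartNew(() =>
{
    while (!done)
        frames.Add(camera.Capture());
    frames.CompleteAdding();              // lets the consuming loop below finish
});

// Consumer (ImageProcessingThread): blocks while the collection is empty.
foreach (byte[] frame in frames.GetConsumingEnumerable())
    Process(frame);
```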

Programming Hero
+2  A: 

Given the amount of processing needed for a picture, I don't think a simple locking scheme would be your bottleneck. Measure before you start wasting time on the wrong problem.
Be very careful with 'lock-free' solutions; they are always more complicated than they look.

And you need a Queue, not an array.
If you can use .NET 4, I would use ConcurrentQueue<T>.
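For example (a sketch; capturedFrame and Process stand in for the poster's own code):

```csharp
using System.Collections.Concurrent;

var frames = new ConcurrentQueue<byte[]>();

// CameraThread
frames.Enqueue(capturedFrame);            // lock-free, never blocks

// ImageProcessingThread
byte[] frame;
if (frames.TryDequeue(out frame))         // returns false when the queue is empty
    Process(frame);
```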

Henk Holterman
A: 

This isn't a direct answer to your question, but it may be better to rethink your concurrency model. Locks are a terrible way to synchronize anything: too low-level, too error-prone. Try to rethink your problem in terms of message-passing concurrency:

The idea here is that each thread is its own tightly contained message loop, and each thread has a "mailbox" for sending and receiving messages. We'll use the term MailboxThread to distinguish these objects from plain threads.

So instead of having two threads accessing the same buffer, you instead have two MailboxThreads sending and receiving messages between one another (pseudocode):

let filter =
    while true
        let image = getNextMsg() // blocks until the next message is received
        process image

let camera(filterMailbox) =
    while true
        let image = takePicture()
        filterMailbox.SendMsg(image) // sends the message asynchronously

let filterMailbox = Mailbox.Start(filter)
let cameraMailbox = Mailbox.Start(camera(filterMailbox))

Now your processing threads don't know or care about any buffers at all. They just wait for messages and process them whenever they're available. If you send more messages than the filterMailbox can handle, they get enqueued to be processed later.

The hard part here is actually implementing your MailboxThread object. Although it requires some creativity to get right, it's wholly possible to implement these objects so that they only hold a thread while processing a message, and release the executing thread back to the thread pool when there are no messages left to handle (this implementation lets you terminate your application without dangling threads).

The advantage here is that threads send and receive messages without worrying about locking or synchronization. Behind the scenes you need to lock the message queue between enqueuing and dequeuing a message, but that implementation detail is completely transparent to your client-side code.

Juliet
A: 

Just an Idea.

Since we're talking about only two threads, we can make some assumptions.

Let's use your triple buffer idea. Assuming there is only one writer and one reader thread, we can toss a "flag" back and forth in the form of an integer. Both threads will spin continuously but keep updating their buffers. WARNING: this only works with one reader thread.

Pseudo Code

Shared Variables:

int Status = 0; //0 = ready to write; 1 = ready to read

Buffer1 = New bytes[]

Buffer2 = New bytes[]

Buffer3 = New bytes[]

BufferTmp = null

thread1 {

while(true)
{
    WriteData(Buffer1);
    if (Status == 0)
    {
        BufferTmp = Buffer1;
        Buffer1 = Buffer2;
        Buffer2 = BufferTmp;
        Status = 1;
    }
}

}

thread2 {

while(true)
{
    ReadData(Buffer3);
    if (Status == 1)
    {
        BufferTmp = Buffer2; // swap in the freshly written Buffer2, not the writer's working Buffer1
        Buffer2 = Buffer3;
        Buffer3 = BufferTmp;
        Status = 0;
    }
}

}

Just remember, your WriteData method shouldn't create new byte objects but update the current one. Creating new objects is expensive.

Also, you may want a Thread.Sleep(1) in an ELSE branch to accompany the IF statements; otherwise, on a single-core CPU, a spinning thread will increase the latency before the other thread gets scheduled. E.g. the write thread may spin 2-3 times before the read thread gets scheduled, because the scheduler sees the write thread doing "work".

Bengie