views:

1096

answers:

4

We are designing a p2p application in C++ which transmits voice to the other peer using UDP.

We capture the mic signal into a buffer in a thread which records one second of voice per iteration of its while loop. For every second of voice captured in the buffer, it is split into packets and sent to the other peer. Now I need a proper data structure at the destination which copes with real-time transmission. I am going to use the same data structure for screen capturing. Here are two approaches using a queue I thought of:

  • Implementing a queue using a linked list which maintains a queue of OneSecVoice objects, or Image objects in the case of images.

  • Implementing a queue using a static array of OneSecVoice or Image objects.

Each OneSecVoice/Image object will contain the total number of packets and a packet buffer for that particular Image/OneSecVoice.

As it is real time, one thread will continuously scan the queue and take out the latest complete Image/OneSecVoice by popping it from the queue.

So which should we choose: implementing the queue using a linked list, or using a static array?

My friend and I are having a fight over this, so we decided to post here.
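For concreteness, the producer/consumer setup described above can be sketched like this. `OneSecVoice` and the queue wrapper are hypothetical names, and the underlying container (deque here) is exactly the choice the question is asking about:

```cpp
#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <queue>
#include <utility>
#include <vector>

// Hypothetical reassembled object; the real OneSecVoice would hold the
// packet count and packet buffers described in the question.
struct OneSecVoice {
    int total_packets = 0;
    std::vector<uint8_t> samples;
};

// Minimal thread-safe FIFO sketch: the network thread pushes completed
// objects, the consumer thread blocks until one is available.
template <typename T>
class SafeQueue {
public:
    void push(T item) {
        {
            std::lock_guard<std::mutex> lock(m_);
            q_.push(std::move(item));
        }
        cv_.notify_one();
    }
    T pop() {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty(); });
        T item = std::move(q_.front());
        q_.pop();
        return item;
    }
private:
    std::queue<T> q_;  // underlying container is the list-vs-array choice
    std::mutex m_;
    std::condition_variable cv_;
};
```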

+3  A: 

Don't implement either. Use pre-existing implementations in the standard library:

std::queue<T, std::list<T> >
std::queue<T, std::deque<T> > // uses deque by default, by the way

You can typedef these to make swapping the two very easy:

template <typename T>
struct queue_list
{
    typedef std::queue<T, std::list<T> > value_type;
};

template <typename T>
struct queue_array
{
    typedef std::queue<T, std::deque<T> > value_type;
};

typedef queue_list<the_type>::value_type container_type; // use one
typedef queue_array<the_type>::value_type container_type;

Now profile and find which is better. The array-backed version will likely have better performance, due to cache locality.

You can use boost's pool allocator to try to get the benefit of a list's quick insertion and removal, along with an array's cache performance:

typedef std::queue<T, std::list<T, boost::pool_allocator<T> > > value_type;

Another structure to try is boost::circular_buffer, as suggested by fnieto:

template <typename T>
struct queue_buffer
{
    typedef std::queue<T, boost::circular_buffer<T> > value_type;
};
GMan
+1  A: 

If the only operation on the receiving side is to pop off of the queue, I don't really see the point of using a static array, which would generally be useful if you needed to operate on chunks of contiguous data or needed random access.

I don't think you're going to get any space savings from using a static array. Sure, you're saving a pointer per entry, but at the cost of allocating a large fixed block of memory. And if your queue is going to get that large sometimes, then maybe you need the flexibility afforded by a linked list, since it can grow to an arbitrary size.

If you want to limit the size it can grow to, you can do that in either scheme.
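To illustrate that point, a size cap works the same way in either scheme; a rough sketch (class and method names hypothetical) of a bounded FIFO that drops the oldest entry when full, so a slow consumer only ever sees recent data:

```cpp
#include <cstddef>
#include <deque>

// Sketch: a bounded FIFO that discards the oldest element when the cap
// is reached. The same policy could be applied to a linked list or a
// static array just as easily.
template <typename T>
class BoundedQueue {
public:
    explicit BoundedQueue(std::size_t cap) : cap_(cap) {}
    void push(const T& v) {
        if (q_.size() == cap_)
            q_.pop_front();      // overwrite-oldest policy
        q_.push_back(v);
    }
    T pop() {
        T v = q_.front();
        q_.pop_front();
        return v;
    }
    std::size_t size() const { return q_.size(); }
private:
    std::deque<T> q_;
    std::size_t cap_;
};
```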

Drew Hoskins
+4  A: 

I would use boost::circular_buffer. You will get the cache benefits of having a fixed memory area, and no unexpected memory allocations.

In order to achieve maximum efficiency, the circular_buffer stores its elements in a contiguous region of memory, which then enables:

  1. Use of fixed memory and no implicit or unexpected memory allocation.
  2. Fast constant-time insertion and removal of elements from the front and back.
  3. Fast constant-time random access of elements.
  4. Suitability for real-time and performance critical applications.

Possible applications of the circular_buffer include:

  • Storage of the most recently received samples, overwriting the oldest as new samples arrive.
  • As an underlying container for a bounded buffer (see the Bounded Buffer Example).
  • A kind of cache storing a specified number of last inserted elements.
  • Efficient fixed capacity FIFO (First In, First Out) or LIFO (Last In, First Out) queue which removes the oldest (inserted as first) elements when full.
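For illustration, the overwrite-oldest behaviour in the last bullet can be sketched in plain C++ (a hand-rolled stand-in just to show the semantics; in practice you would use boost::circular_buffer itself):

```cpp
#include <cstddef>
#include <vector>

// Fixed-capacity ring buffer sketch: contiguous storage, O(1) push/pop,
// overwrites the oldest element when full -- the behaviour
// boost::circular_buffer provides out of the box.
template <typename T>
class Ring {
public:
    explicit Ring(std::size_t cap) : buf_(cap), head_(0), size_(0) {}
    void push_back(const T& v) {
        buf_[(head_ + size_) % buf_.size()] = v;
        if (size_ == buf_.size())
            head_ = (head_ + 1) % buf_.size();  // drop oldest
        else
            ++size_;
    }
    T pop_front() {
        T v = buf_[head_];
        head_ = (head_ + 1) % buf_.size();
        --size_;
        return v;
    }
    std::size_t size() const { return size_; }
private:
    std::vector<T> buf_;  // fixed region: no allocation after construction
    std::size_t head_, size_;
};
```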
fnieto
The cache benefits are going to be negligible, assuming his data structure is at least as big as a cache line. Since each element is going to be popped off and processed once, and then overwritten, the cache isn't going to help much.
Drew Hoskins
@drew, look into this link http://en.wikipedia.org/wiki/Locality_of_reference.
fnieto
@xinus, you should be very careful with unnecessary copies.
fnieto
A: 

A linked list would be the canonical approach, but "implementing a queue using a static array" is actually how I would go about it, so as to reassemble the UDP packets; the sequenced data could then be passed to the linked list so it stays ordered. How are you going to handle the "continuously scan for queue and take out latest complete" part? You cannot just stuff packets in as they arrive, because UDP may deliver them in an unanticipated sequence. "Latest complete" does not guarantee that "Coffee" comes after "Beans type:", so you could get some confused beans in there somewhere.
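The out-of-order concern can be handled with per-packet sequence numbers; a rough sketch (field and class names hypothetical), buffering packets in a std::map keyed by sequence number so they sort themselves regardless of arrival order:

```cpp
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

// Sketch: collect the UDP packets for one OneSecVoice/Image. The map
// keeps payloads sorted by sequence number regardless of arrival order,
// and only a complete set is assembled and handed to the consumer.
struct Reassembler {
    int total_packets;                        // expected count, from a header
    std::map<int, std::vector<uint8_t> > parts;

    explicit Reassembler(int total) : total_packets(total) {}

    void add(int seq, std::vector<uint8_t> payload) {
        parts[seq] = std::move(payload);      // duplicates simply overwrite
    }

    bool complete() const {
        return static_cast<int>(parts.size()) == total_packets;
    }

    // Concatenate payloads in sequence order once every packet is present.
    std::vector<uint8_t> assemble() const {
        std::vector<uint8_t> out;
        for (const auto& kv : parts)
            out.insert(out.end(), kv.second.begin(), kv.second.end());
        return out;
    }
};
```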

Nicholas Jordan