We are still in the design phase of our project, but we are thinking of having three separate processes on an embedded Linux system. One of the processes will be a communications module which handles all communications to and from the device through various media.

The other two processes will need to send and receive messages through the communication process. I am trying to evaluate the IPC techniques that Linux provides; the messages the other processes will be sending will vary in size, from debug logs to streaming media at a ~5 Mbit/s rate. Also, the media could be streaming in and out simultaneously.

Which IPC technique would you suggest for this application? http://en.wikipedia.org/wiki/Inter-process_communication

The processor runs at around 400-500 MHz, if that changes anything. It does not need to be cross-platform; Linux-only is fine. Implementation in C or C++ is required.

Thanks for any suggestions.

+5  A: 

I would go for Unix domain sockets: less overhead than IP sockets (i.e. no inter-machine comms), but the same convenience otherwise.
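
For illustration, here is a minimal sketch of the server side of such a link, assuming stream sockets; the rendezvous path is made up, error handling is abbreviated, and the peer simply connect()s to the same path:

    /* Hypothetical sketch: comms process accepting one local peer over a
     * Unix domain socket.  SOCK_PATH is an assumed, illustrative path. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    #define SOCK_PATH "/tmp/comms.sock"

    int main(void)
    {
        struct sockaddr_un addr;
        int srv = socket(AF_UNIX, SOCK_STREAM, 0);
        if (srv < 0) { perror("socket"); return 1; }

        memset(&addr, 0, sizeof(addr));
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, SOCK_PATH, sizeof(addr.sun_path) - 1);

        unlink(SOCK_PATH);                    /* drop a stale socket file */
        if (bind(srv, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
            listen(srv, 5) < 0) { perror("bind/listen"); return 1; }

        int peer = accept(srv, NULL, NULL);   /* one client process */
        char buf[4096];
        ssize_t n;
        while ((n = read(peer, buf, sizeof(buf))) > 0)
            write(STDOUT_FILENO, buf, (size_t)n);  /* echo what we got */

        close(peer);
        close(srv);
        return 0;
    }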

jldupont
+2  A: 

For a review of "classic" Linux IPC mechanisms, see here.

axel_c
+3  A: 

When selecting your IPC mechanism, you should consider the causes of performance differences, including transfer buffer sizes, data transfer mechanisms, memory allocation schemes, locking mechanism implementations, and even code complexity.

Of the available IPC mechanisms, the choice for performance often comes down to Unix domain sockets or named pipes (FIFOs). I read a paper on Performance Analysis of Various Mechanisms for Inter-process Communication that indicates Unix domain sockets for IPC may provide the best performance. I have seen conflicting results elsewhere which indicate pipes may be better.

When sending small amounts of data, I prefer named pipes (FIFOs) for their simplicity. Bi-directional communication requires a pair of named pipes. Unix domain sockets take a bit more overhead to set up (socket creation, initialization, and connection), but are more flexible and may offer better performance (higher throughput).
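
To illustrate the pair-of-FIFOs idea, here is a minimal sketch of one endpoint; the /tmp paths are assumptions, and the peer must open the same FIFOs in matching order (its reader against this writer) or both sides will block in open():

    /* Hypothetical sketch: one side of a bi-directional FIFO link.
     * The paths below are illustrative assumptions. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        /* One FIFO per direction; EEXIST from mkfifo() is harmless here. */
        mkfifo("/tmp/to_comms", 0666);
        mkfifo("/tmp/from_comms", 0666);

        int tx = open("/tmp/to_comms", O_WRONLY);  /* blocks until peer opens to read */
        int rx = open("/tmp/from_comms", O_RDONLY);
        if (tx < 0 || rx < 0) { perror("open"); return 1; }

        const char msg[] = "debug: hello from process A\n";
        write(tx, msg, sizeof(msg) - 1);

        char buf[256];
        ssize_t n = read(rx, buf, sizeof(buf));
        if (n > 0)
            printf("received %zd bytes\n", n);

        close(tx);
        close(rx);
        return 0;
    }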

You may need to run some benchmarks for your specific application/environment to determine what will work best for you. From the description provided, it sounds like Unix domain sockets may be the best fit.


Beej's Guide to Unix IPC is good for getting started with Linux/Unix IPC.

jschmier
A: 

If performance really becomes a problem, you can use shared memory, but it's a lot more complicated than the other methods: you'll need a signalling mechanism to indicate that data is ready (a semaphore, etc.) as well as locks to prevent concurrent access to structures while they're being modified.

The upside is that you can transfer a lot of data without having to copy it between processes, which will definitely improve performance in some cases.

Perhaps there are usable libraries which provide higher-level primitives via shared memory.

Shared memory is generally obtained by mmap()ing the same file with MAP_SHARED (which can be on a tmpfs if you don't want it persisted). A lot of apps also use System V shared memory (IMHO for stupid historical reasons; it's a much less nice interface to the same thing).
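
As a rough sketch of what that looks like, assuming POSIX shared memory (tmpfs-backed on Linux) and a process-shared semaphore as the "data ready" signal: the region name, layout, and sizes below are all illustrative, only the creating process should initialise the semaphore, and the consumer would mmap() the same object and sem_wait() on it.

    /* Hypothetical producer sketch: MAP_SHARED region plus a
     * process-shared POSIX semaphore.  "/comms_shm" and the struct
     * layout are assumptions; link with -lrt -pthread on older glibc. */
    #include <fcntl.h>
    #include <semaphore.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    struct shm_region {
        sem_t  ready;         /* posted when buf holds fresh data */
        size_t len;
        char   buf[4096];
    };

    int main(void)
    {
        int fd = shm_open("/comms_shm", O_CREAT | O_RDWR, 0666);
        if (fd < 0 || ftruncate(fd, sizeof(struct shm_region)) < 0) {
            perror("shm_open/ftruncate");
            return 1;
        }

        struct shm_region *r = mmap(NULL, sizeof(*r),
                                    PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (r == MAP_FAILED) { perror("mmap"); return 1; }

        sem_init(&r->ready, 1, 0);  /* pshared = 1: visible across processes */

        const char msg[] = "frame 1";
        memcpy(r->buf, msg, sizeof(msg));
        r->len = sizeof(msg);
        sem_post(&r->ready);        /* tell the consumer data is ready */

        munmap(r, sizeof(*r));
        close(fd);
        return 0;
    }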

MarkR
A: 

Can't believe nobody has mentioned D-Bus.

http://www.freedesktop.org/wiki/Software/dbus

http://en.wikipedia.org/wiki/D-Bus

D-Bus might be a bit over the top if your application is architecturally simple; in that case, in a controlled embedded environment where performance is crucial, you can't beat shared memory.

Dipstick