I have client and server programs which now communicate via TCP. I'm trying out using POSIX message queues instead (in cases where the client and server are on the same machine, of course). My hope is that it will improve performance (specifically via reduced latency).

I've worked out most of it, but am not sure about one thing: how to establish the "connection." The server accepts connections from multiple clients concurrently, so I'm tempted to emulate the TCP connection-establishment process like so:

  1. Server opens a queue with a well-known name and reads from it continuously (it can use select(2) as with TCP).
  2. Client opens three queues: two with arbitrary names (including some uniqueness such as PID to avoid collisions), and one with the well-known name used by the server.
  3. Client posts a "connect" message to the server's queue, including the client's queue names (one is designated for client-to-server traffic and the other for the converse).
  4. Server opens the queues named in the client's connect message and begins to read (select) from the client-to-server one.
  5. Client closes the server queue with the well-known name. Two-way communication proceeds using the two queues named by the client (one for each direction).

You can probably see how this scheme is similar to the common TCP method, and that's no accident. However, I'd like to know:

  1. Can you think of a better way to do it?
  2. Do you see any potential problems with my method?
  3. Do you have any other thoughts, including about the likelihood that using message queues instead of TCP on the same machine will actually improve performance (latency)?

Keep in mind that I haven't used POSIX message queues before (I did use IBM WebSphere MQ a while back, but that's rather different). The platform is Linux.

+2  A: 
  1. Can you think of a better way to do it?

    Perhaps have a look at fifos (aka named pipes). They are like network sockets, but for the local machine. They are uni-directional, so you might need to create two, one for each direction. Your question doesn't say why you are making this change specifically. There is nothing wrong with using sockets for process-to-process communication: they are bi-directional, efficient, widely supported, and give you the freedom to separate the processes onto different machines later.

  2. Do you see any potential problems with my method?

    System V message queues and fifo named pipes are both absolutely fine. Fifos behave like regular pipes, so you can read() and write() with minimal code changes. System V message queues require putting the data into a structure and invoking msgsnd(). Either approach would be fine, however.

  3. Do you have any other thoughts, including about the likelihood that using message queues instead of TCP on the same machine will actually improve performance (latency)?

    My other thoughts are that, as you said, you need to develop a technique so each client has a unique identifier. One approach would be to add the pid to the structure you pass across, or to negotiate a unique id with the parent / master at the beginning. The other thing to note is that a benefit of System V message queues is that you can listen for "selective" messages, so you could ideally use one queue from the server to all the clients, with each client waiting for a different message type.

    I have no idea about which technique gives you the most optimal throughput in your software. It really might not be worth using System V message queues but only you can make that decision.

Philluminati

You say my "question does lack any reason of why you are making this change." I did mention it: performance (latency). Are System V message queues different in functionality from POSIX ones? The POSIX ones I'm using (e.g. mq_open(3)) don't seem to support the "selective" messages you mention.
John Zwinck
I've now tried using named pipes. They're slower (at least for passing a lot of small messages) than POSIX message queues.
John Zwinck
+1  A: 

I ended up implementing it basically as I described, with a few enhancements:

  • In step 2, I used GUIDs for the queue names instead of incorporating the client's PID.
  • In step 4, I added the sending of an "accept" message from server to client.
  • When either side wishes to end communication, it sends a "disconnect" message.

The handshaking is simpler than TCP, but seems sufficient.

As for latency: it's much better. Roughly 75% less latency using POSIX message queues instead of TCP on the same machine. My messages are on the order of 100 bytes each.

John Zwinck
A: 

How did you do this when select() doesn't work on message queues? Which is it, Sys V or POSIX? Why take the extra effort of creating a GUID-to-PID lookup table when the PID is guaranteed to be unique and needs less storage (an integer)?

/blee/

This should really be a comment on my answer. Anyway: select does work with POSIX message queues, which are what I am using. There is no GUID-to-PID lookup table--the GUID is communicated between the processes and the PID is never used. PIDs are not unique--they're recycled on most systems.
John Zwinck
A: 

I've run into a similar issue: I'm developing a real-time application and need an IPC technique with socket-like functionality and minimal latency.

Have you compared your POSIX-MQ based solution with UNIX local sockets or TCP sockets only?

Thanks

Raydan
I compared them to TCP (on the same machine, of course) and to named pipes (fifos). MQ was faster than both of those. I didn't compare Unix domain sockets, but would surely be interested to hear the results when you do!
John Zwinck