Hello,

I have a Java app connecting through a TCP socket to a "server" developed in C/C++.

Both the app and the server run on the same machine, a Solaris box (but we're considering migrating to Linux eventually). The data exchanged consists of simple messages (login, login ACK, then the client asks for something and the server replies). Each message is around 300 bytes long.

Currently we're using TCP sockets, and all is OK; however, I'm looking for a faster way to exchange data (lower latency) using IPC methods.

I've been researching the net and came up with references to the following technologies:

  • shared memory
  • pipes
  • queues
  • as well as what's referred to as DMA (Direct Memory Access)

but I couldn't find a proper analysis of their respective performance, nor how to implement them in both Java and C/C++ (so that they can talk to each other), except maybe pipes, which I can imagine how to do.

Can anyone comment on the performance and feasibility of each method in this context? Any pointers/links to useful implementation information?


EDIT / UPDATE

Following the comments and answers I got here, I found info about Unix domain sockets, which keep the familiar socket API but skip the whole TCP stack. They're platform-specific, so I plan on testing them with JNI, or with either juds or junixsocket.
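For reference, here's a rough sketch of what the client side could look like from Java. It uses the UnixDomainSocketAddress / SocketChannel API that newer JDKs (16+) ship natively; on older JVMs, juds or junixsocket expose an equivalent socket object. The socket path /tmp/server.sock is a placeholder; the C/C++ server would bind an AF_UNIX socket to the same path.

```java
import java.net.StandardProtocolFamily;
import java.net.UnixDomainSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import java.nio.file.Path;

public class UdsClient {
    public static void main(String[] args) throws Exception {
        // Placeholder path; the C/C++ server binds an AF_UNIX socket here.
        UnixDomainSocketAddress addr =
                UnixDomainSocketAddress.of(Path.of("/tmp/server.sock"));

        try (SocketChannel ch = SocketChannel.open(StandardProtocolFamily.UNIX)) {
            ch.connect(addr);

            // Same request/reply pattern as over TCP; only the transport changes.
            ch.write(ByteBuffer.wrap("LOGIN".getBytes()));

            ByteBuffer reply = ByteBuffer.allocate(300);
            ch.read(reply);
        }
    }
}
```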

Next possible steps would be a direct implementation of pipes, then shared memory, although I've been warned about the extra level of complexity...
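For the shared-memory route, the usual Java-side trick is a memory-mapped file that the C/C++ process mmap()s as well. A minimal sketch, assuming both sides agree on a hypothetical file /tmp/ipc-shm and a trivial layout of a 4-byte length followed by the message bytes; all synchronization between writer and reader is omitted, and that is exactly the extra complexity mentioned above.

```java
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ShmWriter {
    public static void main(String[] args) throws Exception {
        // Hypothetical shared file; the C/C++ process mmap()s the same path.
        Path shm = Path.of("/tmp/ipc-shm");

        try (FileChannel ch = FileChannel.open(shm,
                StandardOpenOption.CREATE,
                StandardOpenOption.READ,
                StandardOpenOption.WRITE)) {

            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);

            byte[] msg = "LOGIN".getBytes();
            buf.putInt(0, msg.length);  // trivial layout: 4-byte length...
            buf.position(4);
            buf.put(msg);               // ...followed by the message bytes
            // Signalling the reader (semaphore, flag polling, ...) is omitted.
        }
    }
}
```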


Thanks for your help.

+1  A: 

At my former company we worked with this project, http://remotetea.sourceforge.net/; it is very easy to understand and integrate.

Seffi
+1  A: 

I don't know much about native inter-process communication, but I would guess that you need to communicate using native code, which you can access using JNI mechanisms. So, from Java you would call a native function that talks to the other process.
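To illustrate, the Java half of such a JNI bridge is just a class with native method declarations; the actual IPC work lives in a small C/C++ shared library. Everything below is hypothetical naming (there is no real libipcbridge), and the C side would have to implement the matching Java_IpcBridge_* functions.

```java
// Java half of a hypothetical JNI bridge. The C/C++ side implements
// Java_IpcBridge_send and Java_IpcBridge_receive and does the actual
// pipe / shared-memory work.
public class IpcBridge {
    static {
        System.loadLibrary("ipcbridge");  // loads libipcbridge.so on Solaris/Linux
    }

    public static native void send(byte[] message);
    public static native byte[] receive();
}
```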

fish
+1 for JNI. Works quite well.
Jack
+4  A: 

If you ever consider using native access (since both your application and the "server" are on the same machine), consider JNA; it has less boilerplate code for you to deal with.
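As a rough sketch of what that looks like: with JNA you declare a Java interface mirroring the native library's functions and let JNA bind it at runtime, with no C glue code to write. The library name ipcserver and its two functions are assumptions for illustration, not a real library.

```java
import com.sun.jna.Library;
import com.sun.jna.Native;

public class JnaExample {
    // Hypothetical native library "ipcserver" exposing two C functions:
    //   int ipc_send(const char *msg, int len);
    //   int ipc_recv(char *buf, int maxlen);
    public interface IpcServer extends Library {
        IpcServer INSTANCE = Native.load("ipcserver", IpcServer.class);

        int ipc_send(String msg, int len);
        int ipc_recv(byte[] buf, int maxlen);
    }

    public static void main(String[] args) {
        byte[] reply = new byte[300];
        IpcServer.INSTANCE.ipc_send("LOGIN", 5);
        IpcServer.INSTANCE.ipc_recv(reply, reply.length);
    }
}
```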

Bakkal
+3  A: 

DMA is a method by which hardware devices can access physical RAM without interrupting the CPU. A common example is a hard-disk controller that can copy bytes straight from disk to RAM. As such, it's not applicable to IPC.

Shared memory and pipes are both supported directly by modern OSes, so they're quite fast. Queues are typically abstractions implemented on top of sockets, pipes and/or shared memory. That may make them look like a slower mechanism, but the alternative is building such an abstraction yourself.
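To make the pipe option concrete: on Solaris/Linux a named pipe (FIFO) created with mkfifo appears as a path in the filesystem, so the Java side needs no special API at all. A minimal sketch, assuming the C/C++ side has already created /tmp/req.fifo and reads from it:

```java
import java.io.FileOutputStream;
import java.io.OutputStream;

public class FifoWriter {
    public static void main(String[] args) throws Exception {
        // /tmp/req.fifo is assumed to have been created by the C/C++ side
        // with mkfifo; opening a FIFO for writing blocks until a reader opens it.
        try (OutputStream out = new FileOutputStream("/tmp/req.fifo")) {
            out.write("LOGIN".getBytes());
            out.flush();
        }
    }
}
```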

MSalters
Regarding DMA: why is it, then, that I can read a lot about RDMA (Remote Direct Memory Access), which applies across the network (especially with InfiniBand) and does this same thing? I'm actually trying to achieve the equivalent WITHOUT the network (since everything is on the same box).
Bastien
RDMA is the same concept: copying bytes across a network without interrupting CPUs on either side. It still doesn't operate at the process level.
MSalters
+1  A: 

Here's a project containing performance tests for various IPC transports:

http://github.com/rigtorp/ipc-bench

sustrik
It doesn't include the 'Java factor', but it does look interesting.
pst
A: 

Have you considered keeping the sockets open, so the connections can be reused?

Thorbjørn Ravn Andersen
The sockets do stay open; the connection is alive for the whole time the application is running (around 7 hours). Messages are exchanged more or less continuously (say around 5 to 10 per second). Current latency is around 200 microseconds; the goal is to shave off 1 or 2 orders of magnitude.
Bastien
A 2 µs latency? Ambitious. Would it be feasible to rewrite the C stuff as a shared library that you can interface with using JNI?
Thorbjørn Ravn Andersen