views:

224

answers:

4

Following on from my last question:

http://stackoverflow.com/questions/2251051/performance-issue-using-javas-object-streams-with-sockets

I'm looking at socket performance on Linux. With the above example I get a round-trip time of ~65 μs. If I make two FIFOs on the file system, this goes down to ~45 μs. The extra time using localhost sockets must come from hitting the network stack.

Is there some OS configuration that can make a localhost socket go as fast as a named pipe?

uname -a
Linux fiatpap1d 2.4.21-63.ELhugemem #1 SMP Wed Oct 28 23:12:58 EDT 2009 i686 athlon i386 GNU/Linux
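For reference, the kind of measurement described above can be reproduced with a minimal loopback echo benchmark along these lines (a sketch, not the original test code from the linked question):

```java
import java.io.*;
import java.net.*;

public class LoopbackRtt {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0, 1, InetAddress.getLoopbackAddress())) {
            // Echo thread: read one byte at a time and send it straight back.
            Thread echo = new Thread(() -> {
                try (Socket s = server.accept()) {
                    s.setTcpNoDelay(true);
                    InputStream in = s.getInputStream();
                    OutputStream out = s.getOutputStream();
                    int b;
                    while ((b = in.read()) != -1) { out.write(b); out.flush(); }
                } catch (IOException ignored) {}
            });
            echo.start();

            try (Socket client = new Socket(InetAddress.getLoopbackAddress(), server.getLocalPort())) {
                client.setTcpNoDelay(true); // stop Nagle from delaying the 1-byte ping
                InputStream in = client.getInputStream();
                OutputStream out = client.getOutputStream();

                int iterations = 10_000;
                for (int i = 0; i < 1_000; i++) { out.write(1); out.flush(); in.read(); } // JIT warm-up

                long start = System.nanoTime();
                for (int i = 0; i < iterations; i++) { out.write(1); out.flush(); in.read(); }
                long elapsed = System.nanoTime() - start;
                System.out.printf("avg RTT: %.1f us%n", elapsed / 1000.0 / iterations);
            }
            echo.join();
        }
    }
}
```

Note this measures raw socket round trips only, without the object-stream marshalling from the original question.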

Thanks in advance!

+1  A: 

I can't help you on the Java front, but you could take a look at UNIX domain sockets. Here's a question with discussion on how to use them in Java:

http://stackoverflow.com/questions/170600/unix-socket-implementation-for-java

Shtééf
Yes, I could use something JNI-based. I'd prefer to avoid that if at all possible; I was hoping that some later version of Linux would do this optimisation for me.
Jonathan
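The question predates it, but JDK 16+ (JEP 380) added UNIX domain socket channels to the standard library, so no JNI is needed on a modern JVM. A minimal echo sketch:

```java
import java.net.StandardProtocolFamily;
import java.net.UnixDomainSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.file.Files;
import java.nio.file.Path;

public class UdsEcho {
    public static void main(String[] args) throws Exception {
        Path path = Files.createTempDirectory("uds").resolve("echo.sock");
        UnixDomainSocketAddress addr = UnixDomainSocketAddress.of(path);

        try (ServerSocketChannel server = ServerSocketChannel.open(StandardProtocolFamily.UNIX)) {
            server.bind(addr);
            // Echo thread: copy whatever arrives straight back to the peer.
            Thread echo = new Thread(() -> {
                try (SocketChannel peer = server.accept()) {
                    ByteBuffer buf = ByteBuffer.allocate(64);
                    while (peer.read(buf) != -1) {
                        buf.flip();
                        peer.write(buf);
                        buf.clear();
                    }
                } catch (Exception ignored) {}
            });
            echo.start();

            try (SocketChannel client = SocketChannel.open(addr)) {
                client.write(ByteBuffer.wrap("ping".getBytes()));
                ByteBuffer reply = ByteBuffer.allocate(4);
                while (reply.hasRemaining()) client.read(reply);
                System.out.println(new String(reply.array()));
            }
            echo.join();
        } finally {
            Files.deleteIfExists(path);
        }
    }
}
```

UNIX domain sockets skip the TCP/IP stack entirely, which is exactly the overhead being measured here.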
+1  A: 

Your prior questions make two false assumptions:

  1. That ICMP_ECHO (a.k.a. ping) yields meaningful timing information. It doesn't; among other things, the ICMP layer can be (and should be) handled at low service priority.
  2. That marshaling the data through umpteen Java interfaces is not the bottleneck. Because it is.

Your testing methods are highly suspect. What are you trying to accomplish?

msw
Hell yeah. Great answer. Damn those umpteen Java interfaces and their performance cost.
Matt Joiner
On 1: ping is giving me roughly the same time as my test using named pipes. On 2: how have you come to that conclusion?
Jonathan
+1  A: 

With the above example I get a round-trip time of ~65 μs. If I make two FIFOs on the file system, this goes down to ~45 μs. The extra time using localhost sockets must come from hitting the network stack.

Yes, and that is to be expected.

FIFOs are a rather primitive communication method. Their state is essentially a single boolean, and reads and writes go through the same pre-allocated buffer of fixed size. Thus the OS can and does optimise the operations heavily.
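The FIFO side of the comparison can be sketched like this (a hypothetical reconstruction, assuming a POSIX system with mkfifo available; the FIFO paths are made up for illustration):

```java
import java.io.*;

public class FifoRtt {
    public static void main(String[] args) throws Exception {
        File dir = new File(System.getProperty("java.io.tmpdir"));
        File ping = new File(dir, "rtt_ping.fifo");
        File pong = new File(dir, "rtt_pong.fifo");
        ping.delete(); pong.delete();
        // Create the two FIFOs on the file system.
        new ProcessBuilder("mkfifo", ping.getPath()).inheritIO().start().waitFor();
        new ProcessBuilder("mkfifo", pong.getPath()).inheritIO().start().waitFor();

        // Echo thread: read a byte from the ping FIFO, write it back on the pong FIFO.
        Thread echo = new Thread(() -> {
            try (InputStream in = new FileInputStream(ping);
                 OutputStream out = new FileOutputStream(pong)) {
                int b;
                while ((b = in.read()) != -1) out.write(b);
            } catch (IOException ignored) {}
        });
        echo.start();

        // Opening a FIFO blocks until the other end is opened, so the open
        // order here deliberately mirrors the echo thread's order.
        try (OutputStream out = new FileOutputStream(ping);
             InputStream in = new FileInputStream(pong)) {
            int iterations = 10_000;
            for (int i = 0; i < 1_000; i++) { out.write(1); in.read(); } // warm-up
            long start = System.nanoTime();
            for (int i = 0; i < iterations; i++) { out.write(1); in.read(); }
            long elapsed = System.nanoTime() - start;
            System.out.printf("avg FIFO RTT: %.1f us%n", elapsed / 1000.0 / iterations);
        }
        echo.join();
        ping.delete(); pong.delete();
    }
}
```

Running this alongside the socket version on the same box gives an apples-to-apples view of the difference described above.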

Sockets are more complex. They carry a full-fledged TCP state machine. Buffering is dynamic and bidirectional (recv and send are buffered separately). That means that when you write to a local socket, some sort of dynamic memory management is almost always involved. Linux tries to avoid that as much as possible: zero-copy and single-copy tricks are implemented all over the place. But since the calls have to go through more code, they are inevitably slower.

In the end, considering how much more sockets do compared to FIFOs, a 20 μs difference is frankly a statement of how good Linux's socket performance is.

P.S. 65 μs RTT means ~32 μs in one direction; 1 s / 32 μs ≈ 30K packets per second. For network code without optimisations, using a single connection, that sounds about right.

Dummy00001