Using the Sun Java VM 1.5 or 1.6 on Windows, I connect a non-blocking socket. I then fill a ByteBuffer with a message to output, and attempt to write() to the SocketChannel.

I expect the write to complete only partially if the amount to be written is greater than the amount of space in the socket's TCP output buffer (this is what I expect intuitively, and it's also pretty much my understanding of the docs), but that's not what happens. The write() always seems to return reporting the full amount written, even when it's several megabytes (the socket's SO_SNDBUF is 8 KB, much, much less than my multi-megabyte output message).

A problem here is that I can't test the code that handles the case where the output is only partially written (registering interest in OP_WRITE with a selector and calling select() to wait until the remainder can be written), because that case never seems to happen. What am I not understanding?
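A minimal sketch of that partial-write path (illustrative names only, not the actual code; it assumes the channel is already connected and non-blocking):

import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public class PartialWriteSketch {
  // Write buf completely, using the selector to wait whenever the kernel
  // accepts only part of it.
  static void writeFully(Selector selector, SocketChannel ch, ByteBuffer buf) throws Exception {
    ch.write(buf);                                // may be a partial write
    final SelectionKey key = ch.register(selector, 0);
    while (buf.hasRemaining()) {
      key.interestOps(SelectionKey.OP_WRITE);     // wake up when the socket is writable again
      selector.select();
      selector.selectedKeys().clear();
      ch.write(buf);                              // write some more of the remainder
    }
    key.interestOps(0);                           // buffer drained; stop watching OP_WRITE
  }
}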

A: 

I'll make a big leap of faith and assume that the underlying network provider for Java is the same as for C: the OS allocates more than just SO_SNDBUF of buffering for every socket. I bet that if you put your send code in a loop of, say, 100,000 iterations, you would eventually get a write that succeeds with a value smaller than requested.
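Something along these lines, for instance (a sketch only; ch stands in for an already-connected, non-blocking SocketChannel):

import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class WriteUntilPartial {
  // Keep writing fresh 64 KB buffers until the kernel accepts less than requested.
  static void hammer(SocketChannel ch) throws Exception {
    final ByteBuffer buf = ByteBuffer.allocate(64 * 1024);
    for (int i = 0; i < 100000; i++) {
      buf.clear();
      final int written = ch.write(buf);
      if (written < buf.capacity()) {
        System.out.println("partial write on iteration " + i + ": " + written + " bytes");
        return;
      }
    }
  }
}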

Clay
Thanks for the idea. I just did a few hundred multi-megabyte writes and not a single one of them returned with a value less than the full amount.
jpdaigle
Any chance you can watch the traffic with a sniffer and see how the packets are being sent? It might give you a clue as to whether the sending library is even trying to break the data into chunks smaller than the MTU.
stu
+1  A: 

I've been working with UDP in Java and have seen some really "interesting" and completely undocumented behavior in the Java NIO stuff in general. The best way to determine what is happening is to look at the source that ships with the JDK.

I'd also wager that you might find a better implementation of what you're looking for in another JVM, such as IBM's, but I can't guarantee that without looking at them myself.

Spencer K
+3  A: 

I managed to reproduce a situation that might be similar to yours. I think, ironically enough, your recipient is consuming the data faster than you're writing it.

import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class MyServer {
  public static void main(String[] args) throws Exception {
    // Listen on port 12345 and accept a single connection.
    final ServerSocket ss = new ServerSocket(12345);
    final Socket cs = ss.accept();
    System.out.println("Accepted connection");

    // Drain everything the client sends, as fast as it arrives.
    final InputStream in = cs.getInputStream();
    final byte[] tmp = new byte[64 * 1024];
    while (in.read(tmp) != -1);

    // Keep the process (and the connection) alive for a while.
    Thread.sleep(100000);
  }
}



import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class MyNioClient {
  public static void main(String[] args) throws Exception {
    final SocketChannel s = SocketChannel.open();
    s.configureBlocking(false);
    s.connect(new InetSocketAddress("localhost", 12345));
    // A non-blocking connect may not complete immediately; poll until it does.
    while (!s.finishConnect()) {
      Thread.sleep(10);
    }

    // Attempt ten 128 KB writes and report how much each one actually wrote.
    final ByteBuffer buf = ByteBuffer.allocate(128 * 1024);
    for (int i = 0; i < 10; i++) {
      System.out.println("to write: " + buf.remaining() + ", written: " + s.write(buf));
      buf.rewind();  // reset the position so the next iteration writes the same 128 KB again
    }
    Thread.sleep(100000);
  }
}

If you run the above server and then make the above client attempt to write 10 chunks of 128 KB each, you'll see that every write operation writes the whole buffer without blocking. However, if you modify the server so that it doesn't read anything from the connection, you'll see that only the first write on the client writes 128 KB, whereas all subsequent writes return 0.

Output when the server is reading from the connection:

to write: 131072, written: 131072
to write: 131072, written: 131072
to write: 131072, written: 131072
...

Output when the server is not reading from the connection:

to write: 131072, written: 131072
to write: 131072, written: 0
to write: 131072, written: 0
...
Alexander
A: 

You really should look at an NIO framework like MINA or Grizzly. I've used MINA with great success in an enterprise chat server; it is also used in the Openfire chat server. Grizzly is used in Sun's Java EE implementation.

Heath Borders
A: 

Where are you sending the data? Keep in mind that the network itself acts as a buffer at least as large as your SO_SNDBUF plus the receiver's SO_RCVBUF. Add to that the reading activity on the receiver's side, as Alexander mentioned, and a lot of data can get soaked up.
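As a quick sanity check (a sketch only, reusing the localhost:12345 endpoint from Alexander's example), you can ask the socket what buffer sizes the OS actually granted:

import java.net.InetSocketAddress;
import java.nio.channels.SocketChannel;

public class PrintBufferSizes {
  public static void main(String[] args) throws Exception {
    // Connect in blocking mode just to inspect the negotiated buffer sizes.
    final SocketChannel ch = SocketChannel.open(new InetSocketAddress("localhost", 12345));
    System.out.println("SO_SNDBUF = " + ch.socket().getSendBufferSize());
    System.out.println("SO_RCVBUF = " + ch.socket().getReceiveBufferSize());
    ch.close();
  }
}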

Darron