My RMI-enabled application seems to be leaking sockets. I have a Java application providing a service over RMI, using the Java SE 1.6 RMI implementation running on Linux. The problem I am observing is that if a client obtains a reference to my Remote object through the Registry, and the connection is then severed abruptly (power loss, cable unplugged, etc.), the server keeps the socket open. I would expect the RMI implementation to clean up after the client's lease expires, but that is not happening. On the server, my Remote object's unreferenced()
method is called when the lease expires, but the socket remains visible in netstat in the "ESTABLISHED" state indefinitely.
Since we cannot force the clients into any particular behavior, after several days we hit the default limit on open file descriptors (1024 on our Linux distro), at which point the server can no longer open any new sockets or files. I thought about TCP keepalives, but since RMI abstracts away the network layer, I don't have access to the actual socket after the connection has been established.
Is there any way to force the RMI layer to clean up sockets tied to client connections with expired leases?
Update: The solution I used is similar to the chosen answer, but takes a different approach. I used a custom socket factory, and wrapped the ServerSocket instance returned by createServerSocket() in a transparent wrapper that passes all methods through to the delegate, except for accept(). In the accept() method, keepalives are enabled on the accepted socket before it is returned.
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class KeepaliveSocketWrapper extends ServerSocket
{
    private final ServerSocket _delegate;

    // ServerSocket's no-arg constructor declares IOException, so the
    // wrapper's constructor must declare it as well.
    public KeepaliveSocketWrapper(ServerSocket delegate) throws IOException
    {
        this._delegate = delegate;
    }

    @Override
    public Socket accept() throws IOException
    {
        // Enable TCP keepalives on every accepted connection so the OS
        // will eventually tear down sockets whose peer vanished abruptly.
        Socket s = _delegate.accept();
        s.setKeepAlive(true);
        return s;
    }

    // ... all other ServerSocket methods are passed straight
    // through to _delegate in the same fashion.
}
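For completeness, here is a minimal sketch of how such a wrapper might be installed, assuming a global RMISocketFactory is acceptable. The class name KeepaliveRMISocketFactory and the setup code are illustrative, not the exact code from my application:

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.rmi.server.RMISocketFactory;

// Illustrative factory: delegates to the default factory, wrapping the
// server socket so that accept() enables SO_KEEPALIVE.
public class KeepaliveRMISocketFactory extends RMISocketFactory
{
    public Socket createSocket(String host, int port) throws IOException
    {
        return getDefaultSocketFactory().createSocket(host, port);
    }

    public ServerSocket createServerSocket(int port) throws IOException
    {
        return new KeepaliveSocketWrapper(
                getDefaultSocketFactory().createServerSocket(port));
    }
}

The factory would be installed once, before any remote objects are exported (RMISocketFactory.setSocketFactory() may only be called once per VM):

RMISocketFactory.setSocketFactory(new KeepaliveRMISocketFactory());

Note that the keepalive probe timing is governed by the kernel, not by Java; on Linux it is controlled by sysctls such as net.ipv4.tcp_keepalive_time, which defaults to two hours.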