In our application, we are using RMI for client-server communication in several different ways:
- Pushing data from the server to the client to be displayed.
- Sending control information from the client to the server.
- Callbacks from those control-message code paths that reach back from the server to the client (sidebar note: this is a side effect of some legacy code and is not our long-term intent).
What we would like to do is ensure that all of our RMI-related code uses only a known, specified inventory of ports. This includes the registry port (conventionally 1099), the server port, and any ports resulting from the callbacks.
Here is what we already know:
- LocateRegistry.getRegistry(1099) or LocateRegistry.createRegistry(1099) will ensure that the registry is listening on 1099.
- Using the UnicastRemoteObject constructor / exportObject static method with a port argument will specify the server port (see the sketch after this list).
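A minimal sketch of both points together. The `MyService` interface and the port numbers 1099/1100 are illustrative assumptions, not from our actual application:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Hypothetical remote interface, used only for illustration.
interface MyService extends Remote {
    String ping() throws RemoteException;
}

public class FixedPortServer {
    public static void main(String[] args) throws Exception {
        // Pin the registry to the conventional port 1099.
        Registry registry = LocateRegistry.createRegistry(1099);

        MyService impl = new MyService() {
            public String ping() { return "pong"; }
        };

        // Passing a non-zero port pins the exported object's server
        // socket to 1100; passing 0 would let RMI pick an anonymous port.
        MyService stub = (MyService) UnicastRemoteObject.exportObject(impl, 1100);

        registry.rebind("MyService", stub);
    }
}
```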
These points are also covered in this Sun forum post.
What we don't know is: how do we ensure that the callback connections from the server back to the client will only use a specified port rather than defaulting to an anonymous port?
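One approach, hinted at by the socket factories mentioned in the second edit below, is to install a global RMISocketFactory before anything is exported, so that every request for an anonymous port is redirected to a known one. A minimal sketch, assuming a hypothetical fixed callback port of 1101:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.rmi.server.RMISocketFactory;

// Global factory that maps RMI's "anonymous port" request (port == 0)
// onto a fixed, known port. 1101 is an assumed value for illustration.
public class FixedPortSocketFactory extends RMISocketFactory {
    private static final int CALLBACK_PORT = 1101;

    @Override
    public ServerSocket createServerSocket(int port) throws IOException {
        // RMI passes 0 when it wants an anonymous port; substitute
        // our known callback port instead.
        return new ServerSocket(port == 0 ? CALLBACK_PORT : port);
    }

    @Override
    public Socket createSocket(String host, int port) throws IOException {
        return new Socket(host, port);
    }

    public static void install() throws IOException {
        // Must run before any remote object is exported, or early
        // exports will still land on anonymous ports.
        RMISocketFactory.setSocketFactory(new FixedPortSocketFactory());
    }
}
```

With this factory installed on the client before the callback object is exported, the callback's server socket lands on the known port instead of an anonymous one; as the second edit notes, installing it too late means the first sockets are created with the defaults.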
EDIT: Added a longish answer summarizing my findings and how we solved the problem. Hopefully, this will help anyone else with similar issues.
SECOND EDIT: It turns out that my application has a race condition in its creation and modification of socket factories. I had wanted to allow the user to override my default settings in a BeanShell script. Sadly, the script runs well after the factory creates its first socket, so I end up with a mix of ports drawn from both the defaults and the user settings. More work will be required that is out of scope for this question, but I thought I would point it out for others who might have to tread these waters at some point....