views: 90
answers: 5
I have a Java client-server application. The client is designed to run arbitrary user code. If the user code running on the client throws an OutOfMemoryError, the client ends up in an ugly state. Normally the client sends messages (via RMI) to the server until the code it is running terminates, at which point the client gracefully disconnects from the server.

What would people recommend for the OOM situation on the client? Could I catch the error and kill the client process? Presumably I would not be able to push any commands out from the server, because the client will be unresponsive.

If possible, I would like the client process to be terminated without having to log on to the client machine and kill it by hand. Clearly it would be best if the user code did not cause this error in the first place, but that is up to the user to debug, and up to my framework to deal with in as friendly a way as possible. Thanks for your ideas!

+2  A: 

It's a bad idea to catch OutOfMemoryError: if you catch it, you probably won't have enough memory left to kill the process anyway...

One good practice when developing a server-side application is to use a socket timeout: if the client doesn't send any command within a given amount of time, the connection is dropped. This makes your server more reliable and more secure, and prevents situations like yours from happening.
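A minimal sketch of the timeout idea, using a plain socket pair rather than RMI (the class and method names here are illustrative, not from the question's framework): the server-side read fails with `SocketTimeoutException` once the client has been silent too long, and the server can drop the connection.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class TimeoutDemo {
    // Returns true if the client stayed silent past the timeout.
    static boolean readTimedOut(Socket conn, int millis) throws IOException {
        conn.setSoTimeout(millis);        // reads now fail after `millis` ms of silence
        try {
            conn.getInputStream().read(); // would block forever without the timeout
            return false;                 // the client sent something
        } catch (SocketTimeoutException e) {
            return true;                  // client is dead or wedged: safe to drop it
        }
    }

    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(0);
             Socket client = new Socket("localhost", server.getLocalPort());
             Socket conn = server.accept()) {
            // The client never writes, so the server-side read times out.
            System.out.println("timed out: " + readTimedOut(conn, 200));
        }
    }
}
```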

Another thing you can do is try to make your client app "OutOfMemoryError proof": not in the sense that it can never run out of memory, but in the sense that running out shouldn't crash the whole application.

Vivien Barousse
Hmm, maybe this is sending me on the right path. I could have the server send messages to the client just to make sure it's still there, and then catch a timeout that way.
Ben Flynn
+1: OutOfMemoryError means the whole JVM is out of memory, not just the one piece of code that triggered it. And +1 for suggesting the timeout.
Nivas
A: 

You could do a few things. One is to spawn the user code into its own process and watch it. If it fails, you can then notify the back-end server of the failure so it can clean up the connection.
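A sketch of that spawn-and-watch approach, assuming the user code has some main class you can name (the class and `-Xmx` value below are placeholders): the child JVM gets its own capped heap, so an OutOfMemoryError kills only the child, and the watcher sees a nonzero exit status.

```java
import java.io.IOException;

public class UserCodeRunner {
    // Runs the (hypothetical) user main class in a child JVM with a capped
    // heap, so an OutOfMemoryError there kills only the child process.
    static int runAndWatch(String userMainClass) throws IOException, InterruptedException {
        Process child = new ProcessBuilder("java", "-Xmx256m", userMainClass)
                .inheritIO()
                .start();
        int exit = child.waitFor();  // blocks until the user code finishes or dies
        if (exit != 0) {
            // child crashed: notify the server here so it can clean up the connection
        }
        return exit;
    }
}
```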

You could use a stateless server (which may not be possible) with asynchronous communication protocols (such as HTTPInvoker) instead of RMI. (Again, this may not be possible depending on what you are trying to do).

You could watch the memory usage in the client and kill the thread running the user code when memory hits a certain point, then have the watcher close the connection and notify the server of the problem.
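A rough sketch of such a watcher, with an assumed 90% threshold (the class name and threshold are illustrative; note that stopping the user thread only frees memory if the thread actually exits and drops its references):

```java
public class MemoryWatcher implements Runnable {
    static final double LIMIT = 0.90;  // assumed threshold: 90% of max heap

    // True once heap usage crosses the threshold.
    static boolean overLimit() {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        return (double) used / rt.maxMemory() > LIMIT;
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            if (overLimit()) {
                // interrupt the user-code thread, close the connection,
                // notify the server, then exit the client; the user code
                // dies only if it honors interruption
                break;
            }
            try {
                Thread.sleep(1_000);   // poll once a second
            } catch (InterruptedException e) {
                return;
            }
        }
    }
}
```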

Or, as Vivien Barousse mentioned, you could have a low timeout on the server side to prevent the situation.

Hope these help.

aperkins
Yeah... with spawning if I hold on to a Process object, I should be able to destroy the process that runs out of memory, right?
Ben Flynn
I believe so - it has been a long time (5+ years) since I have done that, but we did it when I was teaching. We used it to run student assignments and check the results, and we would kill the process if it took too long. And since it was in its own process, if it crashed by running out of memory, that was OK too.
aperkins
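The hold-the-Process-and-kill-it idea can be sketched like this (requires Java 8 for the timed `waitFor` and `destroyForcibly`; the timeout trigger stands in for whatever condition your watcher uses):

```java
import java.util.concurrent.TimeUnit;

public class ProcessKiller {
    // Waits up to `seconds` for the child to exit on its own; kills it
    // forcibly if it is still running. Returns true on a clean exit.
    static boolean waitOrKill(Process child, long seconds) throws InterruptedException {
        if (child.waitFor(seconds, TimeUnit.SECONDS)) {
            return true;             // exited on its own
        }
        child.destroyForcibly();     // SIGKILL-equivalent on Unix
        child.waitFor();             // reap it so it is not left as a zombie
        return false;
    }
}
```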
+1  A: 

You could define some 'reserve memory' so that you can catch the error and deal with it (at least in most cases). But you need to make sure the reserved chunk is large enough. It works as follows:

static byte[] reserveMemory = new byte[1024 * 1024];
try {
    ...
} catch (OutOfMemoryError e) {
    reserveMemory = null; // free the reserved block so the handler can allocate
    cleanUpAndExit();
}
Thomas Mueller
Hmm, fascinating technique. I've never thought of this. I'll have to try it some time.
Jay
But once you get an OutOfMemoryError, I don't think there will be room left to clean up. The chance of cleanUpAndExit getting executed seems pretty remote (if not impossible) to me.
Nivas
@Nivas - the possibility of cleanUpAndExit getting executed is almost 100%, unless it tries to allocate (explicitly or implicitly) a lot of memory. It will fail if it tries to allocate more than 1 MB of course. I guess you didn't notice the `reserveMemory = null`
Thomas Mueller
@Thomas Mueller Got it.
Nivas
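A self-contained way to see the reserve-memory trick work (hypothetical demo class; the allocation loop is just a stand-in for runaway user code, and it runs fastest with a small heap such as `java -Xmx64m ReserveDemo`):

```java
import java.util.ArrayList;
import java.util.List;

public class ReserveDemo {
    static byte[] reserveMemory = new byte[1024 * 1024];  // 1 MB cushion

    // Fills the heap until allocation fails, then recovers using the cushion.
    static boolean exhaustAndRecover() {
        List<long[]> hog = new ArrayList<>();
        try {
            while (true) {
                hog.add(new long[1 << 20]);  // 8 MB per iteration, never freed
            }
        } catch (OutOfMemoryError e) {
            hog = null;            // drop the hoard
            reserveMemory = null;  // release the cushion so cleanup can allocate
            return true;           // a real client would cleanUpAndExit() here
        }
    }

    public static void main(String[] args) {
        System.out.println("recovered: " + exhaustAndRecover());
    }
}
```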
+1  A: 

Wrap the Java client in another program -- maybe written in C++, Java, or any other language (it must not run on the same VM as your client) -- that restarts the client or logs an error message.

Your client should log its state (create checkpoints) every x operations, as well as on application start and close. Then you can implement functionality in the client that cleans up the mess, based on the checkpoint info, whenever it is restarted after a crash. That way the wrapper itself does not need any complicated functionality.
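A minimal sketch of such a wrapper in Java (the command and class names are placeholders; the checkpoint recovery is assumed to happen inside the restarted client):

```java
import java.io.IOException;

public class ClientWrapper {
    // Relaunches the given command until it exits cleanly (status 0).
    // The restarted client is assumed to read its last checkpoint and
    // clean up before resuming work.
    static int runUntilClean(String... command) throws IOException, InterruptedException {
        while (true) {
            Process client = new ProcessBuilder(command).inheritIO().start();
            int exit = client.waitFor();
            if (exit == 0) {
                return exit;   // clean shutdown: stop wrapping
            }
            System.err.println("client exited with " + exit + "; restarting");
        }
    }
}
```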

P.S. OutOfMemoryError is a terminal error, so there is usually no sense in catching it. Of course, sometimes there is enough memory left to do something, but that is not the rule.

Skarab
+1  A: 

If you're routinely getting out-of-memory errors, I think the real solution is not to find a way to catch them; all that accomplishes is letting you die a less dramatic death. The real fix is to rework your application so it stops running out of memory. What are you doing that exhausts memory? If you're reading a bunch of data from a database and trying to process it all at once, maybe you can read it in smaller chunks, say one record at a time. If you're creating animations, maybe you can render one frame at a time instead of holding them all in memory at once. Etc.

Jay
If I had control over the code that runs inside my process, I'd totally do that. =)
Ben Flynn
@Ben: Ah. Bummer, man. So you can't stop the murder, all you can do is clean up the bodies.
Jay
@Jay, a great comment! :)
Skarab