I launch a child process in Java as follows:

final String[] cmd = {"<childProcessName>"};
Process process = Runtime.getRuntime().exec(cmd);

It now runs in the background. All good and fine.

If my program now crashes (it's still in development :-)), the child process seems to hang around. How can I make it end automatically when the parent Java process dies?

If it helps, I'm using Mac OS X 10.5

A: 

Not automatic, but here's a manual solution...

In Terminal.app:

killall <childProcessName>

Use the "-s" flag to do a dry-run to be sure you are killing exactly the processes you think you are killing.

Chris Dolan
+2  A: 

I worked it out myself: I add a shutdown hook, as follows:

final String[] cmd = {"<childProcessName>"};
final Process process = Runtime.getRuntime().exec(cmd);
Runnable runnable = new Runnable() {
    public void run() {
        // Destroy the child process when the JVM shuts down
        process.destroy();
    }
};
Runtime.getRuntime().addShutdownHook(new Thread(runnable));
Steve McLeod
+3  A: 

As you said, addShutdownHook is the way to go.

BUT:

  • There's no real guarantee that your shutdown hooks will run when the program terminates. Someone could forcibly kill the Java process, and in that case your shutdown hook will not be executed (as noted in this SO question).

  • Some of the standard libraries register their own hooks, which may run before yours.

  • Beware of deadlocks in your hook code.
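To make the first caveat concrete, here is a minimal sketch (the helper name registerDestroyHook is mine, not from the answer): the hook runs on orderly shutdown, such as a normal return from main, System.exit(), or SIGTERM, but never on kill -9, Runtime.halt(), or a hard JVM crash.

```java
public class HookCaveats {

    // Registers a best-effort hook that destroys the child on orderly
    // JVM shutdown. It will NOT run on SIGKILL (kill -9), Runtime.halt(),
    // or a native JVM crash.
    public static Thread registerDestroyHook(final Process child) {
        Thread hook = new Thread(new Runnable() {
            public void run() {
                child.destroy();
            }
        });
        Runtime.getRuntime().addShutdownHook(hook);
        return hook;
    }
}
```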

Another possibility would be to wrap your Java program in a service.

VonC
A: 

I tried the shutdown-hook solution described above and ran into a deadlock. I think the reason is the following:

Without the hook, there are no references to the Process instance, and everything proceeds as the API reference describes: "The subprocess is not killed when there are no more references to the Process object, but rather the subprocess continues executing asynchronously."

With the hook, a reference to the Process instance is kept in the Runnable we created, and we hit the problem described in the same API reference: "Because some native platforms only provide limited buffer size for standard input and output streams, failure to promptly write the input stream or read the output stream of the subprocess may cause the subprocess to block, and even deadlock."

Bottom line: this may not work if the spawned process reads from standard input (which is unlikely) or writes to standard output and/or error (which was my case).
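One way to avoid that deadlock is to drain the child's output and error streams in daemon threads before registering the hook. This is a sketch rather than the original poster's code; the "sh -c" command below is just a portable stand-in for the unspecified <childProcessName>:

```java
import java.io.IOException;
import java.io.InputStream;

public class ChildProcessDemo {

    // Consume a stream in a daemon thread so the child never blocks
    // on a full stdout/stderr pipe buffer.
    static void drain(final InputStream in) {
        Thread t = new Thread(new Runnable() {
            public void run() {
                byte[] buf = new byte[4096];
                try {
                    while (in.read(buf) != -1) {
                        // discard (or log) the child's output
                    }
                } catch (IOException ignored) {
                }
            }
        });
        t.setDaemon(true);
        t.start();
    }

    static int runWithHook(String[] cmd) throws IOException, InterruptedException {
        final Process process = Runtime.getRuntime().exec(cmd);
        drain(process.getInputStream());
        drain(process.getErrorStream());
        Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
            public void run() {
                process.destroy();
            }
        }));
        return process.waitFor();
    }

    public static void main(String[] args) throws Exception {
        // A child that writes to both stdout and stderr
        int code = runWithHook(new String[] {"sh", "-c", "echo out; echo err 1>&2"});
        System.out.println("child exited: " + code);
    }
}
```

With the streams drained, keeping a reference to the Process in the hook is harmless, and destroy() still fires on orderly shutdown.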