I have a Java console application that is launched from a batch script on Windows and a shell script on Linux. In both cases, any command-line arguments (which are complex) are passed straight through to the Java app, which interprets them using Apache Commons CLI.

Now I want to allow users to allocate additional memory to the program. The simplest approach might be to add an argument for this (e.g. -m 1000), but there are drawbacks. Both the batch and shell scripts would then need to interpret the command line so they can pluck out the memory argument and turn it into an -Xmx parameter for the JVM. That effectively means catering for the full complexity of argument parsing in three places (batch, shell, and Java).

The only other approach that springs to mind is to have the scripts call a dummy launcher Java application, which uses the existing parsing logic to find the new argument and then forks another JVM (via Runtime.exec()) to run the main application with the correct memory ceiling. That would get around the code duplication issue.
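
As a rough sketch of what I have in mind (the main class name com.example.MainApp and the hand-rolled -m scan are placeholders; the real launcher would reuse my existing Commons CLI parsing, and ProcessBuilder.inheritIO() needs Java 7+):

    import java.io.File;
    import java.util.ArrayList;
    import java.util.List;

    // Minimal launcher sketch: strip the -m option, forward everything else,
    // and start the real application in a child JVM with the requested -Xmx.
    public class Launcher {
        public static void main(String[] args) throws Exception {
            int maxHeapMb = 512;                 // default memory ceiling
            List<String> passThrough = new ArrayList<String>();

            for (int i = 0; i < args.length; i++) {
                if ("-m".equals(args[i]) && i + 1 < args.length) {
                    maxHeapMb = Integer.parseInt(args[++i]);   // consume the value
                } else {
                    passThrough.add(args[i]);                  // forward everything else untouched
                }
            }

            List<String> command = new ArrayList<String>();
            command.add(new File(System.getProperty("java.home"), "bin/java").getPath());
            command.add("-Xmx" + maxHeapMb + "m");
            command.add("-cp");
            command.add(System.getProperty("java.class.path"));
            command.add("com.example.MainApp");                // hypothetical main class
            command.addAll(passThrough);

            Process child = new ProcessBuilder(command)
                    .inheritIO()                               // reuse the parent's stdin/stdout/stderr (Java 7+)
                    .start();
            System.exit(child.waitFor());                      // propagate the child's exit code
        }
    }

The launcher reuses the current JVM's java.home and classpath, so the batch and shell scripts can stay dumb pass-throughs.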

My question is - does this seem like a bad idea to anyone? Perhaps there are issues I'm not considering.

+1  A: 

Unless you need very high performance, I think it's OK. At work, we use the same approach for a similar problem: the user double-clicks the application's jar file, which then starts another process with the right amount of memory (usually -Xmx512m).

Things to consider: can the user add additional jar files to the classpath? Can the user choose a different JVM to use? If so, it might get tricky. You should also make sure the child process terminates, and you may need to redirect its input, output, and error streams, probably using separate threads.
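
If you can't rely on ProcessBuilder.inheritIO() (Java 7+), a minimal sketch of that redirection could be a copier thread like this (StreamPump is just an illustrative name):

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;

    // Copies one stream to another on its own thread so the child process
    // never blocks on a full output or error buffer.
    class StreamPump extends Thread {
        private final InputStream in;
        private final OutputStream out;

        StreamPump(InputStream in, OutputStream out) {
            this.in = in;
            this.out = out;
        }

        @Override
        public void run() {
            byte[] buf = new byte[4096];
            try {
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                    out.flush();
                }
            } catch (IOException ignored) {
                // child exited or the stream was closed; nothing more to copy
            }
        }
    }

    // Usage, given a Process started via Runtime.exec() or ProcessBuilder:
    // new StreamPump(child.getInputStream(), System.out).start();
    // new StreamPump(child.getErrorStream(), System.err).start();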

Thomas Mueller