I have a collection of webapps that are running under Tomcat. Tomcat is configured to use as much as 2 GB of memory via the -Xmx argument.

Many of the webapps need to perform a task that ends up making use of the following code:

Runtime runtime = Runtime.getRuntime();
Process process = runtime.exec(command);
process.waitFor();
...

The issue we are having is related to the way this "child process" gets created on Linux (Red Hat 4.4 and CentOS 5.4).

It's my understanding that, for this child process to be created, an amount of memory equal to the amount Tomcat is using needs to be free in the pool of physical (non-swap) system memory. When we don't have enough free physical memory, we are getting this:

    java.io.IOException: error=12, Cannot allocate memory
     at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
     at java.lang.ProcessImpl.start(ProcessImpl.java:65)
     at java.lang.ProcessBuilder.start(ProcessBuilder.java:452)
     ... 28 more

My questions are:

1) Is it possible to remove the requirement that an amount of physical memory equal to the parent process's allocation be free? I'm looking for an answer that lets me specify how much memory the child process gets, or that allows Java on Linux to use swap for this.

2) What are the alternatives to Runtime.getRuntime().exec() if no solution to #1 exists? I could only think of two, neither of which is very desirable: JNI (very undesirable) or rewriting the program we are calling in Java and making it its own process that the webapp communicates with somehow. There have to be others.

3) Is there another side to this problem that I'm not seeing that could potentially fix it? Lowering the amount of memory used by Tomcat is not an option. Increasing the memory on the server is always an option, but seems more like a band-aid.

The servers are running Java 6.

EDIT: I should specify that I'm not looking for a Tomcat-specific fix. This problem can be seen with any of the Java applications we have running on the webserver (there are multiple). I simply used Tomcat as an example because it will most likely have the most memory allocated to it, and it's where we actually saw the error first. It is a reproducible error.

EDIT: In the end, we solved this problem by rewriting what the system call was doing in Java. I feel we were pretty lucky to be able to do this without making additional system calls. Not all processes will be able to do this, so I would still love to see an actual solution to this.

+1  A: 

Try using a ProcessBuilder. The docs say that's the "preferred" way to start up a sub-process these days. You should also consider using the environment map (docs are in the link) to specify the memory allowances for the new process. I suspect (but don't know for certain) that the reason it needs so much memory is that it is inheriting the settings from the Tomcat process. Using the environment map should allow you to override that behavior. However, note that starting up a process is very OS-specific, so YMMV.
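Here is a minimal sketch of the mechanics, assuming a placeholder command; the JAVA_OPTS and CATALINA_OPTS entries are purely illustrative, and nothing here is guaranteed to reduce the fork's memory requirement on Linux:

import java.io.IOException;
import java.util.Map;

public class ProcessBuilderSketch
{
    public static void main(String[] args) throws IOException, InterruptedException
    {
        // "myCommand" and its argument are placeholders for whatever you actually run
        ProcessBuilder builder = new ProcessBuilder("myCommand", "arg1");

        // the environment map starts as a copy of this JVM's environment;
        // entries can be changed or removed before the child is started
        Map<String, String> env = builder.environment();
        env.put("JAVA_OPTS", "-Xmx64m"); // illustrative only
        env.remove("CATALINA_OPTS");     // illustrative only

        builder.redirectErrorStream(true); // merge stderr into stdout
        Process process = builder.start();
        process.waitFor();
    }
}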

Ian McLaird
wow, this is cool
seanizer
I had looked at ProcessBuilder, and specifically its ability to modify the environment. You can actually do that with Runtime as well. However, I couldn't find any information on exactly what environment variable I should change to keep the initial memory low. I printed out the current environments of some processes and a memory-modifying parameter was not apparent. Can you point out how I could control the memory allocation with environment variables, in Linux specifically?
twilbrand
A: 

I think this is a Unix fork() issue: the memory requirement comes from the way fork() works -- it first clones the parent process image (at whatever size it currently is), and only then does exec() replace that copy with the new program's image. I know on Solaris there is some way to control this behavior, but I don't know offhand what it is.

Update: This is already explained in http://stackoverflow.com/questions/209875/from-what-linux-kernel-libc-version-is-java-runtime-exec-safe-with-regards-to-m
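To make the failure mode concrete, here is a small hypothetical reproducer (the class name, ballast size, and command are arbitrary). It only fails like this on a box where overcommit is restricted, e.g. vm.overcommit_memory=2, or where free memory plus swap really is smaller than the parent's footprint; run it with something like java -Xmx2g ForkMemoryDemo:

public class ForkMemoryDemo
{
    public static void main(String[] args) throws Exception
    {
        // touch ~1 GB so the parent's heap is genuinely committed, not just reserved
        byte[][] ballast = new byte[1024][];
        for (int i = 0; i < ballast.length; i++)
        {
            ballast[i] = new byte[1024 * 1024];
        }

        // even a tiny command can fail with "error=12, Cannot allocate memory" here,
        // because fork() momentarily needs room for a full copy of the parent
        Process p = Runtime.getRuntime().exec("/bin/true");
        System.out.println("exit code: " + p.waitFor());
    }
}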

Justin
I didn't actually see anything in that article that answered my questions. One suggestion I saw in there was to decrease the amount of memory being used by the parent process (not an option for us), whether with ulimit or Java opts. The other was the same as luke's answer, which is to make a separate process that uses less memory. This is far from ideal, but at least plausible.
twilbrand
A: 

I found a workaround in this article. Basically, the idea is that early in your application's startup you create a helper process that you communicate with (via its input stream), and that subprocess then executes your commands for you.

import java.io.BufferedWriter;
import java.io.IOException;
import java.io.OutputStreamWriter;

// you would probably want to make this a singleton
public class ProcessHelperClient
{
    private final BufferedWriter output;

    public ProcessHelperClient() throws IOException
    {
        // start the small helper JVM once, early, while memory is still plentiful
        Runtime runtime = Runtime.getRuntime();
        Process process = runtime.exec("java ProcessHelper");
        output = new BufferedWriter(new OutputStreamWriter(process.getOutputStream()));
    }

    public void exec(String command) throws IOException
    {
        // one command per line; the helper reads them with readLine()
        output.write(command);
        output.newLine();
        output.flush();
    }
}

Then you would make the helper Java program itself:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class ProcessHelper
{
    public static void main(String[] args) throws IOException, InterruptedException
    {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String command;
        while ((command = in.readLine()) != null)
        {
            // this JVM stays small, so forking from here is cheap
            Runtime runtime = Runtime.getRuntime();
            Process process = runtime.exec(command);
            process.waitFor(); // wait for each command to finish before reading the next
        }
    }
}

What we've essentially done is make a little 'exec' server for your application. If you initialize your ProcessHelperClient early on in your application, it will successfully create the helper process; from then on you simply pipe commands over to it. Because that second process is much smaller, its forks should always succeed.

You could also make your protocol a little more in-depth, such as returning exit codes, notifying of errors, and so on.
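For instance, a hypothetical variant of the helper (ProcessHelperWithStatus, a made-up name for this sketch) could echo each command's exit code back on its stdout, which the client would read with a BufferedReader wrapped around process.getInputStream():

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class ProcessHelperWithStatus
{
    public static void main(String[] args) throws IOException, InterruptedException
    {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String command;
        while ((command = in.readLine()) != null)
        {
            Process process = Runtime.getRuntime().exec(command);
            int exitCode = process.waitFor();
            // the client reads one result line back for each command it sends
            System.out.println(exitCode);
        }
    }
}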

luke
This probably will not work for a webapp. A webapp can be deployed at any time while the server is running, by which point the server may already be consuming a large amount of memory.
Arne Burmeister
@Arne Burmeister - that is true. If you are running this on a shared Tomcat instance, then you might have an arbitrarily large amount of memory allocated to the process by the time this runs. But if you are running on a non-shared Tomcat server (this is the only application) and you can hook into the application startup to execute this, it should work.
luke
As I stated in my edit above, I'm not looking for a Tomcat-specific answer. We have Tomcat with multiple webapps, as well as multiple other Java processes, that could all potentially see the same problem.
twilbrand