views: 78

answers: 5

I am currently trying to build my project using Hudson to call Maven, and I keep running into an out of memory error. I have set the Xmx and Xms options in the environment variables, in the Hudson configuration, and in the Hudson project configuration. I set Xmx to 1500 MB, which should be more than enough since the whole project is less than 1000 MB. The machine used to build the project is a server that also hosts the team's Maven repository.

Has anyone come across the same problem? Any idea how it happens?

A: 

Assuming you are using Sun's JDK, go to Hudson / Manage Hudson / Maven Project Configuration / Global MAVEN_OPTS and set the following: -Xmx512m -XX:MaxPermSize=256m

Eugene Kuleshov
I have done that already.
Javabeginner
From your description it is unclear what options you've used. These options are case sensitive, so you need to be precise. It would also help to see the exact error text you are getting, e.g. whether it comes from Hudson itself, from the main Maven process, or from one of the child processes Maven launches to run tests and other tasks.
Eugene Kuleshov
A: 

Hudson kicks off a separate task to run Maven jobs. You will need to configure the increased memory in the MAVEN_OPTS text field, which is located in each individual job's configuration page.

Edit: Following up on your comments. Are you by chance forking your compile, running it in a separate execution, or forking your JUnit testing?

Try this in your compiler configuration (assuming you have one):

<maxmem>512m</maxmem>
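
Note that <maxmem> only takes effect when the compiler is forked into its own process. A minimal sketch of how such a maven-compiler-plugin configuration might look in the POM (the plugin version and exact values here are illustrative assumptions, not taken from the answer above):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>2.3.2</version>
    <configuration>
        <!-- run javac in a separate process so maxmem applies -->
        <fork>true</fork>
        <!-- maximum heap for the forked compiler process -->
        <maxmem>512m</maxmem>
    </configuration>
</plugin>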
John V.
I have done that already.
Javabeginner
@Javabeginner take a look at my edits. Maybe that information can help.
John V.
Thanks for the update. I did try running the build with the tests separated; apparently the build runs fine without the tests, so I presume it is the tests that cause the out of memory error. By tests I mean we use the Maven Surefire plugin to run them. I just read an article saying there may be a chance that Hudson, Maven, JBoss, or even Windows (the OS my server runs on) cannot support the Xmx config. I just double checked: the server has 4 GB of memory and Xmx is set to 1500 MB. Compiler config? Does that mean the Maven config in my case?
Javabeginner
Windows shouldn't have any issue with the Xmx VM argument. Try to look at what Hudson is doing with the Maven executable and see if it is in fact running the Maven command with the VM arguments. If that's the case, then an OOM with 1.5 GB is more than likely a problem with your tests.
John V.
It's easy on Linux: a ps -aef will show the Maven process with its arguments. It's probably launched using the classworlds jar.
John V.
A: 

Do make sure you have enough PermGen space (MaxPermSize). I've run into problems where the heap allocated to the JVM was sufficient, but the OutOfMemoryError was due to the perm space being exhausted. That is not too uncommon when dealing with compiling code, particularly if it is compiling code, throwing it away and compiling again. For more information about tuning the garbage collector (and memory), check out these references:

http://www.oracle.com/technetwork/java/gc-tuning-5-138395.html
http://www.oracle.com/technetwork/java/javase/memorymanagement-whitepaper-150215.pdf

The Memory Management Whitepaper outlines possible reasons for OutOfMemoryErrors on pages 16-17. Another defense is to fork the Maven process and/or the compiler.

Berin Loritsch
I am certain that the out of memory error I got wasn't related to the PermGen space, as we actually moved on to this out of memory error from a PermGen space error.
Javabeginner
It is difficult to troubleshoot a system without getting my hands on it directly. Did the docs I linked to help you figure out things to try? You're most likely going to have to profile the garbage collector while it's running to help find the reason. The docs tell you how to do that.
Berin Loritsch
+1  A: 

If you get an OOM during the tests, then you must tell the surefire plugin to fork a new VM for the tests:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.5</version>
    <configuration>
        <!-- fork a single new JVM for all tests -->
        <forkMode>once</forkMode>
        <!-- memory settings for the forked test JVM -->
        <argLine>-Xms512m -Xmx512m</argLine>
    </configuration>
</plugin>
Aaron Digulla
I have just done Java profiling on the module using VisualVM. Comparing its profile with the other modules, the failing module seems to have a lot of sleeping threads, whereas there are none for the other modules. Is that the reason for the out of memory error?
Javabeginner
We had to do this because a lot of our older unit tests were not cleaning up threads and other resources they used. When we asked the author of the Ant script, he told us it was easier to fork a new VM instead of fixing the unit tests. *sigh* So much technical debt.
Dan
@Javabeginner: Probably. You must close all files, set all references to null, and clean up anything your tests use in tearDown(): JUnit will create one instance of your test classes *per test* and keep them around until *all tests have run*.
Aaron Digulla
+1  A: 

Thank you everyone for answering my question. I solved the problem by making a heap dump and analysing it. I made the heap dump with -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=E:/. I then used Eclipse Memory Analyzer to open the java_pidxxxxx.hprof file.
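
The post doesn't say where those flags were set; one plausible place, assuming the tests run in a forked Surefire JVM as suggested in the previous answer, is the Surefire argLine. A hypothetical sketch:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <configuration>
        <forkMode>once</forkMode>
        <!-- write a .hprof file to E:/ when the forked test JVM runs out of memory -->
        <argLine>-Xmx512m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=E:/</argLine>
    </configuration>
</plugin>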

I found out that the listener we used to catch the exception could not actually catch it. So the exceptions sort of stayed in the VM and hence, memory leak!

Thanks again for all the answers.

Javabeginner