
I have a server application that, on rare occasions, can allocate large chunks of memory.

It's not a memory leak: these chunks can be reclaimed by the garbage collector by executing a full garbage collection. A normal (minor) garbage collection frees too little memory to be adequate in this context.

The garbage collector executes these full GCs when it deems appropriate, namely when the application's memory footprint nears the maximum specified with -Xmx.

That would be fine, were it not for the fact that these problematic memory allocations come in bursts and can cause OutOfMemoryErrors, because the JVM is not able to perform a GC quickly enough to free the required memory. If I manually call System.gc() beforehand, I can prevent this situation.

In any case, I'd prefer not to monitor my JVM's memory allocation myself (or insert memory management into my application's logic); it would be nice if there were a way to run the virtual machine with a memory threshold above which full GCs would be executed automatically, in order to release the memory I'm going to need as early as possible.

Long story short: I need a way (a command-line option?) to configure the JVM to release a good amount of memory early (i.e. perform a full GC) when memory occupation reaches a certain threshold; I don't care if this slows my application down every once in a while.

All I've found until now are ways to modify the sizes of the generations, but that's not what I need (at least not directly).

I'd appreciate your suggestions,

Silvio

P.S. I'm working on a way to avoid the large allocations, but it could take a long time, and meanwhile my app needs a little stability.

UPDATE: analyzing the app with jvisualvm, I can see that the problem is in the old generation.
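As a programmatic workaround along the lines the question asks for, the standard java.lang.management API can notify you when a memory pool crosses a usage threshold, at which point you can request a full GC yourself. The sketch below assumes a HotSpot-style tenured pool whose name contains "Old" or "Tenured"; the class and method names are illustrative, not a library API, and System.gc() remains only a hint to the VM.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryNotificationInfo;
import java.lang.management.MemoryPoolMXBean;
import javax.management.Notification;
import javax.management.NotificationEmitter;

public class GcThresholdMonitor {

    // Find a pool that looks like the tenured/old generation and
    // supports usage thresholds; returns null if none is found.
    static MemoryPoolMXBean findTenuredPool() {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            String name = pool.getName();
            if ((name.contains("Old") || name.contains("Tenured"))
                    && pool.isUsageThresholdSupported()) {
                return pool;
            }
        }
        return null;
    }

    // Ask to be notified when old-gen usage crosses the given fraction
    // of its maximum, and request a full GC when that happens.
    static void installGcTrigger(double fraction) {
        MemoryPoolMXBean tenured = findTenuredPool();
        if (tenured == null) {
            return; // no suitable pool on this VM
        }
        long max = tenured.getUsage().getMax();
        if (max < 0) {
            return; // pool maximum is undefined on this VM
        }
        tenured.setUsageThreshold((long) (max * fraction));

        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        NotificationEmitter emitter = (NotificationEmitter) memory;
        emitter.addNotificationListener((Notification n, Object handback) -> {
            if (MemoryNotificationInfo.MEMORY_THRESHOLD_EXCEEDED
                    .equals(n.getType())) {
                System.gc(); // hint only: the VM may ignore it
            }
        }, null, null);
    }

    public static void main(String[] args) {
        installGcTrigger(0.7); // trigger when old gen is ~70% full
    }
}
```

This keeps the triggering logic out of the application's business code, but it is still in-process monitoring; whether the resulting System.gc() runs early enough during an allocation burst would need to be measured.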

A: 

Try the -server option. It enables the parallel GC, and you should see some performance increase if you are on a multi-core processor.

Artic
I'm already using it
Silvio Donnini
A: 

Have you tried playing with the G1 GC? It should be available from 1.6.0u14 onwards.

mindas
+5  A: 

From here (this is a 1.4.2 page, but the same option should exist in all Sun JVMs):

Assuming you're using the CMS garbage collector (which I believe the server VM turns on by default), the option you want is

-XX:CMSInitiatingOccupancyFraction=<percent>

where <percent> is the percentage of old-generation occupancy at which a collection cycle will be triggered.

Insert the standard disclaimers here: messing with GC parameters can cause severe performance problems, results vary wildly by machine, etc.

Sbodd
+1  A: 

Do you know which of the garbage collection pools is growing too large, i.e. eden vs. survivor space? (Try the JVM option -Xloggc:<file> to log GC status to a file with timestamps.) Once you know this, you should be able to tweak the size of the affected pool with one of the options mentioned here: HotSpot options for Java 1.4.

I know that page is for the 1.4 JVM; I can't seem to find the same -X options in my current 1.6 install's help output, unless setting those individual pool sizes is a non-standard feature!
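As a programmatic complement to a -Xloggc log file, the pool sizes can also be read at runtime through the standard java.lang.management API. A minimal sketch (the class name is illustrative; pool names vary by collector):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class PoolSnapshot {

    // Print current usage of every memory pool (eden, survivor, old gen, ...)
    // and the collection counts and times of every garbage collector.
    public static void dump() {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage u = pool.getUsage();
            System.out.printf("%-25s used=%,d committed=%,d max=%,d%n",
                    pool.getName(), u.getUsed(), u.getCommitted(), u.getMax());
        }
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%-25s collections=%d time=%dms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }

    public static void main(String[] args) {
        dump();
    }
}
```

Calling dump() periodically (or from a management endpoint) shows which pool is filling up without attaching an external profiler.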

James B
the problem is definitely the old generation, as seen with jvisualvm
Silvio Donnini
+1  A: 

There's a very detailed explanation of how GC works here, and it lists the parameters that control the memory available to the different memory pools/generations.

Tomislav Nakic-Alfirevic
+2  A: 

When you allocate large objects that do not fit into the young generation, they are allocated directly in the tenured generation. That space is only collected during a full GC, which is what you are trying to force.

However, I am not sure this would solve your problem. You say the "JVM is not able to perform a GC quickly enough". Even if your allocations come in bursts, each allocation causes the VM to check whether it has enough space available. If it does not, and the object is too large for the young generation, it triggers a full GC, which "stops the world" and thereby prevents new allocations from taking place in the first place. Once the GC completes, your new object is allocated.

If the second large allocation of your burst is requested shortly afterwards, the same thing happens again. Depending on whether the initial object is still needed, the GC will either succeed in collecting it, making room for the next allocation, or fail if the first instance is still referenced.
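The manual System.gc() workaround from the question can be sketched as an allocate-then-retry helper. This is purely illustrative (the names are hypothetical, not a library API), and it only helps in the case described above where the first instance is no longer referenced:

```java
public class LargeAllocator {

    // Try to allocate a large buffer; on OutOfMemoryError, hint a full
    // collection and retry once. If the earlier chunks are still
    // referenced, the retry will throw just like the first attempt.
    static byte[] allocateWithRetry(int size) {
        try {
            return new byte[size];
        } catch (OutOfMemoryError first) {
            System.gc(); // hint only: ask for a full collection
            return new byte[size]; // may still throw if memory is truly exhausted
        }
    }

    public static void main(String[] args) {
        byte[] chunk = allocateWithRetry(16 * 1024 * 1024);
        System.out.println("allocated " + chunk.length + " bytes");
    }
}
```

Note that, as the next answer points out, the VM is supposed to attempt a collection on its own before throwing OutOfMemoryError, so a retry like this mostly papers over timing effects rather than fixing a real shortage.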

You say "I need a way [...] to release early a good amount of memory (i.e. perform a full GC) when memory occupation reaches a certain threshold". This can, by definition, only succeed if that "good amount of memory" is no longer referenced by anything in your application.

From what I understand, you might have a race condition that you can sometimes avoid by interspersing manual GC requests. In general you should never have to worry about these things: in my experience, an OutOfMemoryError only occurs if there are in fact too many allocations to fit into the heap concurrently. In all other situations the "only" problem should be performance degradation (which might become extreme, depending on the circumstances, but that is a different problem).

I suggest you analyze the exact problem further to rule this out. I recommend the VisualVM tool that comes with Java 6: start it and install the VisualGC plugin. This will let you see the different memory generations and their sizes. There is also a plethora of GC-related logging options, depending on which VM you use; some have been mentioned in other answers.

The other options, for choosing which GC to use and for tweaking its thresholds, should not matter in your case, because they all depend on enough memory being available to hold all the objects your application needs at any given time. They can help if you have performance problems related to heavy GC activity, but I fear they will not lead to a solution in your particular case.

Once you are more confident in what is actually happening, finding a solution will become easier.

Daniel Schneller
Yes, I believe it is a race condition. I think what happens is the following: my application receives many requests for memory allocation, which slow down the threads that need the memory. While these threads are working they can't release anything, so they hold onto chunks of memory; meanwhile a garbage collection is triggered, slowing the system down further and preventing other threads from releasing memory, in a vicious circle.
Silvio Donnini
+1  A: 

The JVM is only supposed to throw an OutOfMemoryError after it has attempted to release memory via garbage collection (according to both the API docs for OutOfMemoryError and the JVM specification). Therefore your attempts to force garbage collection shouldn't make any difference, and there might be something more significant going on here: either a problem with your program not properly clearing references or, less likely, a JVM bug.

Dan Dyer