views: 126
answers: 3
Hi there

I'm trying to understand why our ColdFusion 9 (JRun) server is throwing the following error:

java.lang.OutOfMemoryError: requested 32756 bytes for ChunkPool::allocate. Out of swap space?

The JVM arguments are as follows:

-server -Dsun.io.useCanonCaches=false -XX:MaxPermSize=192m -XX:+UseParallelGC -

I had jconsole running when the dump happened and I am trying to reconcile some numbers with the -XX:MaxPermSize=192m setting above. When JRun died it had the following memory usage:

Heap
 PSYoungGen      total 136960K, used 60012K [0x5f180000, 0x67e30000, 0x68d00000)
  eden space 130624K, 45% used [0x5f180000,0x62c1b178,0x67110000)
  from space 6336K, 0% used [0x67800000,0x67800000,0x67e30000)
  to   space 6720K, 0% used [0x67110000,0x67110000,0x677a0000)
 PSOldGen        total 405696K, used 241824K [0x11500000, 0x2a130000, 0x5f180000)
  object space 405696K, 59% used [0x11500000,0x20128360,0x2a130000)
 PSPermGen       total 77440K, used 77070K [0x05500000, 0x0a0a0000, 0x11500000)
  object space 77440K, 99% used [0x05500000,0x0a043af0,0x0a0a0000)

My first question concerns the PSPermGen, which the dump shows as the problem: it says the total is 77440K, but shouldn't it be 196608K (based on my 192m JVM argument)? What am I missing here? Is this something to do with the other non-heap pool, the Code Cache?

I'm running on a 32bit machine, Windows Server 2008 Standard. I was thinking of increasing the PSPermGen JVM argument, but I want to understand why it doesn't seem to be using its current allocation.

Thanks in advance!

+2  A: 

"ChunkPool::allocate. Out of swap space" usually means the JVM process has failed to allocate memory for its internal processing.

This is usually not directly related to your heap usage, as it is the JVM process itself that has run out of memory. Check the size of the JVM process within Windows. You may have hit an upper limit there.

This bug report also gives an explanation. http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=5004956

This is usually caused by native, non-Java objects not being released by your application, rather than Java objects on the heap.

Some example causes are:

  • Large thread stack size, or many threads being spawned and not cleaned up correctly. The thread stacks live in native "C" memory rather than the java heap. I've seen this one myself.
  • Swing/AWT windows being programmatically created and not disposed of when no longer used. The native widgets behind AWT don't live on the heap either.
  • Direct buffers from nio not being released. The data for the direct buffer is allocated to the native process memory, not the java heap.
  • Memory leaks in JNI invocations.
  • Many files opened and not closed.
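As an illustration of the direct-buffer cause above, here is a minimal sketch (the class and method names are mine, not from any of the sources linked here): direct `ByteBuffer`s are backed by native process memory, so holding on to them consumes address space that never shows up in the heap sections of a dump like the one in the question.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class DirectBufferDemo {
    // Allocate `count` direct buffers of `bytes` each. The backing storage
    // lives in native process memory, not on the Java heap, so these
    // allocations are invisible to PSYoungGen/PSOldGen/PSPermGen totals.
    static List<ByteBuffer> allocate(int count, int bytes) {
        List<ByteBuffer> buffers = new ArrayList<ByteBuffer>();
        for (int i = 0; i < count; i++) {
            buffers.add(ByteBuffer.allocateDirect(bytes));
        }
        return buffers;
    }

    public static void main(String[] args) {
        // Holds 4 MB of native memory while heap usage barely moves.
        List<ByteBuffer> held = allocate(4, 1024 * 1024);
        System.out.println("direct buffers held: " + held.size());
    }
}
```

If such buffers are created repeatedly and kept reachable, the process can exhaust native memory while every heap pool still looks healthy.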

I found this blog helpful when diagnosing a similar problem. http://www.codingthearchitecture.com/2008/01/14/jvm_lies_the_outofmemory_myth.html

Aaron
Thanks, I'll read all this. Any comment on the JVM PSPermGen argument versus the fact that my dump is showing a much smaller amount of PSPermGen memory to be 99% full? Why not the full allocation? Is THAT being used by the "C" memory?
Ciaran Archer
@Ciaran The MaxPermSize only sets the upper limit on the PSPermGen; the JVM is free to allocate less. It is normal for PermGen to be almost full, as the JVM rarely allows much slack there. If you run a tool like VisualVM against your application it will generate a graph of the PermGen so you can see how the heap is managed in real time. I suspect the full limit of PermGen is not being allocated because you are getting the ChunkPool::allocate error before your application needs 192m of PermGen.
Aaron
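The committed-versus-maximum distinction Aaron describes can be checked from inside the JVM; here is a minimal sketch (the class name `PermGenUsage` is mine, and note that on post-Java-7 JVMs the pool is called Metaspace rather than PermGen, so the pool names printed will vary):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class PermGenUsage {
    public static void main(String[] args) {
        // Walk every memory pool. "committed" is what the JVM has actually
        // reserved so far; "max" is the ceiling set by flags such as
        // -XX:MaxPermSize. Committed can sit well below max, which is why
        // a dump can show a 77440K PermGen under a 192m limit.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            long committed = pool.getUsage().getCommitted();
            long max = pool.getUsage().getMax();
            System.out.printf("%s: committed=%dK max=%dK%n",
                    pool.getName(), committed / 1024, max / 1024);
        }
    }
}
```

Running this alongside the application shows the same committed/max gap that jconsole and VisualVM graph.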
Also, there is a special error message for when you run out of PermGen space: It is "java.lang.OutOfMemoryError: PermGen space"
Aaron
OK, so it seems that the PermGen space might NOT actually be my problem, despite it showing as 99% full in my dump? Bloody hell. Does anyone know how to measure 'swap' space using Windows perfmon counters?
Ciaran Archer
+3  A: 

An "out of swap space" OOME happens when the JVM has asked the operating system for more memory, and the operating system has been unable to fulfill the request because all swap (disc) space has already been allocated. Basically, you've hit a system-wide hard limit on the amount of virtual memory that is available.

This can happen through no fault of your application or the JVM. Or it might be a consequence of increasing -Xmx etc. beyond your system's capacity to support it.

There are three approaches to addressing this:

  • Add more physical memory to the system.

  • Increase the amount of swap space available on the system; e.g. on Linux look at the manual entry for swapon and friends. (But be careful that the ratio of active virtual memory to physical memory doesn't get too large ... or your system is liable to "thrash", and performance will drop through the floor.)

  • Cut down the number and size of processes that are running on the system.
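Before picking one of these approaches, it can help to see how close the system actually is to the swap limit from inside the process. A sketch using the HotSpot-specific `com.sun.management` bean (the class name `SwapCheck` is mine, and this API is not part of the standard platform, so it may be absent on other JVMs):

```java
import java.lang.management.ManagementFactory;

public class SwapCheck {
    public static void main(String[] args) {
        // The standard OperatingSystemMXBean has no swap statistics; the
        // HotSpot extension in com.sun.management does, so test for it
        // rather than assuming it is present.
        java.lang.management.OperatingSystemMXBean std =
                ManagementFactory.getOperatingSystemMXBean();
        if (std instanceof com.sun.management.OperatingSystemMXBean) {
            com.sun.management.OperatingSystemMXBean os =
                    (com.sun.management.OperatingSystemMXBean) std;
            System.out.println("free swap (K): "
                    + os.getFreeSwapSpaceSize() / 1024);
            System.out.println("total swap (K): "
                    + os.getTotalSwapSpaceSize() / 1024);
        } else {
            System.out.println("swap stats not available on this JVM");
        }
    }
}
```

If free swap is already near zero while the heap pools look healthy, that points at the system-wide virtual memory limit rather than the heap settings.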

If you got into this situation because you've been increasing -Xmx to combat other OOMEs, then now would be good time to track down the (probable) memory leaks that are the root cause of your problems.

Stephen C
@Stephen - I've configured the JVM with -XX:+HeapDumpOnOutOfMemoryError and I'm going to use the Eclipse Memory Management plugin to get to the bottom of this leak when it happens next. In the meantime I might consider decreasing the heap size for now.
Ciaran Archer
It was indeed a memory leak! I've written about my experience here in the hope it might help others: http://ciaranarcher.tumblr.com/post/1003177746/coldfusion-out-of-stack-space-unfortunately-its
Ciaran Archer