views:

406

answers:

5

I have a standalone Java program running on a Linux server. I started the JVM with -Xmx256m. I attached a JMX monitor and can see that the heap never really passes 256 MB. However, on my Linux system, when I run the top command I can see that:

1) First of all, the RES memory usage of this process is around 350Mb. Why? I suppose this is because of memory outside of the heap?

2) Secondly, the VIRT memory usage of this process just keeps growing and growing. It never stops! It now shows 2500 MB! So do I have a leak? But the heap doesn't increase, it just cycles!

Ultimately this poses a problem because the system's swap usage keeps growing, and eventually the system dies.

Any ideas what is going on?


The important question I want to ask is: what are some scenarios in which this could be a result of my code and not the JVM, kernel, etc.? For example, if the number of threads keeps growing, would that fit my observations? Anything similar you can suggest I look out for?
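As an illustration of the thread scenario, here is a minimal sketch (the class name `ThreadLeakDemo` and the counts are invented for illustration) of how leaking threads grow VIRT while the heap stays flat: each started thread reserves its stack (sized by -Xss, commonly 512 KB to 1 MB) in native memory, outside the Java heap.

```java
// Hypothetical illustration: leaked threads consume native stack memory.
public class ThreadLeakDemo {

    // Start n threads that never exit; each reserves a thread stack
    // (-Xss, typically 512 KB-1 MB) outside the Java heap.
    static int startSleepers(int n) {
        for (int i = 0; i < n; i++) {
            Thread t = new Thread(new Runnable() {
                public void run() {
                    try {
                        Thread.sleep(Long.MAX_VALUE); // parked forever
                    } catch (InterruptedException ignored) {
                    }
                }
            });
            t.setDaemon(true); // don't keep the JVM alive in this demo
            t.start();
        }
        return n;
    }

    public static void main(String[] args) {
        int started = startSleepers(100);
        // VIRT in top grows by roughly started * stack size,
        // while the JMX heap gauge stays flat.
        System.out.println("Started threads: " + started);
    }
}
```

If this is what is happening, a thread dump (jstack, or kill -3 on the JVM's pid) will show the ever-growing thread list.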

A: 

Sounds like you have a leak. Can't you profile it to see which function is driving the memory up? I'm not sure, though.

rohit.arondekar
A: 

If I had to take a stab in the dark, I would say that the JVM you are using has a memory leak.

tster
A: 

Swap the Sun JVM for the IBM one (or vice versa) to test.


  1. RES will include code + non-heap data. Also, some things that you might think would be stored in the heap aren't, such as the thread stacks and "class data". (It's a matter of definition, but code and class data are controlled by -XX:MaxPermSize=.)

  2. This one sounds like a memory leak in either the JVM implementation, the Linux kernel, or JNI library code.

If using the Sun JVM, try IBM, or vice versa.

I'm not sure exactly how dlopen works, but code accessing system libraries might be remapping the same thing repeatedly, if that's possible.

Finally, you should use ulimit to make the process fail earlier, so you can repeat tests easily.

DigitalRoss
+4  A: 

A couple of potential problems:

  • Direct allocated buffers and memory-mapped files are allocated outside of the Java heap, and can't conveniently be disposed of.
  • An area of stack is reserved for each new thread.
  • The permanent generation (code and interned strings) is outside of the usual heap. It can be a problem if class loaders leak (usually when reloading webapps).
  • It's possible that the C heap is leaking.
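To make the first point concrete, a small sketch (the class name is invented for illustration) of native memory that the JMX heap gauge never sees:

```java
import java.nio.ByteBuffer;

// Hypothetical illustration: direct buffers live outside the Java heap.
public class DirectBufferDemo {

    // Reserve `mb` megabytes of native memory; RES/VIRT in top grow,
    // but JMX heap usage does not.
    static ByteBuffer allocateNative(int mb) {
        return ByteBuffer.allocateDirect(mb * 1024 * 1024);
    }

    public static void main(String[] args) {
        ByteBuffer buf = allocateNative(64);
        System.out.println("Native bytes reserved: " + buf.capacity());
        // The native memory is only freed when the buffer object is
        // garbage collected, which may be long after it is last used.
    }
}
```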

pmap -x should show how your memory has disappeared.

Tom Hawtin - tackline
There's basically a long, long list of entries like this http://pastie.org/629976 in my pmap -x output. Any ideas?
erotsppa
I had a similar problem and the same kind of output. It turned out to be the stack space allocated for new threads. (I had a thread leak.)
deathy
A: 

WRT #1, it's normal for your RSS to be larger than your heap. This is because system libraries and non-Java code are included in the RSS but not the heap size.

WRT #2, yes, it sounds like you have a leak of some sort. If the system itself is crashing, you are likely consuming too many system resources, such as sockets, threads, or files.

Try using lsof to see what files the JVM has open. Run it a few times as your memory usage increases. If the JVM is crashing, be sure to set the -XX:+HeapDumpOnOutOfMemoryError option.
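For example, a hypothetical sketch (`FdLeakDemo` and the file name are invented) of the kind of file-handle leak that lsof would expose:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration: streams opened but never closed each
// hold an OS file descriptor that lsof will list.
public class FdLeakDemo {

    static List<FileInputStream> leak(int n) {
        try {
            File tmp = File.createTempFile("fdleak", ".tmp");
            tmp.deleteOnExit();
            List<FileInputStream> leaked = new ArrayList<FileInputStream>();
            for (int i = 0; i < n; i++) {
                leaked.add(new FileInputStream(tmp)); // never closed
            }
            return leaked;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // `lsof -p <pid>` would now show ~50 descriptors on the temp file.
        System.out.println("Leaked streams: " + leak(50).size());
    }
}
```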

brianegge