Hi,

I have a situation here at work where we run a JEE server with several applications deployed on it. Lately, we've been having frequent OutOfMemoryErrors. We suspect some of the apps might be behaving badly, possibly leaking memory.

The problem is, we can't really tell which one. We have run memory profilers (like YourKit), and they're good at showing which classes use the most memory, but they don't show the reference relationships between objects. That leaves us in a situation like this: we see that there are, say, lots of Strings, int arrays, and HashMap entries, but we can't tell which application or package they come from.

Is there a way of knowing where these objects come from, so we can try to pinpoint the packages (or apps) that are allocating the most memory?

Thank you in advance.

A: 

A quick thought: you could probably do some reflection, if you don't mind the performance trade-off...

Paul
+3  A: 

There are several things that one could do in this situation:

  • Configure the JEE application server to produce a heap dump on OOME. This feature has been available via a JVM parameter since the Java 1.5 days (see the example after this list). Once a dump has been obtained, it can be analyzed offline, using tools like Eclipse MAT. The important part is examining the dominator tree to see which objects retain the most memory.
  • Perform memory profiling on a test server; NetBeans is good at this. This is bound to take more time than the first option when it comes to analyzing the root cause, since the exact conditions of the memory allocation failure must be reproduced. If you have automated integration/functional tests, deducing the root cause will be easier. The trick is to take periodic heap dumps and analyze the classes that are contributing to the increase in heap consumption. There might not necessarily be a leak - it could be a case of insufficient heap size.
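
For the first option, the flags on a HotSpot JVM look something like this (the dump path below is just an example - point it at a disk with enough free space):

-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/dumps

Add these to the server's JVM options; the resulting .hprof file can then be loaded into Eclipse MAT.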
Vineet Reynolds
A: 

What I have found helpful is:

jmap -J-d64 -histo $PID

(remove the -J-d64 option for 32-bit arch)

This will output something like this:

num     #instances         #bytes  class name
----------------------------------------------
1:       4040792     6446686072  [B
2:       3420444     1614800480  [C
3:       3365261      701539904  [I
4:       7109024      227488768  java.lang.ThreadLocal$ThreadLocalMap$Entry
5:       6659946      159838704  java.util.concurrent.locks.ReentrantReadWriteLock$Sync$HoldCounter

And then from there you can try to further diagnose the problem, doing diffs and whatnot to compare successive snapshots.

This will only pause the VM for a brief time, even for big heaps, so you can safely do this in production (during off-peak hours, hopefully :) )
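
For the diff part, a minimal sketch (assumes a POSIX shell; file names are just examples):

jmap -histo $PID | head -n 30 > histo-1.txt
# ...wait for the heap to grow, e.g. run the suspect apps under load...
jmap -histo $PID | head -n 30 > histo-2.txt
diff histo-1.txt histo-2.txt

The classes whose instance counts and byte totals keep climbing between snapshots are the ones to investigate first.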

sehugg
Or just run `jvisualvm` if you're on a local machine (you can't do heap dumps remotely)
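If you do need a full dump from a box you can only reach over SSH, jmap can write one locally - something like this (the file name is just an example):

jmap -dump:live,format=b,file=heap.hprof $PID

The .hprof can then be copied off and opened in jvisualvm or Eclipse MAT.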
sehugg