I have a few questions about heap usage in my app. At times, for instance over the weekend when user activity is minimal (or nil), heap usage still increases linearly until it reaches the threshold that causes the GC to kick in. I want to analyze the cause of this heap usage and confirm whether it is normal and expected. Assuming no user activity, this heap usage may be caused by a daemon process, such as my app's daemon process or WebLogic threads. How can I confirm which daemon process is causing this behavior, and is there anything that needs to be done to resolve it?

I have heavily tested my app's daemon process and am fairly confident that it's not misusing the heap. But what can I do about WebLogic's daemon processes? Should I live with it or put up a fight?

EDIT: I am running WebLogic 10.3 with BEA JRockit 1.6. The linear increase in heap usage is observed for about 20 hours before the garbage collector runs. I have tested my app's process with JProbe and didn't find any leaks there.

+1  A: 

I would definitely try to hunt down the problem (which is probably in your code, not in WebLogic). The difficulty is that you didn't give many details about your environment (e.g. WLS version, Java version, platform) or your problem (e.g. how much time it takes until the GC kicks in), so it's a bit hard to provide guidance, but...

If this is an option, I'd use VisualVM (or the profiler of your choice) to analyze this memory leak. If possible, try to reproduce the problem in a local environment; it will be easier. If not, here is the JMX URL:

service:jmx:iiop:///jndi/iiop://host:port/weblogic.management.mbeanservers.runtime

Just make sure to enable Anonymous Admin Lookup: go to Domain > Security > General and check the Anonymous Admin Lookup Enabled checkbox.

Also make sure to enable the IIOP protocol for the Admin Server and the managed servers: go to Server > Protocols and check the Enable IIOP checkbox.
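
If you'd rather poll the heap from a small standalone client than keep a GUI profiler attached, something along these lines should work. This is a minimal sketch, assuming the WebLogic JMX client jars (e.g. wljmxclient.jar/wlclient.jar) are on the classpath so the iiop protocol resolves, that the server exposes the platform MXBeans through its runtime MBean server, and that host and port are placeholders for your server's address:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.util.HashMap;
    import java.util.Map;
    import javax.management.MBeanServerConnection;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class HeapProbe {
        public static void main(String[] args) throws Exception {
            // host:port is a placeholder for your admin/managed server address.
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:iiop:///jndi/iiop://host:port/"
                    + "weblogic.management.mbeanservers.runtime");
            // WebLogic's JMX protocol provider lives in the client jars.
            Map<String, Object> env = new HashMap<String, Object>();
            env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
                    "weblogic.management.remote");
            JMXConnector connector = JMXConnectorFactory.connect(url, env);
            try {
                MBeanServerConnection conn =
                        connector.getMBeanServerConnection();
                // Proxy the remote JVM's standard MemoryMXBean (this assumes
                // the platform MXBeans are reachable via this MBean server).
                MemoryMXBean memory = ManagementFactory.newPlatformMXBeanProxy(
                        conn, ManagementFactory.MEMORY_MXBEAN_NAME,
                        MemoryMXBean.class);
                System.out.println("Heap usage: "
                        + memory.getHeapMemoryUsage());
            } finally {
                connector.close();
            }
        }
    }

Polling this every few minutes over a weekend would give you the growth curve without keeping a profiler connected the whole time.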

Another option would be to take some heap dumps and analyze them with a tool like Eclipse MAT.

Update: Since it takes around 20 hours before the GC kicks in, I would schedule a task to generate heap dumps (e.g. one per hour) and analyze them to find out which objects eat memory over time. This might give a hint about the culprit process.
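
As a concrete sketch of such a scheduled task: the snippet below relies on the HotSpot-specific HotSpotDiagnosticMXBean, which is an assumption on my part, since JRockit does not expose it; on JRockit 1.6 you would script the jrcmd tool or use JRockit Mission Control to produce the dumps instead.

    import java.lang.management.ManagementFactory;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import com.sun.management.HotSpotDiagnosticMXBean;

    public class HourlyHeapDump {
        public static void main(String[] args) throws Exception {
            // HotSpot-only diagnostic bean (not available on JRockit).
            final HotSpotDiagnosticMXBean diag =
                    ManagementFactory.newPlatformMXBeanProxy(
                            ManagementFactory.getPlatformMBeanServer(),
                            "com.sun.management:type=HotSpotDiagnostic",
                            HotSpotDiagnosticMXBean.class);
            ScheduledExecutorService scheduler =
                    Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(new Runnable() {
                public void run() {
                    try {
                        // Timestamped file; 'true' restricts the dump
                        // to live (reachable) objects.
                        diag.dumpHeap("heap-" + System.currentTimeMillis()
                                + ".hprof", true);
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }, 0, 1, TimeUnit.HOURS);
        }
    }

Comparing two dumps taken several hours apart in MAT's dominator tree view usually points straight at whatever is accumulating.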

Pascal Thivent
Thanks for your input, I have added more details in my question.
Ravi Gupta
+1  A: 

Seems like the normal behavior of a J2EE server. Even if there is no load at all on your apps, there will be some activity on the server (housekeeping, monitoring, etc.) that keeps creating objects. Even the act of analyzing heap usage, as you are doing, creates objects.

I'm thinking the rate of object creation is very minimal, given that it takes 20 hours to fill the heap and kick off a GC cycle. To put that in perspective: if the heap were, say, 512 MB, filling it in 20 hours would mean only about 7 KB of surviving allocations per second. A bit more detail regarding your concern might help, e.g.:

- Are you getting OutOfMemory errors?
- What are the JVM startup args (heap sizes, garbage collector type, etc.)?
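
If verbose GC logging isn't already enabled, it is a cheap way to answer the second question yourself. A hedged example for a WebLogic startup script (the heap sizes are placeholders, and -Xverbose:memory is the JRockit spelling; HotSpot would use -verbose:gc instead):

    JAVA_OPTIONS="${JAVA_OPTIONS} -Xms512m -Xmx512m -Xverbose:memory"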

maneesh
+1  A: 

Take a look at prstat on the box for any other jobs running on the weekend. It is quite common that cron/backup jobs and the like run against your server, and the team has forgotten why they were set up in the first place.
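
For example, on a Solaris box (a sketch; prstat is Solaris-specific, and a Linux box would use top or ps instead):

    prstat -s cpu 5    # processes sorted by CPU usage, sampled every 5 seconds
    crontab -l         # list the current user's scheduled cron jobs

Checking root's crontab and any backup agent's schedule as well is worthwhile, since those are the usual weekend suspects.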

JoseK
Hmm...yep sounds logical. Will check it out, thanks.
Ravi Gupta