Dear community,

We have a Wicket-based Java application deployed in a production server cluster, using Apache (2.2.3) with mod_jk (1.2.30) as the load-balancing component with sticky sessions and JBoss 5 as the application container.
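
For context, the mod_jk worker configuration is the standard sticky-session load balancer setup; simplified, and with worker names and hosts replaced by placeholders, it looks roughly like this:

# workers.properties (simplified; worker names and hosts are placeholders)
worker.list=loadbalancer

worker.node1.type=ajp13
worker.node1.host=app1.example.com
worker.node1.port=8009

worker.node2.type=ajp13
worker.node2.host=app2.example.com
worker.node2.port=8009

worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2
worker.loadbalancer.sticky_session=1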

We are intermittently seeing an issue in our production environment where the AJP queues between Apache and JBoss, as shown in the JMX console, fill up with requests to the point where the application server no longer accepts any new requests. Looking at all involved system components (overall traffic, database load, database process list, load on all clustered application server nodes), nothing points towards a capacity problem that would explain why calls are stalling in the AJP queue; instead, all systems appear to be largely idle.
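
Concretely, what we watch is the AJP connector's thread pool exposed over JMX. A small check along the lines of the sketch below (the JMX service URL and the MBean name are placeholders for our setup; the exact ObjectName can be read off the JMX console) is how we see currentThreadsBusy climb to maxThreads and stay there while the rest of the system is idle:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class AjpPoolCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder JMX service URL for one JBoss node; host, port and path
        // depend on how remote JMX is exposed in your environment.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://app1.example.com:1090/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            // Default JBossWeb AJP thread pool name; verify the exact ObjectName
            // in the JMX console (jboss.web domain, type=ThreadPool).
            ObjectName pool = new ObjectName("jboss.web:type=ThreadPool,name=ajp-0.0.0.0-8009");
            System.out.println("currentThreadsBusy = " + mbsc.getAttribute(pool, "currentThreadsBusy"));
            System.out.println("currentThreadCount = " + mbsc.getAttribute(pool, "currentThreadCount"));
            System.out.println("maxThreads         = " + mbsc.getAttribute(pool, "maxThreads"));
        } finally {
            connector.close();
        }
    }
}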

So far, our only remedy is to restart the app servers and the load balancer, and even that only occasionally clears the AJP queues.

We are trying to figure out why the queues fill up to the point that no calls are returned to the end user, even though the system is not under high load.

Has anyone else experienced similar problems?

Are there any other system metrics we should monitor that could explain the queuing behavior?

Is this potentially a mod_jk issue? If so, is it advisable to replace mod_jk with mod_cluster to resolve the issue?

Any advice is highly appreciated. If I can provide additional information for the sake of troubleshooting I would be more than willing to do so.

/Ben

A: 

It smells a lot like a deadlock situation.

I would verify the number of Tomcat connections: if these also max out, then it is almost 100% certain that the problem is app- or DB-related.
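
One quick way to test the deadlock theory is to ask the JVM itself. A sketch along these lines (call it from inside the app server JVM, e.g. from a throwaway servlet or JSP; a thread dump via kill -3 gives the same information) reports true monitor deadlocks and lists the threads that are merely blocked or waiting:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Call dump() from inside the app server JVM;
// ManagementFactory only sees the JVM it is running in.
public class DeadlockProbe {

    public static void dump() {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();

        // Threads deadlocked on monitors or ownable synchronizers (null if none).
        long[] deadlocked = threads.findDeadlockedThreads();
        if (deadlocked != null) {
            for (ThreadInfo info : threads.getThreadInfo(deadlocked, Integer.MAX_VALUE)) {
                System.out.println(info);
            }
        } else {
            System.out.println("No JVM-level deadlock detected.");
        }

        // Threads that are merely BLOCKED or WAITING; if the AJP worker threads
        // all end up here (e.g. waiting on a DB connection or a lock), the queue
        // backs up even though the machines look idle.
        for (ThreadInfo info : threads.getThreadInfo(threads.getAllThreadIds(), 5)) {
            if (info != null && (info.getThreadState() == Thread.State.BLOCKED
                    || info.getThreadState() == Thread.State.WAITING)) {
                System.out.println(info.getThreadName() + " -> "
                        + info.getThreadState() + " on " + info.getLockName());
            }
        }
    }
}

If all the AJP worker threads turn out to be parked waiting for, say, a database connection or a lock held in the back end, that would explain a full queue on otherwise idle machines.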

Check for locks in the database when this happens. This might give a clue.

If you use Stateful Session Beans in the back end, I would give them a good looking over.

Peter Tillemans