views: 1401

answers: 6

We are using Apache with JBoss to host our application, but we have run into some issues related to mod_jk's thread handling.

Ours is a low-traffic website, with at most 200-300 concurrent users during peak activity. As traffic grows (not in terms of concurrent users, but in terms of the cumulative number of requests reaching the server), the server stops serving requests for a long time: it does not crash, but it cannot serve requests for up to 20 minutes. The JBoss server console shows that 350 threads are busy on both servers, even though there is plenty of free memory, say more than 1-1.5 GB (we run two 64-bit JBoss servers, each with 4 GB of RAM allocated to JBoss).

To investigate the problem we used the JBoss and Apache web consoles, and we saw threads sitting in the S state for minutes at a time, even though our pages take only around 4-5 seconds to be served.

We took a thread dump and found that the threads were mostly in the WAITING state, which means they were waiting indefinitely. These threads did not belong to our application classes but to the AJP connector on port 8009.

Could somebody help me with this? Someone else may well have hit this issue and solved it somehow. If any more information is required, let me know.

Also, is mod_proxy better than mod_jk, or are there other problems with mod_proxy that could be fatal for me if I switch to it?
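(For reference, as far as I understand a mod_proxy setup would rely on mod_proxy_ajp, which needs Apache 2.2 or later while we are on 2.0.52, and would look roughly like the sketch below; the context path and backend host/port are just placeholders.)

    # Illustrative only: mod_proxy_ajp requires Apache 2.2+
    LoadModule proxy_module modules/mod_proxy.so
    LoadModule proxy_ajp_module modules/mod_proxy_ajp.so

    # "/myapp" and localhost:8009 are placeholders for the real context and backend
    ProxyPass /myapp ajp://localhost:8009/myapp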

The versions I used are as follows:

Apache: 2.0.52
JBoss: 4.2.2
mod_jk: 1.2.20
JDK: 1.6
Operating System: RHEL 4

Thanks for the help.

+1  A: 

We are experiencing similar issues. We are still working on a solution, but it looks like a lot of answers can be found here:

http://www.jboss.org/community/wiki/OptimalModjk12Configuration
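While we are still validating it ourselves, the kind of tuning that page walks through looks roughly like the following sketch; the worker name, host, and timeout values here are only illustrative, so check the wiki for values that match your mod_jk version.

    # workers.properties - illustrative sketch of the tuning discussed on the wiki
    worker.list=node1

    worker.node1.type=ajp13
    worker.node1.host=127.0.0.1
    worker.node1.port=8009

    # Recycle idle AJP connections instead of letting them pile up
    worker.node1.connection_pool_timeout=600
    worker.node1.socket_keepalive=1

The page also stresses pairing connection_pool_timeout with a matching connectionTimeout (in milliseconds) on the JBoss AJP connector, so that both sides drop idle connections at the same time.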

Good luck!

Naganalf
+1  A: 

You should also take a look at the JBoss Jira issue, titled "AJP Connector Threads Hung in CLOSE_WAIT Status":

https://jira.jboss.org/jira/browse/JBPAPP-366
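If hung AJP connections do turn out to be the culprit, the mitigation that usually comes up around that issue is to give the AJP connector an explicit timeout. A minimal sketch for JBoss 4.2.x follows; the file path and values are assumptions, so adapt them to your installation.

    <!-- deploy/jboss-web.deployer/server.xml (JBoss 4.2.x); values are illustrative -->
    <Connector port="8009" address="${jboss.bind.address}" protocol="AJP/1.3"
               emptySessionPath="true" enableLookups="false" redirectPort="8443"
               maxThreads="350" connectionTimeout="600000" />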

Stephen Souness
A: 

Did you ever resolve this? I'm seeing the same thing: my AJP threads are in the "Runnable" state when all of our AJP threads are pegged. We usually average around 20 AJP threads running, with a maximum of 100. When it pegs and uses all 100 threads, our thread dump shows 95 or more of them in a state like this:

"ajp-ncc20.ghx.com%2F10.100.10.40-8009-80" daemon prio=3 tid=0x01232d68 nid=0xac2 runnable [0x58f7f000..0x58f7fbf0] at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.read(SocketInputStream.java:129) at org.apache.coyote.ajp.AjpProcessor.read(AjpProcessor.java:1012) at org.apache.coyote.ajp.AjpProcessor.readMessage(AjpProcessor.java:1091) at org.apache.coyote.ajp.AjpProcessor.process(AjpProcessor.java:384) at org.apache.coyote.ajp.AjpProtocol$AjpConnectionHandler.process(AjpProtocol.java:366) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:446) at java.lang.Thread.run(Thread.java:595)

After 10-20 minutes the threads recover and drop back to around 20 in use; if we take a thread dump then, most of those are in a "wait" state and can seemingly accept new connections.

"ajp-ncc20.ghx.com%2F10.100.10.40-8009-19" daemon prio=3 tid=0x022c9570 nid=0x86d in Object.wait() [0x5df7f000..0x5df7fb70] at java.lang.Object.wait(Native Method) at java.lang.Object.wait(Object.java:474) at org.apache.tomcat.util.net.JIoEndpoint$Worker.await(JIoEndpoint.java:415) - locked <0x7d233430> (a org.apache.tomcat.util.net.JIoEndpoint$Worker) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:441) at java.lang.Thread.run(Thread.java:595)

Any input would be very helpful.

Seth Annabel
Yes, we were able to resolve it; it turned out to be a problem with our Hibernate configuration, not with AJP or JBoss. We made some optimizations and got it sorted out. Let me know if you want any help with it.
Beginner
A: 

Hi, yes, I'm still having this problem. I'd be interested in getting some help with it if you have time to share what you did.

Seth Annabel
I am posting my answer below rather than in this comment, as it includes some code.
Beginner
A: 

Here is what we did to sort this issue out:

 <property name="hibernate.cache.use_second_level_cache">false</property>


 <property name="hibernate.search.default.directory_provider">org.hibernate.search.store.FSDirectoryProvider</property>
    <property name="hibernate.search.Rules.directory_provider">
        org.hibernate.search.store.RAMDirectoryProvider 
    </property>

    <property name="hibernate.search.default.indexBase">/usr/local/lucene/indexes</property>

    <property name="hibernate.search.default.indexwriter.batch.max_merge_docs">1000</property>
    <property name="hibernate.search.default.indexwriter.transaction.max_merge_docs">10</property>

    <property name="hibernate.search.default.indexwriter.batch.merge_factor">20</property>
    <property name="hibernate.search.default.indexwriter.transaction.merge_factor">10</property>

 <property name ="hibernate.search.reader.strategy">not-shared</property>   
 <property name ="hibernate.search.worker.execution">async</property>   
 <property name ="hibernate.search.worker.thread_pool.size">100</property>  
 <property name ="hibernate.search.worker.buffer_queue.max">300</property>  

 <property name ="hibernate.search.default.optimizer.operation_limit.max">1000</property>   
 <property name ="hibernate.search.default.optimizer.transaction_limit.max">100</property>  

 <property name ="hibernate.search.indexing_strategy">manual</property> 

The above parameters ensured that the worker threads are no longer blocked by Lucene and Hibernate Search. Hibernate's default optimizer made our life easy, so I consider these settings very important.

We also removed C3P0 connection pooling and switched to the built-in JDBC connection pooling, so we commented out the section below.

    <!-- For JDBC connection pool (use the built-in) -->
    <!--
    <property name="connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
    -->
    <!-- DEPRECATED, very expensive: property name="c3p0.validate" -->
    <!-- seconds -->
After doing all this, we were able to considerably reduce the time an AJP thread took to serve a request, and threads started returning to the R (ready) state once they had finished serving a request, i.e. once the S (service) stage was over.

Beginner
A: 

There is a bug in Tomcat 6 that was filed recently. It relates to the HTTP connector, but the symptoms sound the same.

https://issues.apache.org/bugzilla/show_bug.cgi?id=48843#c1

Mark