views: 3584
answers: 3

We are currently using Apache 2.2.3 and Tomcat 5 (embedded in JBoss 4.2.2), with mod_proxy_jk as the connector.

Can someone shed some light on the correct way to calculate/configure the values below (as well as anything else that may be relevant)? Both Apache and Tomcat are running on separate machines and have copious amounts of RAM (4 GB each).

Relevant server.xml portions:

<Connector port="8009"
           address="${jboss.bind.address}"
           protocol="AJP/1.3"
           emptySessionPath="true"
           enableLookups="false"
           redirectPort="8443"
           maxThreads="320"
           connectionTimeout="45000"
    />

Relevant httpd.conf portions:

<IfModule prefork.c>
  StartServers       8
  MinSpareServers    5
  MaxSpareServers   20
  ServerLimit      256
  MaxClients       256
  MaxRequestsPerChild  0
</IfModule>
+3  A: 

MaxClients

This is the fundamental cap on the number of parallel client connections your Apache will handle at once.

With prefork, each process handles only one request at a time, so Apache as a whole can process at most MaxClients requests in the time it takes to handle a single request. Of course, this ideal maximum can only be reached if the application needs less than 1/MaxClients of the server's resources per request.

If, for example, the application takes one second of CPU time to answer a single request, setting MaxClients to four limits your throughput to four requests per second: each request ties up an Apache connection, and Apache will only handle four at a time. And if the server has only two CPUs, even that cannot be reached, because every wall-clock second contains only two CPU seconds, while the four requests would need four CPU seconds.
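
As a rough illustration of the memory side of this calculation (the ~15 MB per-child figure is only an assumption; check the real size of your own httpd children with ps or top before trusting it):

  # Memory-based ceiling, assuming ~15 MB resident size per prefork child:
  #   MaxClients <= (RAM reserved for Apache) / (average child size)
  #   e.g. (4096 MB total, minus 512 MB kept for the OS) / 15 MB, roughly 230 children
  ServerLimit      230
  MaxClients       230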

MinSpareServers

This tells Apache how many idle processes should be kept around. The bigger this number, the more burst load Apache can swallow before it needs to spawn extra processes, which is expensive and therefore slows down the current request.

The correct setting depends on your workload. If you have pages with many sub-requests (images, iframes, JavaScript, CSS), then hitting a single page can tie up many more processes for a short time.

MaxSpareServers

Having too many unused Apache processes hanging around just wastes memory, so Apache uses MaxSpareServers to limit the number of spare processes it holds in reserve for bursts of requests.
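
For example (the numbers are purely illustrative and should be matched to your own burst pattern):

  # Keep enough idle children ready that a page fanning out into ~15
  # sub-requests can be absorbed without forking, but reap children
  # once more than 40 of them sit idle.
  MinSpareServers   15
  MaxSpareServers   40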

MaxRequestsPerChild

This limits the number of requests a single process will handle over its lifetime (0 means a child is never recycled). If you are very concerned about stability, set an actual limit here so that Apache continually recycles its processes and resource leaks cannot build up and affect the system.
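
For example, instead of the 0 (never recycle) used in the question:

  # Recycle each child after a bounded number of requests so slow memory
  # leaks in Apache modules cannot accumulate indefinitely.
  MaxRequestsPerChild  10000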

StartServers

This is simply the number of processes Apache starts with. Set it to the number of Apache processes you usually see running to reduce the warm-up time of your system. Even if you ignore this setting, Apache will use the Min-/MaxSpareServers values to spawn new processes as required.
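
For instance, if the box normally runs around 40 busy children (again an illustrative figure, not a measurement):

  # Start near the steady-state process count so a restart does not spend
  # its first seconds forking its way up to the usual working level.
  StartServers      40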

More information

See also the documentation for Apache's multi-processing modules.

David Schmitt
Thanks, that clears up a lot of the Apache configuration options, but not how they should relate to the settings in Tomcat, nor how to configure either with regard to the available resources.
Jeremy
+1  A: 

The default settings are generally decent starting points to see what your application is really going to need. I don't know how much traffic you're expecting, so guessing at maxThreads, MaxClients, and the other server limits is a bit difficult. I can tell you that most of the customers I deal with (I work for a Linux web host that deals mainly with customers running Java apps in Tomcat) use the default settings for quite some time without needing many tweaks.

If you're not expecting much traffic, then these settings being "too high" really shouldn't affect you much either. Apache isn't going to allocate resources for the whole 256 potential clients unless it becomes necessary, and the same goes for Tomcat.

f4nt
We have already exceeded the default settings. A few months ago I noticed that all the available workers on Apache were being used, so I adjusted the settings to the ones above, which has been a huge help. I chose them somewhat arbitrarily. We average 100,000-350,000 hits/day.
Jeremy
+2  A: 

You should consider the workload the servers might get.

The most important factor might be the number of simultaneously connected clients at peak times. Try to determine it and tune your settings so that:

  • there are enough processing threads in both Apache and Tomcat that they don't need to spawn new threads when the server is heavily loaded, and
  • there are not far more processing threads than needed, since idle threads just waste resources.

With this kind of setup you minimize the internal maintenance overhead of the servers, which can help a lot, especially when your load is sporadic.

For example, consider an application that receives ~300 new requests/second and needs on average 2.5 seconds to serve each one. That means at any given time ~750 requests are being handled simultaneously. In this situation you probably want to tune your servers so that they have ~750 processing threads at startup, and you might want to allow something like ~1000 threads at maximum to handle extremely high loads.
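
A minimal sketch of what that sizing could look like on the AJP connector from the question, reusing the numbers of this example (they are illustrative, not a recommendation for your real traffic). Note that Apache's MaxClients/ServerLimit would have to be raised to a comparable value, or Apache itself becomes the bottleneck; also double-check that minSpareThreads is supported by your exact Tomcat/JBoss Web version.

  <!-- ~300 req/s x 2.5 s per request: ~750 concurrent, plus headroom -->
  <Connector port="8009"
             address="${jboss.bind.address}"
             protocol="AJP/1.3"
             emptySessionPath="true"
             enableLookups="false"
             redirectPort="8443"
             minSpareThreads="750"
             maxThreads="1000"
             connectionTimeout="45000"
      />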

Also consider exactly what you need a thread for. In the previous example each request was independent of the others; no session tracking was used. In a more "web-ish" scenario you might have users logged in to your website, and depending on the software you use, Apache and/or Tomcat might need to use the same thread to serve the requests that belong to one session. In that case you might need more threads. However, as far as I know, Tomcat at least handles this internally with thread pools, so you won't really need to worry about it.
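
If the Tomcat embedded in your JBoss is recent enough to support a shared <Executor> (that element only exists from Tomcat 6 onward, so check which JBoss Web/Tomcat version you actually run), the pool can even be made explicit and shared between connectors. A sketch under that assumption:

  <!-- Shared pool; requires Tomcat 6+ / JBoss Web, not classic Tomcat 5.x -->
  <Executor name="ajpThreadPool"
            namePrefix="ajp-exec-"
            minSpareThreads="750"
            maxThreads="1000" />

  <Connector port="8009"
             protocol="AJP/1.3"
             executor="ajpThreadPool"
             redirectPort="8443"
             connectionTimeout="45000" />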

Zizzencs