views: 250
answers: 4
I have configured a simple LAMP stack on Debian and I am experiencing some problems with the Apache web server.

Every 3-4 hours the web server enters a deadlock and all requests that hit the database block. The server creates a new child for each request, so the number of processes increases very quickly. After a few seconds Monit notices something is wrong and restarts Apache.

I suspect this problem is caused by the way PHP handles database connections, because the server is still able to answer requests for static content. Have you experienced this kind of behavior? What should I try?

Update: problem solved. It seems it's a bad idea to use APC for both opcode caching and user data. I am now using Memcache for user data and APC only for opcode caching. I still get segmentation faults from time to time, but the server is stable most of the time.
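For reference, here is a minimal sketch of the split described in the update, assuming the pecl memcache extension and a memcached instance on localhost:11211 (the key name and expiry are made up for illustration). APC stays enabled purely as an opcode cache and is no longer called from application code:

    <?php
    // Hypothetical example: user data goes to memcached, not APC.
    $cache = new Memcache();
    if (!$cache->connect('localhost', 11211)) {  // assumed memcached host/port
        die('memcached is not reachable');
    }

    // Store a user record for 10 minutes (flags = 0, no compression).
    $cache->set('user:42', array('name' => 'andrei'), 0, 600);

    // Read it back; get() returns false on a miss.
    $user = $cache->get('user:42');

    // APC is left to do opcode caching only: no apc_store()/apc_fetch()
    // calls remain in the application code.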

+1  A: 

Why don't you have a look at the logs? /var/log/apache2/* is a good place to start. What is requested just before the server dies? From there you can probably deduce what is going wrong. Since PHP scripts are terminated after 30 seconds by default, the mistake needs to be quite massive to cause something like that.

phihag
There are no messages in the error log. I ran strace on a blocked child and it was waiting on a futex. I think there is a problem in the code connecting PHP to the MySQL server, because at the same time the server was able to handle requests for static content.
Andrei Savu
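(The 30-second default phihag mentions is PHP's max_execution_time. On a stock Debian install the value lives in the Apache SAPI's php.ini; the path below is the usual location and may differ on your system.)

    ; /etc/php5/apache2/php.ini  (typical Debian path -- adjust as needed)
    ; Maximum time in seconds a script may run before it is terminated.
    max_execution_time = 30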
+1  A: 

Check your timeout settings in /etc/apache2/apache2.conf. I have seen similar problems when Timeout is set high and the system gets hit with a bunch of dropped connections.

Brian C. Lane
The timeout is set to 100. Is this value too high?
Andrei Savu
Unless you are getting slammed by a worm that drops connections without closing them, that should be fine (I use 60 myself).
Brian C. Lane
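For reference, the directive being discussed lives in /etc/apache2/apache2.conf; a minimal excerpt with the lower value Brian mentions (60 seconds; tune it to your own traffic) looks like this:

    # /etc/apache2/apache2.conf
    # Seconds Apache waits on certain network I/O before giving up on a request.
    Timeout 60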
+3  A: 

I would suspect that the problems are:

  • A difficult long-running database query which blocks further requests. This happens easily with the MySQL MyISAM engine, which has only table-level locking: readers can block writers and vice versa, so a single tricky query on, say, a user table can pretty much block the entire server while the database waits for I/O. You can usually diagnose this with "SHOW PROCESSLIST" or a tool that runs it for you.
  • MaxClients set much too high for the RAM available on a prefork server - almost everyone does this. If you are using a "fat" prefork Apache (e.g. with in-process PHP), don't set MaxClients higher than you have RAM for; that limit is probably a lot lower than the typical values of 100 or 150 (a sizing sketch follows below).

These two things conspire to cause the issue you're seeing. Both need to be fixed, as either can cause problems on its own.

This is based entirely on guesswork and experience.

MarkR
We are using InnoDB. I will improve our monitoring scripts to save the output of SHOW PROCESSLIST when Apache blocks, but I don't think we have any strange queries running. The server has plenty of RAM; swap is never used.
Andrei Savu
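A rough way to act on the MaxClients point above: measure the resident memory of one Apache child with mod_php loaded, divide the RAM you can spare for Apache by that figure, and use the result as the ceiling. The excerpt below is only a sketch with made-up numbers (about 30 MB per child and roughly 1.5 GB spare RAM); the directives go in the prefork section of the Apache 2.x configuration:

    # prefork MPM sizing sketch -- hypothetical numbers
    # ~1500 MB spare RAM / ~30 MB per mod_php child  =>  about 50 clients
    <IfModule mpm_prefork_module>
        StartServers         5
        MinSpareServers      5
        MaxSpareServers     10
        MaxClients          50
    </IfModule>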
A: 

The mysql-slow log is also useful for finding the slow queries that cause problems.

sjbotha
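To enable the slow query log on a Debian MySQL install, the relevant lines are usually shipped commented out in /etc/mysql/my.cnf; the directive names below are the MySQL 5.0-era ones and differ slightly on newer versions (which use slow_query_log / slow_query_log_file):

    # /etc/mysql/my.cnf, [mysqld] section
    log_slow_queries = /var/log/mysql/mysql-slow.log
    long_query_time  = 2    # log statements that run longer than 2 seconds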