views: 263

answers: 2

I have a LAMP application running fine, but the number of users is increasing each day. I don't want to be hit with a surprise one morning and find that everything broke because of overload. Is there a way to get a rough estimate of how close the LAMP stack is to its capacity?

I know that a full, detailed analysis is many books' worth of study, but is there some quick litmus test to see whether things are running fine?

So, say for the MySQL component, how can I tell how much more load it can take? Is it at 30% capacity, 50%, etc.?

Same for Apache, although I have a feeling the DB will die before Apache does.


Perhaps my original question was not clear, as English is not my native language. What I am really asking for is a way to measure the current load, and then a way to estimate, based on that load, how much further I can go before it fails. (And this should be done separately for each component: mysqld, httpd.)

+2  A: 

For the current load, there are a couple of things you can do. The most expensive, yet most detailed, answers will be provided by an enterprise monitoring application such as Gomez.

However, if you're looking to do this yourself, see my previous answer below, or use shell utilities such as htop, top, and w, and enable Apache's server-status page.
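For example, a quick load snapshot might look like this (a minimal sketch; it assumes a typical Linux box and that mod_status is enabled with a /server-status location in your Apache config):

uptime                   # load averages vs. number of CPU cores
free -m                  # RAM and swap usage
top -b -n 1 | head -20   # biggest CPU/memory consumers right now

# Apache's built-in scoreboard (requires mod_status)
curl http://localhost/server-status?auto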

Previous answers before question revision:

What you are asking for is sometimes called application profiling.

You need to create a rough memory formula like:

httpd ram + php memory usage + mysql process usage = total request memory footprint
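A rough way to fill in that formula from the shell (a sketch only; process names like apache2/httpd and mysqld vary by distro):

# Average resident memory per Apache worker (use -C httpd on Red Hat-style systems)
ps -C apache2 -o rss= | awk '{sum+=$1; n++} END {if (n) print "avg apache2 RSS (KB):", sum/n}'

# mysqld resident memory
ps -C mysqld -o rss= | awk '{print "mysqld RSS (KB):", $1}'

# Very roughly: max concurrent workers = (total RAM - mysqld RSS - OS overhead) / avg worker RSS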

You will also need a CPU formula, but you can also eyeball top during a load test.

Apache has the command 'ab'.

"ab is a tool for benchmarking your Apache Hypertext Transfer Protocol (HTTP) server. It is designed to give you an impression of how your current Apache installation performs. This especially shows you how many requests per second your Apache installation is capable of serving." http://httpd.apache.org/docs/2.0/programs/ab.html

Here is a generic 'ab' benchmark command line:

ab -n 10 -c 1 http://www.yoursite.com/
# qty 10 total requests, 1 request at a time
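Once that works, you would ramp the numbers up to something closer to real traffic (the figures below are arbitrary; run ab from a different machine so the benchmark does not compete with the server for CPU):

ab -n 1000 -c 50 -k http://www.yoursite.com/
# 1000 total requests, 50 concurrent, with HTTP keep-alive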

The strategy is to measure the per-process (per-user) load on your application, from the web page request through completion. If you can identify how much RAM Apache, PHP, and MySQL use for each request, then you can quickly estimate your system's capacity.

You'll probably have to use a mix of diagnostic tools like vmstat, top, iostat, and ps to take a snapshot of what a given number of requests demands from your system.
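For instance, while a load test is running you might sample the system like this (the process names are assumptions; iostat requires the sysstat package):

vmstat 1 30        # run queue, swapping, CPU wait, sampled every second
iostat -x 1 30     # per-disk utilization and wait times
ps -C apache2,mysqld -o pid,rss,pcpu,cmd --sort=-rss | head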

Finally, you are going to want to install Xdebug. This tool will help you profile the PHP side of the application. http://xdebug.org/
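As a rough sketch, enabling Xdebug's profiler might look like this (Xdebug 2.x settings; the php.ini path is distro-specific, and zend_extension may need the full path to the .so file):

pecl install xdebug
cat >> /etc/php5/apache2/php.ini <<'EOF'
zend_extension=xdebug.so
xdebug.profiler_enable=1
xdebug.profiler_output_dir=/tmp
EOF
# Profiler output is a cachegrind file; open it with KCachegrind or Webgrind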

Here is IBM's tutorial on installing Xdebug:

http://www.ibm.com/developerworks/opensource/library/os-php-fastapps2/

randy melder
+2  A: 

ab is a bit annoying if your site needs cookies, etc.; ab is too simple for that.

Basically, from my experience fixing several imploding PHP websites, it usually goes like this:

1) People use MySQL

You can totally use MySQL; Facebook and Flickr do it (MySQL fanboys love those examples), IF YOU KNOW THE GOTCHAS, which are:

  • If you have a non-read-only MyISAM table and any query takes longer than 100 µs (even SELECTs), you are dead

On one site I fixed, the guy had rented a dual quad-core server because "his site needs the power". I looked at his site, I looked at my previous site with > 100K members and a torrent tracker, which ran on a fucking Via C7 micro-half-pizzabox server, and I told him: your site would run fine on the Celeron 300 that's in my basement, and even that is overkill; I could rent it to you for half the price of your Xeon, lol.

It turned out that the guy was a good developer and a really nice guy, but he sucked at MySQL, so his site had the typical Search Query From Hell that can kill any website:

  • 10 search queries from hell per second (he had like 300K members on his illegal warez site)
  • search query from hell takes about 0.1 - 0.2 seconds
  • a little stream of concurrent updates to the same MyISAM table to spice things up

=> total serialization (MyISAM write locks) of all queries: 1 core at 100%, 7 cores idle, loadavg > 1000 (yes, he was using Apache), page times > 30 seconds, the works.
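You can watch this kind of serialization happen live; for example (a sketch, and note that the "Locked" state string is what older MySQL versions report for table-lock waits):

mysql -u root -p -e "SHOW FULL PROCESSLIST" | grep -c "Locked"        # queries currently waiting on table locks
mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Table_locks_waited'"    # cumulative lock waits since startup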

The fix was easy: optimize the search query from hell, fix point 2) below, switch to InnoDB, and switch to lighttpd. loadavg dropped to 0.02.
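The InnoDB part of that fix is a one-liner per table (the database and table names below are made up; take a backup first, since the conversion rewrites the table):

mysql -u root -p mydb -e "ALTER TABLE posts ENGINE=InnoDB;"

# See which engine each table currently uses
mysql -u root -p mydb -e "SHOW TABLE STATUS\G" | grep -E "Name|Engine"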

2) UPDATEs

No one is interested in page counters. Issue one UPDATE for every page view and you are dead. Add some MyISAM for extra effect. It is also a killer on InnoDB; not because of locking, but because of synchronous disk I/O waits on every commit.

3) FULLTEXT

  • MyISAM is not usable for read-write tables because of locking.
  • MyISAM is about as reliable as a ramdisk (in fact, less so: you need an OS crash to corrupt a ramdisk, whereas corrupting MyISAM tables just takes a MySQL crash, or simply hitting them too hard concurrently; you'll get an "unknown table engine" error, I've seen this many times)
  • FULLTEXT is not available on InnoDB
  • Any insertion into a FULLTEXT index triggers almost a full index rebuild (when I inserted a forum post it was rebuilding 400 MB of index)

==> If you need full text indexing, performance, and reliability, use Sphinx or Xapian.

I've not tried Sphinx (people say good things about it), but Xapian happily searches through 4GB of text in a snap.

4) People use apache.

This nicely combines with the points above.

Unlike a proper server like lighttpd, whose CPU usage is undetectable (the crummy Via C7 was serving 100 HTTP hits/s and lighttpd used less than 1% CPU), Apache will kill your box.

When MySQL starts to die (it dies easily), clients start hitting F5 hard, and soon you have about 1000 Apache processes, each holding a PHP interpreter, and each PHP interpreter holding an idle MySQL connection waiting on a MyISAM lock; except one, which is doing some trivial UPDATE of your page view counter, but even that takes ages, because the server has gone to lunch swapping under the weight of 1000 Apache, 1000 PHP, and 1000 MySQL processes.

lighttpd uses practically no CPU for static pages; the only way to saturate your CPU with lighttpd is to hit it hard with apachebench at something like 20K requests/s. lighttpd then talks to a small, fixed number of php-fcgi backends, say 10 (2-4 per core is good), which in turn hold a few MySQL connections. Everything is a lot faster as a result, and when overloaded it degrades gracefully, not explosively.
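A minimal sketch of that kind of setup, assuming lighttpd 1.4 with mod_fastcgi (the paths and process counts are illustrative, not recommendations):

cat >> /etc/lighttpd/lighttpd.conf <<'EOF'
server.modules += ( "mod_fastcgi" )
fastcgi.server = ( ".php" => ((
    "bin-path"        => "/usr/bin/php-cgi",
    "socket"          => "/tmp/php.socket",
    "max-procs"       => 4,
    "bin-environment" => ( "PHP_FCGI_CHILDREN" => "2" )
)))
EOF
/etc/init.d/lighttpd restart
# roughly 4 x 2 = 8 PHP workers in total, instead of one per connected client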

To get back to the original question: you definitely want to profile your SQL queries. Add a query log to your PHP application which displays (only to you) the list of queries and the time each takes, plus the time from the start of the PHP script to its end (header/footer includes are a good place for this).
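If you would rather not touch the application code first, MySQL's slow query log with a low threshold gives a similar picture (a sketch; sub-second long_query_time needs MySQL 5.1+, and the log path varies by distro):

mysql -u root -p -e "SET GLOBAL slow_query_log = 'ON'; SET GLOBAL long_query_time = 0.05;"
tail -f /var/log/mysql/mysql-slow.log     # every query slower than 50 ms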

For a complex page (excluding search) you'd expect about 3 ms in MySQL and 3 ms in PHP; that's a good target. You need a PHP compiled-code (opcode) cache, of course.
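To check whether such a cache is already loaded (these are the common ones for this era of PHP):

php -m | grep -iE "apc|eaccelerator|xcache|opcache"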

peufeu