views: 89
answers: 4

I'm working on a PHP web application that lets users network with each other, book events, message each other, and so on. I launched it a few months ago and at the moment there are only about 100 users.

I set up the application on a VPS running Ubuntu 9.10, Apache 2, MySQL 5 and PHP 5. It had 360 MB of RAM, which I upgraded to 720 MB a few minutes ago.

Lately, my web application has been experiencing outages due to excessive memory usage. From what I can tell from the error logs, the server automatically kills Apache processes that consume too much memory. That's why I upgraded the memory from 360 MB to 720 MB as a stop-gap measure.

So my question is: how do I go about resolving these outages? How do I tell whether my website's need for more memory is down to poor code or just part of its natural growth? And what's the most efficient way to determine which PHP scripts consume the most memory?

+1  A: 

Here is a tool that supports profiling PHP: http://xdebug.org/docs/profiler
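As a rough sketch, enabling the profiler looks like this in php.ini (directive names are from the Xdebug 2.x documentation; the extension path is an assumption, so check where your package manager installed xdebug.so):

```ini
; Load Xdebug and turn on its profiler (Xdebug 2.x directives).
zend_extension=/usr/lib/php5/modules/xdebug.so
xdebug.profiler_enable = 1
; Cachegrind output files land here; inspect them with KCachegrind/WinCachegrind.
xdebug.profiler_output_dir = /tmp/xdebug
```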

Achilles
A: 

Depending on your version of httpd and PHP, httpd could be holding on to segments of memory that it doesn't need, growing the running process size unnecessarily. I have a box that does this, and I've solved the issue by restarting httpd nightly, like so:

30 00 * * *     /httpd/sbin/apachectl restart
31 00 * * *     /httpd/sbin/apachectl start
35 00 * * *     /httpd/sbin/apachectl start
40 00 * * *     /httpd/sbin/apachectl start

As you can see, I follow the restart with three trailing starts, just in case Apache fails to come back to life after the restart. Three is probably overkill, but on the other hand, it doesn't hurt anything, so why not?

Zak
Also, it *is* possible that I get a process that doesn't die during the window in which the restart attempts to bring Apache back up. More specifically, I've had Apache die during a midnight graceful restart because a process took an exorbitant amount of time to exit. Again, if you're going to downvote, I'd appreciate a reason why, or at least a better way of doing things...
Zak
It's getting downvoted because it is not a solution, it's a workaround. There are lots of places to look to identify such leaks, which are very unlikely to have anything to do with the reported problem, where the zval heap is getting filled up.
symcbean
The question is: "how do I go about resolving these outage issues?" This may not be the best way, but it is a way that works to reclaim the leaked memory. It's like saying duct tape on a radiator hose isn't a repair, it's a workaround. Go ahead and sweat to death in the desert and refuse to use the duct tape, then.
Zak
A: 

Another thing you can do, if your process size is small, is turn down your httpd settings to a MinSpareServers of 1 and a MaxSpareServers of 3. If you only have 100 users this should be fine, as you won't care about the overhead of starting processes for only a few users.
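Assuming the prefork MPM (the default with mod_php), that tuning would look something like this in your Apache config; the StartServers and MaxClients values here are illustrative guesses for a small box, not tested figures:

```apache
# Prefork MPM tuning for a small, low-traffic site:
# keep only a handful of idle children around to save RAM.
<IfModule mpm_prefork_module>
    StartServers       2
    MinSpareServers    1
    MaxSpareServers    3
    MaxClients         20
</IfModule>
```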

Zak
+1  A: 

There's no simple answer to this, although I would suspect that it may be a problem in your code.

What is the memory_limit setting in your php.ini? Typically I'd recommend at least 4 MB, and usually 16 MB. How many concurrent hits are you fielding? Is the site doing a lot of heavy reporting of stats? Or rendering of images via PHP? Do you use file_get_contents() anywhere?

You really need to set up some custom logging to report, for each URL, the memory used at exit. E.g. via auto_prepend_file you could add:

<?php
register_shutdown_function('log_mem');

function log_mem()
{
    // STDERR is only defined under the CLI SAPI; error_log() works under
    // Apache too, writing to the configured error log.
    // memory_get_peak_usage() (PHP >= 5.2) records the high-water mark,
    // which is what matters when processes are being killed for size.
    error_log('[' . date('c') . '] ' . memory_get_peak_usage() . ' '
        . $_SERVER['REQUEST_URI']);
}

(Note - no closing PHP tag, which avoids accidentally emitting trailing whitespace.) This will write out the memory used by each PHP page to the error_log so you can isolate the problem more easily.
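Once the handler has produced some entries, a quick way to rank pages by memory is a one-liner along these lines. memlog.txt here stands for wherever those log lines end up (e.g. grepped out of your Apache error log); the field positions only assume that the byte count and request URI are the last two fields on each line, which holds for the log_mem() format above even with Apache's own log prefix in front:

```shell
# Print the two final fields (bytes, URI) of each log line,
# then sort numerically descending to see the hungriest pages first.
awk '{ print $(NF-1), $NF }' memlog.txt | sort -rn | head -20
```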

HTH

C.

symcbean