In that kind of situation, the solution is often not to do that kind of heavy work within the Apache processes, but to either:
- run an external PHP process, using something like `shell_exec`, for instance -- do this if you must work in synchronous mode (i.e. if you cannot execute the task a couple of minutes later)
- push the task to a FIFO system, and immediately return a message to the user saying "your task will be processed soon"
- and have some other process (launched via a crontab entry every minute, for instance) check that FIFO queue
- and do the processing if there is something in the queue
- that worker process can itself run at low priority
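For the first option, the idea is to fire off the heavy command at low priority and return immediately, without waiting for it to finish -- in PHP this would be a `shell_exec` call whose command string ends in `&` and redirects its output. A minimal sketch of the same pattern, written in Python for illustration:

```python
import subprocess

def launch_background(cmd):
    """Run a heavy command outside the request, at low priority.

    'nice -n 19' lowers the scheduling priority so the task does
    not starve the web server; output is discarded and the call
    returns immediately instead of blocking on the child process.
    """
    return subprocess.Popen(
        ["nice", "-n", "19"] + cmd,
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
```

The command list here is up to you; the point is that the web process only pays the cost of spawning the child, not of running it.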
Whenever possible, and especially if the heavy calculations take some time, I would go for the second solution:
- It allows users to get some feedback immediately: "the server has received your request, and will process it soon"
- It doesn't keep Apache's processes busy for long: the heavy stuff is done by other processes
- If, one day, you need so much processing power that one server is not enough anymore, this kind of system will be easier to scale: just add a second server that picks from the same FIFO queue
- If your server is really too loaded, you can stop processing the queue, at least for a while, so the load can recover -- this can be useful if, for instance, your critical web services see heavy use in a specific time frame.
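The FIFO approach above can be sketched very simply with a file-based queue: the web request appends a job and returns at once, and a cron-launched worker drains whatever has accumulated. This is an illustrative Python sketch (the queue path and job format are assumptions; a real deployment would want a proper queue with locking, or a tool built for this):

```python
import json
import os

QUEUE_FILE = "/tmp/task_queue.jsonl"  # illustrative path, one JSON job per line

def enqueue(job):
    """Called from the web request: append the job and return immediately."""
    with open(QUEUE_FILE, "a") as f:
        f.write(json.dumps(job) + "\n")

def drain(process):
    """Called from cron every minute: handle queued jobs, oldest first.

    Returns the number of jobs processed. This version is naive:
    it removes the file before processing, so jobs enqueued while
    draining could be lost -- a real queue needs file locking or
    a database table instead.
    """
    if not os.path.exists(QUEUE_FILE):
        return 0
    with open(QUEUE_FILE) as f:
        jobs = [json.loads(line) for line in f if line.strip()]
    os.remove(QUEUE_FILE)
    for job in jobs:
        process(job)
    return len(jobs)
```

The worker script can also lower its own priority (e.g. by being launched via `nice` from the crontab), so that draining the queue never competes with the web server for CPU.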
Another (nice-looking, but I haven't tried it yet) solution would be to use some kind of tool such as Gearman, for instance:
> Gearman provides a generic application framework to farm out work to other machines or processes that are better suited to do the work. It allows you to do work in parallel, to load balance processing, and to call functions between languages. It can be used in a variety of applications, from high-availability web sites to the transport of database replication events. In other words, it is the nervous system for how distributed processing communicates.