I've got a PHP script that I call to run MySQL database backups to .sql files, tar/gzip them, and e-mail them to me. One of the databases is hosted by a different provider than the one providing the web server. Everything is hosted on Linux/Unix. When I run this command:

$results = exec("mysqldump -h $dbhost -u $dbuser -p$dbpass $dbname > $backupfile", $output, $retval);

(FYI, I've also tried this with system(), passthru() and shell_exec().)

My browser loads the page for 15-20 seconds and then stops without finishing. When I watch the server with an FTP client, I can see the backup file appear a few seconds later, and its size grows until the database is fully dumped. So the backup file is created, but the script stops working before the file can be compressed and sent to me.

I've checked the max_execution_time variable in PHP and it's set to 30 seconds (longer than it takes for the page to stop working), and I've called set_time_limit() with values as high as 200 seconds.
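For reference, a minimal sketch of what I'm doing (the set_time_limit() call and the escapeshellarg() wrapping are shown for illustration; the variable names are as above):

<?php
set_time_limit(200); // raise the limit before launching the dump

// Same command as above, with the arguments shell-escaped.
$cmd = sprintf('mysqldump -h %s -u %s -p%s %s > %s',
    escapeshellarg($dbhost), escapeshellarg($dbuser),
    escapeshellarg($dbpass), escapeshellarg($dbname),
    escapeshellarg($backupfile));
$results = exec($cmd, $output, $retval);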

Anyone have any idea what's going on here?

+1  A: 

Are you on shared hosting, or are these your own servers? If the former, your hosting provider may have capped the max execution time at 15-20 seconds and locked it so it cannot be overridden (I have this problem with 1&1 and these types of scripts).
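A quick way to check whether the limit is locked (a sketch; ini_set() returns false when a setting cannot be changed at runtime):

<?php
echo 'max_execution_time: ', ini_get('max_execution_time'), "\n";
// Try to raise it and read it back.
if (ini_set('max_execution_time', '200') === false) {
    echo "cannot be overridden here\n";
} else {
    echo 'now: ', ini_get('max_execution_time'), "\n";
}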

Paolo
I'm on a shared host, but it looks like the max execution time is set higher than the time it takes for the script to fail.
Kevin
+1  A: 

Re-check the execution-time-related parameters with a phpinfo() call... maybe it's what Paolo describes.
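If digging through the phpinfo() output is a pain, the relevant values can also be read directly (a small sketch):

<?php
// ini_get() returns the local value; phpinfo() shows local and master side by side.
foreach (array('max_execution_time', 'max_input_time', 'memory_limit') as $key) {
    echo $key, ' = ', ini_get($key), "\n";
}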

Alfabravo
Thanks for your comment. I checked phpinfo() and confirmed that the max execution time is set to 30 seconds, which is longer than the script runs. The page fails within 15-20 seconds of being requested.
Kevin
+1  A: 

It could also be a (reverse) proxy that gives up after a certain period of inactivity. Granted, it's a long shot, but try:

// test A
$start = time();
sleep(20);
$stop = time();
echo $start, ' ', $stop;

and

// test B
for ($i = 0; $i < 20; $i++) {
  sleep(1);
  echo time(), "\n";
  flush(); // make sure the output actually leaves PHP between sleeps
}

If the first one times out and the second doesn't, I'd call that evidence, if not proof.

VolkerK
Thanks for your idea. I tried both tests and they BOTH worked without a hitch.
Kevin
A: 
  1. Do a manual dump and diff it against the broken one. This may tell you at which point mysqldump stops/crashes.
  2. Consider logging mysqldump's error output, as in mysqldump ... 2>/tmp/dump.log.
  3. Consider executing mysqldump detached, so that control returns to PHP before the dump is finished (see the sketch below).

On a side note, it is almost always a good idea to run mysqldump with -Q (--quote-names), so identifiers are backtick-quoted.
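A minimal sketch of point 3, assuming the same variables as in the question (/tmp/dump.log is just an example path). Backgrounding the command and redirecting all of its output makes exec() return immediately:

<?php
// nohup + & detaches the dump; without the redirects PHP would still block.
$cmd = sprintf('nohup mysqldump -Q -h %s -u %s -p%s %s > %s 2>/tmp/dump.log &',
    escapeshellarg($dbhost), escapeshellarg($dbuser),
    escapeshellarg($dbpass), escapeshellarg($dbname),
    escapeshellarg($backupfile));
exec($cmd);
// Poll the dump file's size (or /tmp/dump.log) later to see when it's done.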

mst
+1  A: 

Maybe the provider has set another resource limit beyond the php.ini setting. Try

<?php passthru('ulimit -a');

If the command is available, it should print a list of resources and their limits, e.g.:

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 4095
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 4095
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

You may find more restrictive settings than these on your shared server.
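The value most relevant here may be cpu time (-t). Limits are inherited by child processes, and exec() runs its command through /bin/sh, so the spawned shell reports the same limits the PHP process runs under. A sketch for reading just that value:

<?php
$cpu = exec('ulimit -t'); // soft CPU-time limit inherited by the spawned shell
echo "cpu time limit: $cpu\n"; // prints "unlimited" or a number of seconds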

VolkerK