The Disclaimer

First of all, I know this question (or close variations of it) has been asked a thousand times. I really did spend a few hours looking in the obvious and the not-so-obvious places, but there may still be something small I'm missing.

The Context

Let me define the problem more clearly: I'm writing a newsletter app in which I want the actual sending process to be asynchronous. That is, the user clicks "send", the request returns immediately, and they can then follow the progress on a dedicated page (via AJAX, for example). It's written on a traditional LAMP stack.

On the particular host I'm using, PHP's exec() and system() are disabled for security reasons, but Perl's equivalents (exec, system and backticks) aren't. So my workaround is a "trigger" script in Perl that calls the actual sender via the PHP CLI and then redirects to the progress page.

Where I'm Stuck

The very line that calls the sender is, as of now:

system("php -q sender.php &");

The problem is that it doesn't return immediately; it waits for the script to finish. I want it to run in the background and have the system call itself return right away. I also tried running a similar script from my Linux terminal, and indeed the prompt doesn't come back until after the script has finished, even though my test output never shows, which suggests the script really is running in the background.

What I already tried

  • Perl's exec() function - same result as with system().
  • Changing the command to "php -q sender.php | at now", hoping that the at daemon would take the job and that the PHP process wouldn't stay attached to the Perl one (see the note after this list).
  • Executing the command "indirectly": "/bin/sh -c 'php -q sender.php &'" - it still waits until sender.php has finished sending.
  • fork()'ing and executing the system call in the child (hopefully a detached process) - same result as above.
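
As an aside on the at attempt (my note, not something from the original post): at(1) reads the commands it should run from its standard input, so the pipeline above hands it sender.php's output rather than the command itself. The usual form, assuming the at daemon is running and this account is allowed to use it, would be something like:

# Queue the command for immediate execution by atd; at(1) takes the
# command text on stdin, not the output of an already-running program.
system(q{echo "php -q sender.php" | at now});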

My test environment

Just to be sure I'm not missing anything obvious, I created a sleeper.php script which just sleeps five seconds before exiting, and a test.cgi script that reads like this, verbatim:

#!/usr/local/bin/perl
system("php sleeper.php &");
print "Content-type: text/html\n\ndone";

What should I try now?

+3  A: 

Use fork() and then call system in the child process.

my $pid = fork();
if (defined $pid && $pid == 0) {
    # child
    system($command);    # or exec($command)
    exit 0;
}
# parent
# ... continue ...
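
A possible explanation for the "still waits" behaviour reported in the comments (my aside, not part of the original answer): under CGI the forked child inherits STDOUT, which is the connection back to the web server, and the server can hold the request open until every copy of that handle is closed. A minimal sketch that points the child's standard handles at /dev/null before exec'ing the sender (the php command line is just the one from the question):

my $pid = fork();
die "fork failed: $!" unless defined $pid;
if ($pid == 0) {
    # Child: drop the handles inherited from the CGI process so the
    # web server isn't left waiting on them, then become the sender.
    open STDIN,  '<', '/dev/null' or exit 1;
    open STDOUT, '>', '/dev/null' or exit 1;
    open STDERR, '>', '/dev/null' or exit 1;
    exec 'php', '-q', 'sender.php' or exit 1;
}
# Parent: print the response / redirect right away.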
mobrule
exec (still followed by exit) would be the correct choice in a fork.
Hasturkun
Tried that now, but the main script also doesn't exit until after the called one is terminated.
Rafael Almeida
+8  A: 

Essentially you need to 'daemonize' a process -- fork off a child, and then entirely disconnect it from the parent so that the parent can safely terminate without affecting the child.

You can do this easily with the CPAN module Proc::Daemon:

use Proc::Daemon;
# do everything you need to do before forking the child...

# make into daemon; closes all open fds
Proc::Daemon::Init();
Ether
+1 - good module to know!
DVK
A: 

Managed to solve the problem. Apparently what was keeping it from returning was that calling the sender that way left its stdout connected. So the solution was simply to change the system call to:

system("php sender.php > /dev/null &");

Thanks everybody for the help. In fact, it was while reading the whole story about "daemonizing" a process that I got the idea to disconnect the stdout.

Rafael Almeida
I would not recommend using shell backgrounding to solve this problem -- it relies on your shell implementation and is not particularly portable. `fork` and `exec` is a much cleaner solution, IMHO.
friedo
I understand. But in this case simplicity is by far the main constraint. I know the fork/exec method should work right away (and I have it noted for future use), but for now I'm going with the simplest solution that works =)
Rafael Almeida
+1  A: 

Another option would be to set up a Gearman server and a worker process (or processes) that do the emailing. That way you control how much emailing goes on simultaneously, and no forking is necessary. The client (your program) can add a task to the Gearman server (in the background, without waiting for a result, if desired), and jobs are queued until the server hands them to an available worker. There are Perl and PHP APIs for Gearman, so it's very convenient.
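
A minimal sketch of that setup with the CPAN Gearman modules (the function name send_newsletter, the $newsletter_id value and the host:port are placeholders, and a running gearmand instance is assumed):

# --- worker.pl: runs permanently, pulls queued jobs and does the sending ---
use Gearman::Worker;

my $worker = Gearman::Worker->new;
$worker->job_servers('127.0.0.1:4730');        # adjust to your gearmand
$worker->register_function(send_newsletter => sub {
    my $job = shift;
    my $newsletter_id = $job->arg;
    # ... build and send the e-mails, recording progress somewhere ...
    return 1;
});
$worker->work while 1;

# --- in the trigger script: queue the job and return immediately ---
use Gearman::Client;

my $client = Gearman::Client->new;
$client->job_servers('127.0.0.1:4730');
my $newsletter_id = 42;                        # placeholder: whatever identifies the job
$client->dispatch_background(send_newsletter => $newsletter_id);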

runrig
+1  A: 

Sometimes STDERR as well as STDOUT can also keep the call from detaching. To redirect both, I use (in most of the shell environments I work with -- bash, csh, etc.):

system("php sender.php > /dev/null 2>&1 &");
SirGCal
You're right, in some cases this "detail" might be decisive too =)
Rafael Almeida