My Perl web-app, running under Apache mod_fastcgi, frequently gets errors like the following:

Maximal count of pending signals (120) exceeded at line 119.

I've seen this happen in relation to file uploads, but I'm not sure that's the only time it happens. I also get a SIGPIPE right before (or possibly right after) that error.

Any thoughts?

EDIT: Thanks for the suggestions, everyone. Someone asked what line 119 is; sorry, I should have put that in. It's in a block of code where I run the virus checker on an uploaded file. I don't get the error every time, only occasionally.

if (open VIRUS_CK, '|/usr/local/bin/clamscan - --no-summary >' . $tmp_file) {
  print VIRUS_CK $data;  # THIS IS LINE 119
  close VIRUS_CK;
  if (($? >> 8) == 1) {  # clamscan exits 1 when a virus is found
    open VIRUS_OUTPUT, '<' . $tmp_file;
    my $vout = <VIRUS_OUTPUT>;
    close VIRUS_OUTPUT;
    $vout =~ s/^stdin:\s//;
    $vout =~ s/FOUND$//;

    print STDERR "virus found on upload: $vout\n";
    return undef, 'could not accept attachment, virus found: ' . $vout;
  }
  unlink($tmp_file);
}
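For illustration, here is a sketch (not the poster's code) of one way to make that write survive clamscan exiting early, which is what raises SIGPIPE: locally ignore the signal and check the result of the print instead. It assumes $data and $tmp_file are set up as in the snippet above.

my $wrote_ok = do {
    local $SIG{PIPE} = 'IGNORE';  # a dead pipe now makes print fail with EPIPE instead of killing us
    open my $virus_ck, '|-', "/usr/local/bin/clamscan - --no-summary > $tmp_file"
        or die "cannot start clamscan: $!";
    my $ok = print {$virus_ck} $data;  # false if clamscan has already exited
    close $virus_ck;                   # sets $? to clamscan's exit status
    $ok;
};
print STDERR "write to clamscan failed: $!\n" unless $wrote_ok;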
+5  A: 

It means that the operating system is delivering signals to your Perl process faster than it can handle them, and the saturation point (120 pending signals) has been reached. Perl's "safe signals" mechanism queues incoming signals and dispatches their handlers at safe points between operations; this error means too many signals were queued before Perl reached such a safe point. It's a fatal error, so your Perl process terminates.

The solution is to figure out what's generating so many signals. See the "Deferred Signals (Safe Signals)" section of perlipc for more details.
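One way to see which signals are arriving (a sketch of my own, not part of the original answer) is to install counting handlers for the likely suspects and dump the tally to the error log when the process exits:

# Sketch: count incoming signals to find the source of the flood.
my %sig_count;
for my $name (qw(PIPE CHLD HUP TERM USR1)) {
    $SIG{$name} = sub { $sig_count{ $_[0] }++ };  # the handler receives the signal name
}
END {
    print STDERR "signal counts: ",
        join(', ', map { "$_=$sig_count{$_}" } sort keys %sig_count), "\n"
        if %sig_count;
}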


Update: My original answer was somewhat inaccurate, saying that generating a new Perl process was part of the issue, when in fact it wasn't. I've updated based on @ysth's comment below.

John Feminella
nothing to do with starting a new process; perl catches signals and saves them to be dispatched at a safe point between operations, and this error indicates a lot of signals were received before that safe point.
ysth
Do you have any thoughts on how to find out where the signal is coming from?
NXT
My suggestion: check the logging or debugging options for the FastCGI module in Apache.
mctylr
@ysth: Thanks; I've updated accordingly.
John Feminella
+2  A: 

I'll be hand-wavy because I haven't used mod_fastcgi in a long time, and it has been a while since I've looked at its documentation.

I'm guessing that your Perl application is non-forking but takes a while to run, so client disconnects take a while to be processed. See the Notes section of the Apache mod_fastcgi documentation about the signals used by FastCGI and how programs may wish to handle them, including SIGPIPE.
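For instance (my sketch, not taken from the mod_fastcgi docs; the exact signals mod_fastcgi sends are listed in its Notes section), a FastCGI request loop that notes SIGPIPE and shuts down gracefully on SIGTERM might look like:

use CGI::Fast;

my $exit_requested = 0;
$SIG{PIPE} = sub { print STDERR "client disconnected (SIGPIPE)\n" };
$SIG{TERM} = sub { $exit_requested = 1 };  # finish the current request, then exit

while (my $q = CGI::Fast->new) {
    handle_request($q);        # hypothetical per-request handler
    last if $exit_requested;
}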

mctylr
I do nothing on SIGPIPE except print to STDERR that I received a SIGPIPE. I'm not sure how I'd shut down an in-process request anyway. Is that the kind of thing where I should set a flag and check that flag occasionally during any long-running loops (not that I have any)?
NXT
_during any long-running loops (not that I have any)_ I suspect running clamscan and waiting for its results is the long delay. If the end user aborts the FastCGI script out of impatience while the virus scanner runs, that generates the SIGPIPE signal, as I understand it. This would also block other web requests while they wait for the Perl script (which is waiting for the virus scanner) to finish, so if those users also "Stop" or abort their connections, they generate additional SIGPIPE signals as well. -- Take this all with a grain of salt; it's been a while.
mctylr
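To make the flag idea from this thread concrete, here is a sketch (my illustration, not code from the discussion): the SIGPIPE handler only records that the client went away, and the request code polls the flag around expensive steps like the virus scan.

my $client_gone = 0;
$SIG{PIPE} = sub { $client_gone = 1 };   # just record it; no real work in the handler

sub scan_if_client_present {             # hypothetical wrapper around the clamscan step
    my ($data, $tmp_file) = @_;
    return if $client_gone;              # client already hung up; skip the expensive scan
    # ... run clamscan and write $data to it, as in the question ...
    if ($client_gone) {                  # client aborted mid-scan: clean up and bail
        unlink $tmp_file;
        return;
    }
    # ... otherwise inspect $? and $tmp_file as before ...
}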