tags:
views: 260
answers: 5

OK, so I got the signal through, but for some reason the process exits after it receives the signal.

If I add an endless loop (while(1)) before I even create the socket, then it works as prescribed. So something in my socket code is quitting when the kill command is issued.

I don't see what it would be, though. Without the kill, the process sits there indefinitely, accepting connections and sending messages to the clients. Why would the kill (and the increment of the variable that follows) provoke the socket to get out of its loop and let the process end?

The socket code is below...

[EDITED again]

$SIGNAL = 0;
        sub sigHandler{ 
            #&logData("SIGNALED");
            $SIGNAL++ ;
        }
        $SIG{"USR1"}=\&sigHandler;



        # Create a new socket, on port 9999
        my $PORT = 9999;
        print ("opening connection on port $PORT");
        $lsn = new IO::Socket::INET(Listen => 1, 
                                    LocalPort => $PORT,
                                    Reuse => 1,
                                    Proto => 'tcp' );
           #or die ("Couldn't start server: $!");



        # Create an IO::Select handler
        $sel = new IO::Select( $lsn );

        # Close filehandles

        close(STDIN); close(STDOUT);

        warn "Server ready.  Waiting for connections on port $PORT . . .\n";


        # Enter into while loop, listening to the handles that are available.
        # this SHOULD be an infinite loop... I don't see why it would eval to false when 
        # I send a signal to increment $SIGNAL by one.
        while( @read_ready = $sel->can_read ) {
            $MESSAGE = 0;
            $fh  = $read_ready[0];

                # Create a new socket
                if($fh == $lsn) {
                    $new = $lsn->accept;
                    $sel->add($new);
                    push( @data, fileno($new) . " has joined.");
                    warn "Connection from " . $new->peerhost . ".\n";

                }

                # Handle connection
                else {

                    $input = <$fh>;
                    chomp $input;
                    warn "GOT INPUT '$input'\n";

                    if($input eq "<policy-file-request/>"){
                        $MESSAGE = 
                            qq~<?xml version="1.0"?>
                            <cross-domain-policy>
                              <allow-access-from domain="*" to-ports="*"/>
                            </cross-domain-policy>\0~;
                        $SIGNAL++;
                    }


                    if ( $input eq '') {#disconnection notification by client

                        warn "Disconnection from " . $fh->peerhost . ".\n";                
                        $sel->remove($fh);
                        $fh->close;
                    }
                    if ( $input eq 'READY'){
                        warn "CLIENT READY = 1\n";
                        $CLIENT_READY = 1;
                    }

                }



            # Write to the clients that are available
            foreach $fh ( @write_ready = $sel->can_write(0) ) {
                if($MESSAGE == 0){
                    #set message here based on criteria
                    $MESSAGE = "UPDATE";
                }
                warn "outside send if\n";
                if($CLIENT_READY == 1 && $SIGNAL > 0){
                    warn ("sending $MESSAGE to $fh\n");         
                    $CLIENT_READY = 0;          
                    $SIGNAL--;
                    print $fh "$MESSAGE\0" or warn "can't send message to $fh";  
                }             
            }
        }


        warn "Server ended.\n";
+5  A: 

If you install a signal handler in your Perl script, the signal does not have to end the script (just do not exit the handler by calling die). Note that you do not need to send SIGINT or SIGKILL; you can also send SIGUSR1.

Edit The command line of your kill command has a comma after -USR1 that should not be there; it should be `kill -USR1 4169`.

Edit 2 The while (can_read) loop probably exits when it receives an empty array of file handles after being interrupted by the signal. You could prevent this by using the condition:

while ((@read_ready = $sel->can_read) || 0 < $sel->count ) {

and update the loop to handle an empty @read_ready array.
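A minimal, self-contained demonstration of the interruption (this is not the poster's server; the pipe and variable names are made up for illustration). On Linux, select() is not restarted after a signal handler fires, so can_read comes back with an empty list:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Select;

# Handler only records the signal; what matters is that it interrupts select().
my $got_signal = 0;
$SIG{USR1} = sub { $got_signal = 1 };

pipe(my $reader, my $writer) or die "pipe: $!";
my $sel = IO::Select->new($reader);

my $pid = fork() // die "fork: $!";
if ($pid == 0) {                        # child: signal the parent shortly
    select(undef, undef, undef, 0.2);   # brief pause so the parent blocks first
    kill 'USR1', getppid();
    exit 0;
}

my @ready = $sel->can_read(5);          # interrupted by USR1 -> empty list
waitpid $pid, 0;
print @ready ? "got handles\n" : "can_read returned nothing\n";
```

That empty list is exactly what makes `while (@read_ready = $sel->can_read)` evaluate to false and fall through to "Server ended."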

rsp
In the link you quote there is no mention of SIGUSR1. The link I posted (to the initial question I asked in the other post) does use SIGUSR1, but there must be a typo in it, because it doesn't work as described, and I'm not clear on why SIGUSR1 would prevent kill from actually ending the process.
Dr.Dredel
Kill can actually be used to send any signal (it just defaults to SIGTERM). Most signals can be caught and handled by the process receiving them, which does not directly cause the app to terminate (SIGKILL is an exception here).
Adam Luchjenbroers
OK, well, I'm obviously doing something *almost* right, but not quite. I've edited my post to show what I'm doing and what I'm seeing. If you can see what my error is, I'd appreciate the correction.
Dr.Dredel
If you use kill to send a USR1 signal, there is no default handler trying to end the process you send it to; with INT there is.
rsp
Thanks... before I close this, I'm hoping someone can identify what's causing the socket to close when I send the kill signal.
Dr.Dredel
+1  A: 

Although this may not be a part of your requirements, if you wanted to call multiple methods, you could set up a remote procedure call (RPC) interface. Something like Event::RPC, but that may be overkill.

Adam Flott
+3  A: 

Just to expand on the previous answers/comments. You can write a handler for SIGUSR1/2 to catch external signals and run corresponding subroutines in your script. All scriptB has to do is call "kill -SIGUSR[12] {pidof scriptA}" to invoke the handler in scriptA.

A sample signal handler would be something like -

#!/usr/bin/perl
use strict;
use warnings;

$SIG{USR1} = sub { print "Caught USR1\n"; };
$SIG{USR2} = sub { print "Caught USR2\n"; };

while (sleep 5) {}

Make sure that you do not have a die/exit call in your signal handling subroutines so that the program can continue uninterrupted after the call is handled.
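A tiny sketch of that flag-only pattern: the handler just records the signal, and the main loop decides when to act on it ($pending and $serviced are illustrative names, not from the original post):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $pending  = 0;   # incremented by the handler, drained by the main loop
my $serviced = 0;
$SIG{USR1} = sub { $pending++ };   # no die/exit in here

kill 'USR1', $$;    # stand-in for scriptB's kill command

# ...somewhere inside the main loop:
if ($pending) {
    $serviced++;
    $pending--;
    print "servicing deferred USR1\n";
}
```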

[EDIT] After taking a second look at the question, I think that Noufal's approach would be a better option, since scriptA already has a socket open. Why don't you get scriptB to communicate with scriptA over the socket and redirect the control flow based on your needs?

[EDIT2] I had a look at the underlying IO::Select code for can_read, and there is a call to select() to get the read-ready file descriptors. In your case, since the timeout value is undef, the select call blocks. According to the documentation on perldoc, whether select gets restarted after a signal handler runs is implementation dependent. In your case (and mine) the select() call is *not* restarted. Hence an empty array is returned to the while condition check, and the server exits.

A quick hack around this problem is:

while ((@read_ready = $sel->can_read) || 1) {
    next if @read_ready == 0;
    ...
}

This is the point where I'd consider scrapping everything and rethinking the design flow.

muteW
I may very well do that... but I'd still like to understand how the signaling methodology is meant to work.. I've spent so much time on it already, would be a shame to just scrap it without figuring it out
Dr.Dredel
By all means, I believe you should make the effort to understand how signals work. After you do so, you should definitely scrap it and use another IPC mechanism (i.e. named semaphores or sockets). You may encounter serious program errors given the non-reentrant nature of many system calls.
Adam
+4  A: 

I don't know if I'm misreading your question or if you're overlooking the obvious, but I wouldn't rely on signals for something like this.

I'd have scriptA listening to a socket and have scriptB send a message to scriptA (rather than a signal). If it receives the right message, it would write out the relevant data to all the clients connected to it.
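A rough sketch of the scriptB side, assuming scriptA keeps its existing listener. The port number and the "UPDATE_REQUEST" message are invented for illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;

# scriptB connects to the port scriptA already listens on and sends a
# one-line control message instead of a signal.
sub notify_server {
    my ($port, $message) = @_;
    my $sock = IO::Socket::INET->new(
        PeerAddr => '127.0.0.1',
        PeerPort => $port,
        Proto    => 'tcp',
    ) or die "Couldn't reach scriptA: $!";
    print $sock "$message\n";
    close $sock;
}
```

scriptA's can_read loop then sees the connection like any other client and can branch on the message text, the same way it already does for `<policy-file-request/>` and `READY`.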

Noufal Ibrahim
+1  A: 

By using signals, you could introduce some subtle bugs depending on what scriptA is doing when the signal is delivered. I would recommend, instead, that you add a check in the scriptA main loop that looks for communication from scriptB.

I would further recommend that this communication take the form of a named semaphore that both scripts open. scriptB can post the semaphore when appropriate, and scriptA can perform a non-blocking $sem->trywait() at some point in its loop. See POSIX::RT::Semaphore for more information on the exact syntax.

Using this method, scriptA gets to decide when it is appropriate to handle the instruction from scriptB, and you avoid all the nasty issues that can arise from subroutines being interrupted by asynchronous signal delivery.
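If POSIX::RT::Semaphore isn't available, the same post/trywait shape can be sketched with nothing but a pipe in core Perl. The function names here are made up, and a real setup would use a named FIFO shared between the two scripts rather than an in-process pipe:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Select;

# scriptB writes a byte to the pipe ("post"); scriptA polls it with a
# zero timeout ("trywait"), so the check never blocks its main loop.
pipe(my $ctl_r, my $ctl_w) or die "pipe: $!";
my $ctl = IO::Select->new($ctl_r);

sub post_control {                         # scriptB side
    syswrite $ctl_w, "1";
}

sub try_control {                          # scriptA side, called in its loop
    return 0 unless $ctl->can_read(0);     # zero timeout: non-blocking
    sysread $ctl_r, my $buf, 1;            # consume one "post"
    return 1;
}

post_control();
print try_control() ? "instruction pending\n" : "nothing to do\n";
```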

For more on signals, and the subtleties thereof, check out the Perl Cookbook and Programming Perl.

Adam