Is there a way to stop Python's multiprocessing module from calling and waiting on join() for the child processes of a parent process that is shutting down?

2010-02-18 10:58:34,750 INFO calling join() for process procRx1

I want the process to which I sent a SIGTERM to exit as quickly as possible (i.e. "fail fast") instead of waiting for several seconds before finally giving up on the join attempt.

Clarifications: I have a "central process" which creates a bunch of "child processes". I am looking for a way to cleanly process a "SIGTERM" signal from any process in order to bring down the whole process tree.

A: 

Have you tried explicitly using Process.terminate()?
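A minimal sketch of this idea: the central process installs a SIGTERM handler that calls Process.terminate() on each child instead of waiting on join(). The helper name terminate_children and the worker body are illustrative, not from the answer.

```python
import multiprocessing
import signal
import time

def worker():
    # Stand-in child loop; a real worker would service its queue here.
    while True:
        time.sleep(0.1)

def terminate_children(children):
    """Send SIGTERM to every child via Process.terminate(); no blocking join()."""
    for p in children:
        p.terminate()

if __name__ == "__main__":
    children = [multiprocessing.Process(target=worker) for _ in range(3)]
    for p in children:
        p.start()

    # Fail fast: on SIGTERM, tear down the children immediately.
    signal.signal(signal.SIGTERM,
                  lambda signum, frame: terminate_children(children))

    terminate_children(children)  # here triggered directly for demonstration
    for p in children:
        p.join()                  # returns quickly, the children are already dead
```

After terminate(), the subsequent join() calls return almost immediately, which is the "fail fast" behaviour the question asks for.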

Roberto Liffredo
I have accepted this answer as it is the closest to the scheme I have come up with.
jldupont
A: 

You could try joining in a loop with a timeout (1 second, say) and checking whether the process is still alive, something like:

while True:
  a_process.join(1)
  if not a_process.is_alive(): break

Terminating a_process will then trigger the break clause.
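A runnable sketch of that loop applied to a multiprocessing.Process; the helper name join_with_timeout is hypothetical, and the poll interval is an assumption:

```python
import multiprocessing
import time

def join_with_timeout(proc, poll_interval=1.0):
    """Join in short slices so the caller notices a dead process promptly."""
    while True:
        proc.join(poll_interval)
        if not proc.is_alive():
            break

if __name__ == "__main__":
    # A short-lived stand-in child; a real one would run until terminated.
    p = multiprocessing.Process(target=time.sleep, args=(0.2,))
    p.start()
    join_with_timeout(p, poll_interval=0.1)
```

The loop never blocks longer than one poll interval, so a terminated child is detected within that bound rather than after an indefinite join().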

Dave
Thanks for your time: I am not interested in waiting for `join`s to come through.
jldupont
A: 

Sounds like setting your subprocess' flag Process.daemon = True may be what you want:

Process.daemon:

The process’s daemon flag, a Boolean value. This must be set before start() is called.
The initial value is inherited from the creating process.
When a process exits, it attempts to terminate all of its daemonic child processes.

Note that a daemonic process is not allowed to create child processes. Otherwise a daemonic process would leave its children orphaned if it gets terminated when its parent process exits. Additionally, these are not Unix daemons or services; they are normal processes that will be terminated (and not joined) if non-daemonic processes have exited.
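A short sketch of the flag in use: daemon must be assigned before start(), and daemonic children are terminated (not joined) when the parent exits. The helper name spawn_daemonic_children is illustrative.

```python
import multiprocessing
import time

def child():
    # Daemonic child: terminated automatically when the parent process exits.
    while True:
        time.sleep(0.1)

def spawn_daemonic_children(n):
    """Start n children with daemon=True set before start()."""
    children = []
    for _ in range(n):
        p = multiprocessing.Process(target=child)
        p.daemon = True  # must be set before start(), per the docs quoted above
        p.start()
        children.append(p)
    return children

if __name__ == "__main__":
    children = spawn_daemonic_children(3)
    # When this parent exits, multiprocessing terminates these children for us.
```

This gives automatic teardown on parent exit, though as the docs note a daemonic process cannot itself create children, which limits it for deeper process trees.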

MikeyB
Thanks for your time: I have a bunch of child processes, hence your proposal is not adequate.
jldupont