I'm trying to create a program that starts a process pool of, say, 5 processes, performs some operation, and then quits, but leaves the 5 processes open. Later the user can run the program again, and instead of it starting new processes it uses the existing 5. Basically it's a producer-consumer model where:
- The number of producers varies.
- The number of consumers is constant.
- The producers can be started at different times by different programs or even different users.
I'm using the built-in multiprocessing
module, currently in Python 2.6.4, but with the intent to move to 3.1.1 eventually.
Here's a basic usage scenario:
- Beginning state - no processes running.
- User starts `program.py operation` - one producer, five consumers running.
- Operation completes - five consumers running.
- User starts `program.py operation` - one producer, five consumers running.
- User starts `program.py operation` - two producers, five consumers running.
- Operation completes - one producer, five consumers running.
- Operation completes - five consumers running.
- User starts `program.py stop` and it completes - no processes running.
- User starts `program.py start` and it completes - five consumers running.
- User starts `program.py operation` - one producer, five consumers running.
- Operation completes - five consumers running.
- User starts `program.py stop` and it completes - no processes running.
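For concreteness, here's the kind of plumbing I'm picturing for producers attaching to an already-running consumer pool, based on the `multiprocessing.managers` examples in the docs. The port number and authkey are placeholders I made up; this is a sketch, not something I've built the rest of the program around yet:

```python
from multiprocessing.managers import BaseManager
import queue  # named Queue in Python 2

# Two separate manager subclasses so server and client keep
# independent registries (register() copies the registry per subclass).
class QueueServer(BaseManager):
    pass

class QueueClient(BaseManager):
    pass

_work_queue = queue.Queue()
QueueServer.register('get_queue', callable=lambda: _work_queue)
QueueClient.register('get_queue')  # client side: proxy only, no callable

def serve_queue(port=50000, authkey=b'consumers'):
    """Run on the long-lived consumer side: publish the shared work queue."""
    manager = QueueServer(address=('127.0.0.1', port), authkey=authkey)
    server = manager.get_server()
    server.serve_forever()  # blocks; consumer workers would pull from _work_queue

def connect_queue(port=50000, authkey=b'consumers'):
    """Run in each producer: attach to the already-running queue server."""
    manager = QueueClient(address=('127.0.0.1', port), authkey=authkey)
    manager.connect()
    return manager.get_queue()  # proxy to the server's queue
```

A producer run would then just call `connect_queue()` and `put()` work items, regardless of which user or program started the consumers.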
The problem I have is that I don't know where to start on:
1. Detecting that the consumer processes are running.
2. Gaining access to them from a previously unrelated program.
3. Doing 1 and 2 in a cross-platform way.
Once I can do that, I know how to manage the processes. There must be some reliable way to detect existing processes, since Firefox does something similar to prevent multiple instances of itself from running, but I have no idea how to do that in Python.
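The closest thing I've found to the Firefox-style check is binding a well-known localhost port and treating "address already in use" as "already running", which should work on any platform with sockets. A sketch (the port number is an arbitrary placeholder):

```python
import errno
import socket

def pool_is_running(port=50001):
    """Return True if something is already bound to our well-known port.

    The idea: the consumer pool holds this port open for its lifetime,
    so a failed bind with EADDRINUSE means the pool is already up.
    """
    probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        probe.bind(('127.0.0.1', port))
    except socket.error as exc:  # OSError in Python 3
        if exc.errno == errno.EADDRINUSE:
            return True
        raise
    else:
        probe.close()  # port was free; release it immediately
        return False
```

Is something like this the usual approach, or is there a more idiomatic way (a lock file, a named mutex on Windows, etc.) to do it cross-platform?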