tags:

views:

144

answers:

3

Hello,

I'm trying to use Python to launch a command in multiple separate instances of a terminal simultaneously. What is the best way to do this? Right now I am using the subprocess module with Popen, which works for one command but not for multiple.

Thanks in advance.

Edit:

Here is what I am doing:

from subprocess import *

Popen('ant -Dport=' + str(5555) + ' -Dhost=' + GetIP() +
      ' -DhubURL=http://192.168.1.113:4444'
      ' -Denvironment=*firefox launch-remote-control'
      ' $HOME/selenium-grid-1.0.8', shell=True)

The problem for me is that this launches a Java process in the terminal which I want to keep running indefinitely. Secondly, I want to run a similar command multiple times in multiple different processes.

+1  A: 

This should stay open as long as the process is running. If you want to launch multiple instances simultaneously, just wrap the call in a thread.

Untested code, but you should get the general idea:


import threading
from subprocess import Popen

class PopenThread(threading.Thread):

    def __init__(self, port):
        threading.Thread.__init__(self)
        self.port = port

    def run(self):
        # GetIP() is the helper from the question
        Popen('ant -Dport=' + str(self.port) + ' -Dhost=' + GetIP() +
              ' -DhubURL=http://192.168.1.113:4444'
              ' -Denvironment=*firefox launch-remote-control'
              ' $HOME/selenium-grid-1.0.8', shell=True)

if '__main__' == __name__:
    PopenThread(5555).start()
    PopenThread(5556).start()
    PopenThread(5557).start()
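Worth noting: Popen returns as soon as the child is spawned, so threads are not strictly required; a plain loop works too. A minimal sketch, reusing the command from the question (the `host` argument stands in for the poster's own GetIP() helper):

```python
from subprocess import Popen

def launch_remote_controls(ports, host):
    # Popen does not block, so each call returns immediately and the
    # launched processes keep running side by side.
    command = ('ant -Dport=%d -Dhost=%s'
               ' -DhubURL=http://192.168.1.113:4444'
               ' -Denvironment=*firefox launch-remote-control'
               ' $HOME/selenium-grid-1.0.8')
    return [Popen(command % (port, host), shell=True) for port in ports]
```

Call it as e.g. launch_remote_controls([5555, 5556, 5557], GetIP()) and keep the returned Popen objects around if you later want to poll or terminate the processes.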

knitti
A: 

The simple answer I can come up with is to have Python use Popen to launch a shell script similar to:

gnome-terminal --window -e 'ant -Dport=5555 -Dhost=$IP1 -DhubURL=http://192.168.1.113:4444 -Denvironment=*firefox launch-remote-control $HOME/selenium-grid-1.0.8' &
disown
gnome-terminal --window -e 'ant -Dport=5555 -Dhost=$IP2 -DhubURL=http://192.168.1.113:4444 -Denvironment=*firefox launch-remote-control $HOME/selenium-grid-1.0.8' &
disown
# etc. ...
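Driving that script from Python is then a one-liner; a sketch, assuming the lines above are saved as launch_grid.sh (a hypothetical filename) next to the Python program:

```python
import subprocess

# bash backgrounds and disowns each gnome-terminal itself, so this
# call returns as soon as the script has finished launching them.
subprocess.call(['bash', 'launch_grid.sh'])
```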

There's a fully-Python way to do this, but it's ugly, only works on Unix-like OSes, and I don't have time to write the code out. Basically, subprocess.Popen doesn't support it because it assumes you want to either wait for the subprocess to finish, interact with the subprocess, or monitor the subprocess. It doesn't support the "just launch it and don't bother me with it ever again" case.

The way that's done in Unix-like OSes is to:

  • Use fork to spawn a subprocess
  • Have that subprocess fork a subprocess of its own
  • Have the grandchild process redirect I/O to /dev/null and then use one of the exec functions to launch the process you really want to start (might be able to use Popen for this part)
  • The child process exits.
  • Now there's no link between the grandparent and grandchild, so if the grandchild terminates you don't get a SIGCHLD signal, and if the grandparent terminates it doesn't kill all the grandchildren.

I might be off in the details, but that's the gist. Backgrounding (&) and disowning in bash are supposed to accomplish the same thing.
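A minimal sketch of that double-fork recipe using only the stdlib os module (the sleep command is just a stand-in for whatever you actually want to launch):

```python
import os

def spawn_detached(argv):
    """Launch argv fully detached, via the classic double fork."""
    pid = os.fork()
    if pid > 0:
        # Parent: reap the first child, which exits immediately,
        # so no zombie is left behind.
        os.waitpid(pid, 0)
        return
    # First child: start a new session, then fork again so the
    # grandchild can never reacquire a controlling terminal.
    os.setsid()
    if os.fork() > 0:
        os._exit(0)  # first child exits; grandchild is orphaned to init
    # Grandchild: point stdio at /dev/null, then exec the real program.
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):
        os.dup2(devnull, fd)
    os.execvp(argv[0], argv)

spawn_detached(['sleep', '1'])
```

After spawn_detached returns, the grandchild keeps running with no parent/child link back to the caller, which is exactly the "launch it and never hear about it again" behavior described above. Unix-only, as noted.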

Mike DeSimone
A: 

Here is a poor version of a blocking queue. You can fancify it with collections.deque or the like, or go even fancier with Twisted deferreds, or what not. Crummy parts include:

  • blocking
  • kill signals might not propagate down

season to taste!

import logging
basicConfig = dict(level=logging.INFO, format='%(process)s %(asctime)s %(lineno)s %(levelname)s %(name)s %(message)s')
logging.basicConfig(**basicConfig)
logger = logging.getLogger({"__main__":None}.get(__name__, __name__))

import subprocess
import time

def wait_all(list_of_Popens, sleep_time):
    """ blocking wait for all jobs to return.

    Args:
        list_of_Popens: list of possibly open jobs
        sleep_time: seconds to sleep between polls

    Returns:
        list_of_Popens: list of completed jobs

    Side Effect:
        blocks until all jobs complete.
    """
    jobs = list_of_Popens
    while None in [j.returncode for j in jobs]:
        for j in jobs:
            j.poll()
        logger.info("not all jobs complete, sleeping for %i", sleep_time)
        time.sleep(sleep_time)

    return jobs


jobs = [subprocess.Popen('sleep 1'.split()) for x in range(10)]
jobs = wait_all(jobs, 1)
Gregg Lind