I need to launch a number of long-running processes with subprocess.Popen, and would like to have the stdout and stderr from each automatically piped to separate log files. Each process will run simultaneously for several minutes, and I want two log files (stdout and stderr) PER process to be written to as the processes run.

Do I need to continually call p.communicate() on each process in a loop in order to update each log file, or is there some way to invoke the original Popen command so that stdout and stderr are automatically streamed to open file handles?

Thanks in advance.

+3  A: 

You can pass stdout and stderr as parameters to Popen()

subprocess.Popen(args, bufsize=0, executable=None, stdin=None, stdout=None,
                 stderr=None, preexec_fn=None, close_fds=False, shell=False,
                 cwd=None, env=None, universal_newlines=False, startupinfo=None,
                 creationflags=0)

For example

>>> import subprocess
>>> with open("stdout.txt","wb") as out:
...  with open("stderr.txt","wb") as err:
...   subprocess.Popen("ls",stdout=out,stderr=err)
... 
<subprocess.Popen object at 0xa3519ec>
>>> 
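Extending the same idea to the question's scenario of several simultaneous processes, each with its own pair of log files, might look like this sketch (commands and file names are illustrative, and the syntax is modern Python 3):

```python
import subprocess

# Launch several processes, each with its own stdout and stderr log
# file (the commands and file names here are illustrative).
procs = []
for i, cmd in enumerate([["echo", "first"], ["echo", "second"]]):
    out = open("proc%d.stdout.log" % i, "wb")
    err = open("proc%d.stderr.log" % i, "wb")
    # The OS writes the child's output straight into the files as it
    # runs; no communicate() loop is needed.
    procs.append((subprocess.Popen(cmd, stdout=out, stderr=err), out, err))

for p, out, err in procs:
    p.wait()
    out.close()
    err.close()
```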
gnibbler
+5  A: 

Per the docs,

stdin, stdout and stderr specify the executed program's standard input, standard output and standard error file handles, respectively. Valid values are PIPE, an existing file descriptor (a positive integer), an existing file object, and None.

So just pass the open-for-writing file objects as named arguments stdout= and stderr= and you should be fine!
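As the quoted docs note, an existing file descriptor (a positive integer) also works, not just a file object. A minimal sketch (the file name is illustrative; modern Python 3 syntax):

```python
import os
import subprocess

# Per the docs quote above, a raw file descriptor is a valid value
# for stdout/stderr, alongside file objects, PIPE, and None.
fd = os.open("fd_stdout.log", os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
p = subprocess.Popen(["echo", "hello"], stdout=fd)
p.wait()
os.close(fd)
```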

Alex Martelli
Thanks. I could have sworn I tried that before and got an error, but that's exactly what I was hoping would work.
mwendell
That doesn't work for me. I am simultaneously running two processes and saving the stdout and stderr from both into one log file. If the output gets too big, one of the subprocesses hangs; I don't know which. I can't use formatting in a comment, so I'll append an "answer" below.
jasper77
A: 

Hello, I need to do the same thing. Here is my sample code:

        logfile = open(path,'w+')
        self.proc = subprocess.Popen(self.args, shell=False, cwd=None, 
            stdout=logfile, stderr=logfile)

        print "self.proc.stdout", self.proc.stdout
        pob = select.poll()
        pob.register(self.proc.stdout),
        pob.register(self.proc.stderr)
        fdd = { self.proc.stdout.fileno(): self.proc.stdout ,
                self.proc.stderr.fileno(): self.proc.stderr }

but the print statement outputs `self.proc.stdout None`

and traceback as

Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/lib/python2.6/threading.py", line 525, in __bootstrap_inner
    self.run()
  File "bin/addons/base_quality_interrogation.py", line 139, in run
    pob.register(self.proc.stdout),
TypeError: argument must be an int, or have a fileno() method.

This is not working for me. Can you please help me?

Thanks

Naresh
This looks like it should work. What is the value of `self.proc.stdout` where the error is happening?
Noufal Ibrahim
A: 

Hello,

Thanks for your reply. Yes, as the print statement in my code shows, the value prints as None:

print "self.proc.stdout", self.proc.stdout

Output: self.proc.stdout None

Thanks

Naresh
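The `None` value above is consistent with how `Popen` behaves: `proc.stdout` and `proc.stderr` are readable file objects only when `subprocess.PIPE` is requested; with `stdout=logfile` (or the default `None`) they are `None`, which is what makes `pob.register(self.proc.stdout)` raise the `TypeError`. A minimal sketch of a variant that can be registered (modern Python 3 syntax; `select.poll()` is Unix-only):

```python
import select
import subprocess

# proc.stdout is a file object only when subprocess.PIPE is requested;
# with stdout=logfile it is None, hence the TypeError from
# pob.register(None) in the traceback above.
proc = subprocess.Popen(["echo", "hi"],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
pob = select.poll()
pob.register(proc.stdout)   # works: a PIPE stdout has a fileno()
pob.register(proc.stderr)
pob.poll(1000)              # wait up to one second for readable data
data = proc.stdout.read()
proc.wait()
```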
A: 

I am simultaneously running two subprocesses, and saving the output from both into a single log file. I have also built in a timeout to handle hung subprocesses. When the output gets too big, the timeout always triggers, and none of the stdout from either subprocess gets saved to the log file. The answer posted by Alex above does not solve it.

import os
import signal
import subprocess
import sys
import time

# Currently open log file.
log = None

# If we send stdout to subprocess.PIPE, the tests with lots of output fill up the pipe and
# make the script hang. So, write the subprocess's stdout directly to the log file.
def run(cmd, logfile):
   #print os.getcwd()
   #print ("Running test: %s" % cmd)
   global log
   p = subprocess.Popen(cmd, shell=True, universal_newlines = True, stderr=subprocess.STDOUT, stdout=logfile)
   log = logfile
   return p


# To make a subprocess capable of timing out
class Alarm(Exception):
   pass

def alarm_handler(signum, frame):
   log.flush()
   raise Alarm


####
## This function runs a given command with the given flags, and records the
## results in a log file. 
####
def runTest(cmd_path, flags, name):

  log = open(name, 'w')

  print >> log, "header"
  log.flush()

  cmd1_ret = run(cmd_path + "command1 " + flags, log)
  log.flush()
  cmd2_ret = run(cmd_path + "command2", log)
  #log.flush()
  sys.stdout.flush()

  start_timer = time.time()  # time how long this took to finish

  signal.signal(signal.SIGALRM, alarm_handler)
  signal.alarm(5)  #seconds

  try:
    cmd1_ret.communicate()

  except Alarm:
    print "myScript.py: Oops, taking too long!"
    kill_string = ("kill -9 %d" % cmd1_ret.pid)
    os.system(kill_string)
    kill_string = ("kill -9 %d" % cmd2_ret.pid)
    os.system(kill_string)
    #sys.exit()

  signal.alarm(0)  # cancel the pending alarm if communicate() returned in time

  end_timer = time.time()
  print >> log, "closing message"

  log.close()
jasper77
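One likely cause of the hang in the code above is that `command2` is never waited on, and with `shell=True` the `kill -9` hits the shell's pid rather than the actual child. A hedged sketch of an alternative (command strings illustrative; modern Python 3 syntax): give each subprocess its own log file, wait on both, and use `Popen.wait(timeout=...)` (Python 3.3+) instead of SIGALRM:

```python
import subprocess
import time

# Each subprocess gets its own log file, BOTH processes are waited on,
# and wait(timeout=...) replaces the SIGALRM-based timeout.
def run_with_timeout(cmds, timeout):
    procs = []
    for i, cmd in enumerate(cmds):
        log = open("cmd%d.log" % i, "wb")
        procs.append((subprocess.Popen(cmd, shell=True, stdout=log,
                                       stderr=subprocess.STDOUT), log))
    deadline = time.time() + timeout
    for p, log in procs:
        try:
            p.wait(timeout=max(deadline - time.time(), 0))
        except subprocess.TimeoutExpired:
            # Note: with shell=True this kills the shell, not
            # necessarily its children; a process group is needed
            # to kill the whole tree.
            p.kill()
            p.wait()
        log.close()
    return [p.returncode for p, _ in procs]

codes = run_with_timeout(["echo one", "echo two"], 5)
```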