Again, the same question. The reason is that I still can't make it work after reading the following:

http://stackoverflow.com/questions/1085071/real-time-intercepting-of-stdout-from-another-process-in-python

http://stackoverflow.com/questions/527197/intercepting-stdout-of-a-subprocess-while-it-is-running

http://stackoverflow.com/questions/874815/how-do-i-get-real-time-information-back-from-a-subprocess-popen-in-python-2-5

http://stackoverflow.com/questions/1606795/catching-stdout-in-realtime-from-subprocess

My case is that I have a console app written in C; let's take for example this code in a loop:

float tmp = 0.0f;
printf("\ninput>>");
scanf_s("%f", &tmp);
printf("\ninput was: %f", tmp);

It continuously reads some input and writes some output.

My python code to interact with it is the following:

import subprocess

p = subprocess.Popen([path], stdout=subprocess.PIPE, stdin=subprocess.PIPE)
p.stdin.write('12345\n')
for line in p.stdout:
    print(">>> " + str(line.rstrip()))
    p.stdout.flush()

So far, whenever I read from p.stdout it always waits until the process is terminated and then outputs an empty string. I've tried lots of stuff, but still the same result.

I tried Python 2.6 and 3.1, but the version doesn't matter - I just need to make it work somewhere.

Thanks in advance.

+1  A: 

Trying to write to and read from pipes to a sub-process is tricky because of the default buffering going on in both directions. It's extremely easy to get a deadlock where one or the other process (parent or child) is reading from an empty buffer, writing into a full buffer or doing a blocking read on a buffer that's awaiting data before the system libraries flush it.

For more modest amounts of data the Popen.communicate() method might be sufficient. However, for data that exceeds its buffering you'd probably get stalled processes (similar to what you're already seeing?).
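
A minimal sketch of that route, written in Python 2 style to match the question (path is the executable from the question). Note that communicate() waits for the child to exit, so it only fits a one-shot exchange, not a continuous request/response loop:

import subprocess

# Send all input up front, then collect everything the child printed
# once it terminates.
p = subprocess.Popen([path], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
out, _ = p.communicate('12345\n')
print(">>> " + out.strip())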

You might want to look for details on using the fcntl module and making one or the other (or both) of your file descriptors non-blocking. In that case, of course, you'll have to wrap all reads and/or writes to those file descriptors in exception handling for the "EWOULDBLOCK" condition. (I don't remember the exact Python exception that's raised for these.)
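
A rough sketch of that idea (POSIX only, so it won't help on Windows; path is again the executable from the question, and the "EWOULDBLOCK" case surfaces as an OSError carrying errno EAGAIN/EWOULDBLOCK):

import errno
import fcntl
import os
import subprocess
import time

p = subprocess.Popen([path], stdin=subprocess.PIPE, stdout=subprocess.PIPE)

# Mark the child's stdout non-blocking so reads return immediately.
fd = p.stdout.fileno()
flags = fcntl.fcntl(fd, fcntl.F_GETFL)
fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)

p.stdin.write('12345\n')
p.stdin.flush()

while p.poll() is None:
    try:
        chunk = os.read(fd, 4096)
        if chunk:
            print(">>> " + chunk.strip())
    except OSError as e:
        # No data available yet on the non-blocking descriptor.
        if e.errno not in (errno.EAGAIN, errno.EWOULDBLOCK):
            raise
        time.sleep(0.1)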

A completely different approach would be for your parent to use the select module and os.fork() ... and for the child process to execve() the target program after directly handling any file dup()ing. (Basically you'd be re-implementing parts of Popen() but with different parent file descriptor (PIPE) handling.)
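
A bare-bones sketch of that approach (POSIX only; path is the executable from the question, and error handling is omitted):

import os
import select

# Two pipes: one for the child's stdout, one for its stdin.
parent_r, child_w = os.pipe()
child_r, parent_w = os.pipe()

pid = os.fork()
if pid == 0:
    # Child: wire the pipe ends onto fd 0/1, close the originals, then exec.
    os.dup2(child_r, 0)
    os.dup2(child_w, 1)
    for fd in (parent_r, parent_w, child_r, child_w):
        os.close(fd)
    os.execv(path, [path])
else:
    # Parent: close the child's ends, feed it input, select() on its output.
    os.close(child_r)
    os.close(child_w)
    os.write(parent_w, '12345\n')
    while True:
        ready, _, _ = select.select([parent_r], [], [], 1.0)
        if parent_r in ready:
            data = os.read(parent_r, 4096)
            if not data:
                break          # EOF: the child has exited
            print(">>> " + data.strip())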

Jim Dennis
@Jim: Could you provide an example implementation? I just played a bit with the `Queue` module, but it doesn't work at all.
Philipp
A: 

Push reading from the pipe into a separate thread that signals when a chunk of output is available:

http://stackoverflow.com/questions/3076542/how-can-i-read-all-availably-data-from-subprocess-popen-stdout-non-blocking/3078292#3078292
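
Something along those lines (an untested sketch in Python 2 style, with path taken from the question): a reader thread blocks on the pipe and hands lines to the main thread through a Queue, so the main thread never blocks on the child.

import subprocess
import threading
from Queue import Queue, Empty   # 'queue' in Python 3

def enqueue_output(stream, queue):
    # Runs in a background thread, so blocking reads are fine here.
    for line in iter(stream.readline, ''):
        queue.put(line)
    stream.close()

p = subprocess.Popen([path], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
q = Queue()
t = threading.Thread(target=enqueue_output, args=(p.stdout, q))
t.daemon = True
t.start()

p.stdin.write('12345\n')
p.stdin.flush()

while p.poll() is None or not q.empty():
    try:
        print(">>> " + q.get(timeout=0.5).rstrip())
    except Empty:
        pass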

ddotsenko