I want to be able to call an external command in Python.
Thanks for your help!
import os
os.system("your command")
Note that this is dangerous, since the command isn't escaped or sanitized in any way. I leave it up to you to google for the relevant docs on the 'os' and 'sys' modules. There are a bunch of functions (exec*, spawn*) that will do similar things.
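For example, a minimal sketch of one of the spawn* variants (Unix only; os.spawnlp waits for the command and returns its exit code):
import os

# run 'ls -l' and wait for it; the return value is the command's exit code
exit_code = os.spawnlp(os.P_WAIT, 'ls', 'ls', '-l')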
import os
cmd = 'ls -al'
os.system(cmd)
If you want to get the output of the command back, you need os.popen:
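A minimal sketch (os.popen returns a file-like object, so read() gives you the command's standard output as a string):
import os

output = os.popen('ls -al').read()   # the command's stdout, as one string
print output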
Look at the subprocess module in the stdlib:
from subprocess import call
call(["ls", "-l"])
The advantage of subprocess over os.system is that it is more flexible (you can get the stdout, the stderr, the "real" status code, better error handling, etc.). I think os.system is deprecated, too, or will be:
http://www.python.org/doc/2.5/lib/node534.html
For quick/dirty/one time scripts, os.system is enough, though.
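For example, a rough sketch of getting the stdout, stderr and return code in one go (Python 2 style, like the rest of the examples here):
import subprocess

p = subprocess.Popen(["ls", "-l"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()           # collect the command's output and errors
print out
print "return code:", p.returncode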
I'd recommend using the subprocess module instead of os.system because, when you pass the arguments as a list, it doesn't go through the shell at all, so there is no escaping to worry about and it is much safer: http://docs.python.org/lib/module-subprocess.html
subprocess.call(['ping', 'localhost'])
http://www.python.org/doc/2.5/lib/module-subprocess.html
...or for a very simple command:
import os
os.system('cat testfile')
os.system has been superseded by the subprocess module. Use subprocess instead.
os.system is OK, but kind of dated. It's also not very secure. Instead, try subprocess. subprocess does not invoke the shell by default and is therefore more secure than os.system.
Get more information at http://docs.python.org/lib/module-subprocess.html
Here's a summary of the ways to call external programs and the advantages and disadvantages of each:
os.system("some_command with args")
passes the command and arguments to your system's shell. This is nice because you can actually run multiple commands at once in this manner and set up pipes and input/output redirection. For example,
os.system("some_command < input_file | another_command > output_file")
However, while this is convenient, you have to manually handle the escaping of shell characters such as spaces, etc. (there is a short sketch of this after the link below). On the other hand, this also lets you run commands which are simply shell commands and not actually external programs.
http://docs.python.org/lib/os-process.html
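As a minimal sketch of the escaping issue (the filename here is just a made-up example): with os.system you have to quote an argument containing a space yourself, while passing a list to subprocess.call sidesteps the problem entirely.
import os
import subprocess

filename = "my file.txt"            # note the space in the name

# with os.system you must quote the argument by hand:
os.system('cat "%s"' % filename)

# with subprocess and an argument list, no escaping is needed:
subprocess.call(["cat", filename])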
stream = os.popen("some_command with args")
will do the same thing as os.system
except that it gives you a file-like object that you can use to access standard input/output for that process. There are 3 other variants of popen (popen2, popen3 and popen4) that all handle the I/O slightly differently. If you pass everything as a string, then your command is passed to the shell; if you pass it as a list, then you don't need to worry about escaping anything.
http://docs.python.org/lib/os-newstreams.html
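As a rough sketch of one of those variants (Python 2 only; these functions were removed in Python 3), popen2 gives you both the child's stdin and stdout:
import os

# popen2 returns a (child_stdin, child_stdout) pair
child_stdin, child_stdout = os.popen2("grep hello")
child_stdin.write("hello world\ngoodbye\n")
child_stdin.close()                  # grep sees EOF and exits
print child_stdout.read(),           # prints the matching line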
The Popen class of the subprocess module. This is intended as a replacement for os.popen but has the downside of being slightly more complicated by virtue of being so comprehensive. For example, you'd say
from subprocess import Popen, PIPE
print Popen("echo Hello World", stdout=PIPE, shell=True).stdout.read()
instead of
print os.popen("echo Hello World").read()
but it is nice to have all of the options there in one unified class instead of 4 different popen functions.
http://docs.python.org/lib/node528.html
The call function from the subprocess module. This is basically just like the Popen class and takes all of the same arguments, but it simply waits until the command completes and gives you the return code. For example:
return_code = call("echo Hello World", shell=True)
http://docs.python.org/lib/node529.html
The os module also has all of the fork/exec/spawn functions that you'd have in a C program, but I don't recommend using them directly.
The subprocess module should probably be what you use.
I typically use:
import subprocess
p = subprocess.Popen('ls', shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
for line in p.stdout.readlines():
    print line,
retval = p.wait()
You are free to do what you want with the stdout data in the pipe. In fact, you can simply omit those parameters (stdout= and stderr=) and it'll behave like os.system().
There is another difference here which is not mentioned above.
subprocess.Popen executes the command as a subprocess. In my case, I need to execute a file that has to communicate with another program.
I tried subprocess and the execution was successful, but the two programs could not communicate with each other, even though everything is normal when I run both from the terminal.
One more: (NOTE: kwrite behaves differently from other apps. If you try the below with firefox, the results will not be the same.)
If you try os.system("kwrite"), program flow freezes until the user closes kwrite. To overcome that I tried os.system("konsole -e kwrite") instead. This time the program continued to flow, but kwrite became a subprocess of konsole.
Can anyone run kwrite so that it is not a subprocess (i.e. in the system monitor it must appear at the leftmost edge of the tree)?
Thanks
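(For what it's worth, a rough sketch of launching a program without blocking, using subprocess.Popen from the answers above; whether kwrite ends up re-parented depends on your desktop environment, so treat this only as a starting point.)
import subprocess

# Popen returns immediately instead of waiting for kwrite to close
p = subprocess.Popen(["kwrite"])
# ... the rest of the program keeps running here ...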
Some hints on detaching the child process from the calling one (starting the child process in background).
Suppose you want to start a long task from a CGI script; that is, the child process should live longer than the CGI script's own process.
The classical example from the subprocess module docs is:
import subprocess
import sys
# some code here
pid = subprocess.Popen([sys.executable, "longtask.py"]) # call subprocess
# some more code here
The idea here is that you do not want to wait in the line 'call subprocess' until the longtask.py is finished. But it is not clear what happens after the line 'some more code here' from the example.
My target platform was FreeBSD, but the development was on Windows, so I faced the problem on Windows first.
On Windows (XP), the parent process will not finish until longtask.py has finished its work. That is not what you want in a CGI script. The problem is not specific to Python; in the PHP community the problems are the same.
The solution is to pass the DETACHED_PROCESS flag to the underlying CreateProcess function in the Windows API. If you happen to have pywin32 installed, you can import the flag from the win32process module; otherwise you should define it yourself:
DETACHED_PROCESS = 0x00000008
pid = subprocess.Popen([sys.executable, "longtask.py"],
                       creationflags=DETACHED_PROCESS).pid
On FreeBSD we have another problem: when the parent process finishes, it finishes the child processes as well, and that is not what you want in a CGI script either. Some experiments showed that the problem seemed to be in sharing sys.stdout, and the following workaround worked:
pid = subprocess.Popen([sys.executable, "longtask.py"], stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
I have not checked the code on other platforms and do not know the reasons for the behaviour on FreeBSD. If anyone knows, please share your ideas; googling for starting background processes in Python does not shed any light yet.
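Putting the two workarounds together, roughly (a sketch only, combining the flag defined above with the FreeBSD pipe experiment; I have not verified it as a whole):
import subprocess
import sys

kwargs = {}
if sys.platform == "win32":
    DETACHED_PROCESS = 0x00000008            # same value as defined above
    kwargs["creationflags"] = DETACHED_PROCESS
else:
    # on FreeBSD, not sharing the std streams kept the child running
    kwargs["stdin"] = subprocess.PIPE
    kwargs["stdout"] = subprocess.PIPE
    kwargs["stderr"] = subprocess.PIPE

pid = subprocess.Popen([sys.executable, "longtask.py"], **kwargs).pid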
I have a Python script where I wanted to execute some command, so I used the approach above, and it works fine from the command line. But when I try to execute the same Python script from inside a PHP file, the command execution inside the Python script does not work.
I tried all of the above options and it is still not working :(
Help me out!
Hi, I am trying to run a script like
import os
output = os.system('php testfile.php')
testfile.php contains the code
<?php
print("test");
?>
But the output comes out as 0. How can I get the proper output?
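(A small sketch along the lines of the answers above: os.system only returns the command's exit status, which is why you get 0; os.popen captures what the command prints instead.)
import os

output = os.popen('php testfile.php').read()   # capture the printed output
print output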
Check "pexpect" python library, too. It allows for interactive controlling of external programs/commands, even ssh, ftp, telnet etc. You can just type something like:
child = pexpect.spawn('ftp 192.168.0.24')
child.expect('(?i)name .*: ')
child.sendline('anonymous')
child.expect('(?i)password')