views:

62

answers:

2

Hi all,
I'm developing an application in Python on Ubuntu, and I'm running external binaries from within Python using subprocess. Since these binaries are generated at run time and can go rogue, I need to keep a strict tab on their memory footprint and runtime. Is there some way I can limit or monitor the memory usage of these binary programs at runtime? I would really hate to use something like "ps" in subprocess for this purpose.

+1  A: 

Having a PID number of your subprocess you can read all info from proc file-system. Use:

/proc/[PID]/smaps (since Linux 2.6.14) This file shows memory consumption for each of the process's mappings; each mapping gets its own series of detail lines.

or

/proc/[PID]/statm Provides information about memory usage, measured in pages.
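A minimal sketch of the statm approach from Python (Linux-only; field layout per proc(5), the helper name is just illustrative):

```python
import os
import subprocess

def rss_kib(pid):
    """Return the resident set size of a process in KiB,
    read from /proc/[pid]/statm (values are in pages)."""
    with open("/proc/%d/statm" % pid) as f:
        # statm fields: size resident shared text lib data dt
        resident_pages = int(f.read().split()[1])
    return resident_pages * os.sysconf("SC_PAGE_SIZE") // 1024

proc = subprocess.Popen(["sleep", "1"])
print(rss_kib(proc.pid))  # poll this in a loop while the child runs
proc.wait()
```

You would call rss_kib periodically while the child is alive and kill it if it crosses your threshold.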

Alternatively, you can limit the resources a subprocess can acquire with:

subprocess.Popen('ulimit -v 1024; ls', shell=True)

When the given virtual memory limit is reached, the process fails with an out-of-memory error.

gertas
I need to implement this from within Python. So I would need to fork a child process and simultaneously run a while loop that keeps monitoring the /proc/(pid)/smaps file. Isn't there some other way, e.g. can I allocate fixed memory for a child subprocess? I was hoping to use Python's memory manager somehow.
Neo
Dang, never thought of this. Yep, this should work just fine for me.
Neo
A: 

You can use Python's resource module to set limits before spawning your subprocess.

For monitoring, resource.getrusage() will give you summarized information over all your subprocesses; if you want per-subprocess information, you can do the /proc trick from the other answer (non-portable but effective), or layer a Python program in between every subprocess and figure out some communication (portable, ugly, mildly effective).
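As a sketch of the setlimit-before-spawn idea (the 256 MiB cap is an arbitrary example; preexec_fn runs after fork but before exec, so the limit applies only to the child):

```python
import resource
import subprocess

def limit_memory():
    # Cap the child's virtual address space at 256 MiB.
    soft = hard = 256 * 1024 * 1024
    resource.setrlimit(resource.RLIMIT_AS, (soft, hard))

proc = subprocess.Popen(["ls"], preexec_fn=limit_memory,
                        stdout=subprocess.PIPE)
out, _ = proc.communicate()

# Summarized usage over all reaped children so far:
usage = resource.getrusage(resource.RUSAGE_CHILDREN)
print(usage.ru_maxrss)  # peak RSS; in KiB on Linux
```

A child that tries to allocate past the cap gets a failed malloc/MemoryError instead of eating the machine.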

Habbie
Thanks Habbie, that's exactly what I needed.
Neo