I'm getting a "too many open files" error in a program that is supposed to run for a long time. Is there any way I can keep track of which files are open, so I can print that list out occasionally and see where the problem is?

+1  A: 

I'd guess that you are leaking file descriptors. You probably want to look through your code to make sure that you are closing all of the files that you open.

Adam Crossland
I figured that's what the problem was. However, the code is very complex, and this would be an easy way to immediately spot which files aren't being closed.
Claudiu
+2  A: 

On Windows, you can use Process Explorer to show all file handles owned by a process.

interjay
+3  A: 

On Linux, you can use lsof to show all files opened by a process.

eduffy
+4  A: 

I ended up wrapping the built-in file object at the entry point of my program. I found out that I wasn't closing my loggers.

import __builtin__

openfiles = set()
oldfile = __builtin__.file

class newfile(oldfile):
    def __init__(self, *args):
        self.x = args[0]
        print "### OPENING %s ###" % str(self.x)
        oldfile.__init__(self, *args)
        openfiles.add(self)

    def close(self):
        print "### CLOSING %s ###" % str(self.x)
        oldfile.close(self)
        openfiles.remove(self)

oldopen = __builtin__.open
def newopen(*args):
    return newfile(*args)

# install the wrappers so every open() is tracked
__builtin__.file = newfile
__builtin__.open = newopen

def printOpenFiles():
    print "### %d OPEN FILES: [%s]" % (len(openfiles), ", ".join(f.x for f in openfiles))
Claudiu
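Note that this is Python 2 code; the built-in `file` type is gone in Python 3, so it can't be subclassed there. A hedged Python 3 sketch of the same idea, wrapping `builtins.open` with a delegating proxy (the `TrackedFile` and `tracked_open` names are my own):

```python
import builtins

openfiles = set()
_orig_open = builtins.open

class TrackedFile:
    """Thin proxy that records open/close; forwards everything else."""
    def __init__(self, f):
        self._f = f
        openfiles.add(self)

    def close(self):
        openfiles.discard(self)  # discard: safe if close() is called twice
        self._f.close()

    # support the with-statement and iteration explicitly,
    # since dunder lookups bypass __getattr__
    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.close()

    def __iter__(self):
        return iter(self._f)

    def __getattr__(self, name):
        return getattr(self._f, name)

def tracked_open(*args, **kwargs):
    return TrackedFile(_orig_open(*args, **kwargs))

builtins.open = tracked_open  # install the wrapper

def print_open_files():
    print("### %d OPEN FILES: [%s]"
          % (len(openfiles), ", ".join(f.name for f in openfiles)))
```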
+3  A: 

On Linux, you can look at the contents of /proc/self/fd:

$ ls -l /proc/self/fd/
total 0
lrwx------ 1 foo users 64 Jan  7 15:15 0 -> /dev/pts/3
lrwx------ 1 foo users 64 Jan  7 15:15 1 -> /dev/pts/3
lrwx------ 1 foo users 64 Jan  7 15:15 2 -> /dev/pts/3
lr-x------ 1 foo users 64 Jan  7 15:15 3 -> /proc/9527/fd
Mike DeSimone
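The same listing can be read from inside the program; a minimal Linux-only sketch (the `list_open_fds` helper is my own naming):

```python
import os

def list_open_fds():
    """Map each open file descriptor to its target path (Linux /proc only)."""
    fd_dir = "/proc/self/fd"
    fds = {}
    for entry in os.listdir(fd_dir):
        try:
            fds[int(entry)] = os.readlink(os.path.join(fd_dir, entry))
        except OSError:
            # the descriptor used to read fd_dir itself vanishes mid-walk
            pass
    return fds

for fd, target in sorted(list_open_fds().items()):
    print("%d -> %s" % (fd, target))
```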