views: 188
answers: 1
I have a long-running process that monitors the system and prints periodic logs. If I let it run for longer than 10-15 minutes, it exits with the message: Too many open files.

The program is set up with timer_create() and timer_settime() on a real-time clock, raising SIGUSR1 every 2 seconds. In the handler there is one fork()-exec() in the child, a wait() in the parent, and then mmap() and stream operations on /proc/acpi/battery/state, /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq, and scaling_setspeed. I have taken care to fclose() the FILE * streams in the periodic signal handler and at all other places, and I have also ensured munmap() of all the mapped files.
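In case it helps, here is a minimal sketch of the structure (not my actual program: the child command, the handler body, and error handling are simplified, and the mmap() path is omitted):

/* sketch.c - compile with: gcc sketch.c -lrt */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

/* Read one line from a /proc or /sys file; every fopen() is paired
   with an fclose() so no stream descriptor should leak. */
static void read_line(const char *path)
{
    FILE *fp = fopen(path, "r");
    if (fp == NULL)
        return;
    char buf[128];
    if (fgets(buf, sizeof buf, fp) != NULL)
        printf("%s: %s", path, buf);
    fclose(fp);
}

static void handler(int sig)
{
    (void)sig;
    pid_t pid = fork();
    if (pid == 0) {                        /* child: exec a helper */
        execlp("true", "true", (char *)NULL);
        _exit(127);
    }
    if (pid > 0)
        waitpid(pid, NULL, 0);             /* parent: wait for the child */
    read_line("/proc/acpi/battery/state");
    read_line("/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq");
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = handler;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGUSR1, &sa, NULL);

    /* real-time timer delivering SIGUSR1 every 2 seconds */
    struct sigevent sev;
    memset(&sev, 0, sizeof sev);
    sev.sigev_notify = SIGEV_SIGNAL;
    sev.sigev_signo = SIGUSR1;
    timer_t timerid;
    if (timer_create(CLOCK_REALTIME, &sev, &timerid) == -1) {
        perror("timer_create");
        return 1;
    }
    struct itimerspec its = { .it_interval = {2, 0}, .it_value = {2, 0} };
    timer_settime(timerid, 0, &its, NULL);

    for (;;)
        pause();
}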

How can I get around this? Should I increase the system-wide maximum in /proc/sys/fs/file-max, or the per-process open files limit shown by ulimit -aS? And why is this happening at all if I am closing every FILE * with fclose()?
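For reference, I know the soft limit could be raised for the shell (a bash builtin, capped by the hard limit from ulimit -Hn):

#ulimit -Sn 4096

But I would rather find the real cause than just raise the limit.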

Here are the values for my system as of now:

#cat /proc/sys/fs/file-max
152808

#ulimit -aS
...
open files   (-n) 1024
+3  A: 

Use lsof or a debugger to find out which files your process has open. Increasing the limit will just delay the point at which you run out of descriptors.
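For example, assuming you know the process ID:

#lsof -p <pid>

On Linux you can also list a process's open descriptors directly:

#ls -l /proc/<pid>/fd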

bobmcn
Thank you. I was not closing the /sys/devices/system/cpu/cpu0/cpufreq/scaling_setspeed file. This was clearly shown by running lsof -p <pid>.
Dhruv