The default limit for the maximum number of open files on Mac OS X is 256 (ulimit -n), and my application needs about 400 file handles.

I tried to change the limit with setrlimit(), but even though the function executes correctly, I'm still limited to 256.

Here is the test program I use:

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
  struct rlimit rlp;

  FILE *fp[10000];
  int i;

  getrlimit(RLIMIT_NOFILE, &rlp);
  printf("before %d %d\n", (int)rlp.rlim_cur, (int)rlp.rlim_max);

  /* raise only the soft limit; the hard limit is left untouched */
  rlp.rlim_cur = 10000;
  setrlimit(RLIMIT_NOFILE, &rlp);

  getrlimit(RLIMIT_NOFILE, &rlp);
  printf("after %d %d\n", (int)rlp.rlim_cur, (int)rlp.rlim_max);

  /* keep opening files until fopen() fails */
  for (i = 0; i < 10000; i++) {
    fp[i] = fopen("a.out", "r");
    if (fp[i] == NULL) { printf("failed after %d\n", i); break; }
  }

  return 0;
}

and the output is:

before 256 -1
after 10000 -1
failed after 253

I cannot ask the people who use my application to poke inside a /etc file or something. I need the application to do it by itself.

A: 

I know this may sound like a silly question, but do you really need 400 files open at the same time? By the way, are you running this code as root?

thesp0nge
Yes, I need 400 files open at the same time, and no, I'm not running as root. As the man page says, since I don't change the max limit, just the cur limit, I don't have to be root.
acemtp
But wouldn't the max limit cap the cur limit?
nategoose
+3  A: 

rlp.rlim_cur = 10000;

Two things.

1st: apparently you have found a bug in Mac OS X's stdio. If I fix up your program, add error handling, and replace fopen() with the open() syscall, I can easily reach the limit of 10000 (which is 240 fds below the OPEN_MAX limit of 10240 on my 10.6.3 system).

2nd: RTFM, man setrlimit. The case of max open files has to be treated specially with respect to OPEN_MAX.
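
To make both points concrete, here is a minimal sketch: clamp the requested soft limit to OPEN_MAX, as the setrlimit man page advises, and exhaust descriptors with open() instead of fopen(). Treat it as an illustration of the approach rather than the exact program I ran.

#include <stdio.h>        /* perror, printf */
#include <fcntl.h>        /* open, O_RDONLY */
#include <limits.h>       /* OPEN_MAX */
#include <sys/resource.h> /* getrlimit, setrlimit */

int main(void)
{
  struct rlimit rlp;
  int i;

  getrlimit(RLIMIT_NOFILE, &rlp);

  /* Mac OS X rejects RLIMIT_NOFILE soft limits above OPEN_MAX,
     so request min(OPEN_MAX, rlim_max) as the man page suggests */
  rlp.rlim_cur = (rlp.rlim_max > OPEN_MAX) ? OPEN_MAX : rlp.rlim_max;
  if (setrlimit(RLIMIT_NOFILE, &rlp) != 0)
    perror("setrlimit");

  /* deliberately leak descriptors until open() fails */
  for (i = 0; i < 10000; i++) {
    if (open("a.out", O_RDONLY) < 0) {
      printf("failed after %d\n", i);
      break;
    }
  }
  return 0;
}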

Dummy00001
Thanks for the answer. Are you serious when you say it could be a bug in stdio on Mac OS X, or is it a joke? So the only solution is to use the syscall instead of the standard C function?
acemtp
@acemtp: limitation is probably a better word. The standard only requires libc to guarantee that you can open 8 files at a time (including `stdin`/`stdout`/`stderr`!). It would be an unusual limitation, but not unheard of.
Evan Teran
@acemtp, @evan: well, stdio on Linux has no problems coping with whatever I throw at it, and I personally would qualify this as a bug. 8 files at once? stdin, stdout, stderr: 3 are busy already. An application log file plus a trace file leaves only 3 free. Silly, and a bug, if you ask me.
Dummy00001
+2  A: 

This may be a hard limitation of your libc. Some versions of Solaris have a similar limitation because they store the fd as an unsigned char in the FILE struct. If this is the case for your libc as well, you may not be able to do what you want.

As far as I know, things like setrlimit only affect how many files you can open with open (fopen is almost certainly implemented in terms of open). So if this limitation is at the libc level, you will need an alternate solution.

Of course, you could always avoid fopen and instead use the open system call, which is available on just about every variant of Unix.

The downside is that you have to use write and read instead of fwrite and fread, and they don't do things like buffering (that's all done by your libc, not by the OS itself). So it could end up being a performance bottleneck; see the sketch below.
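
For illustration, here is what the unbuffered style looks like; the filename and buffer size are arbitrary, and in real code you would do your own buffering on top of this:

#include <fcntl.h>   /* open */
#include <unistd.h>  /* read, close */
#include <stdio.h>   /* printf, just for the demo */

int main(void)
{
  int fd = open("a.out", O_RDONLY);  /* any readable file will do */
  if (fd < 0)
    return 1;

  char buf[4096];  /* caller-managed buffer: no libc buffering here */
  ssize_t n;
  long total = 0;
  while ((n = read(fd, buf, sizeof buf)) > 0)
    total += n;  /* process the n bytes in buf here */

  close(fd);
  printf("read %ld bytes\n", total);
  return 0;
}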

Can you describe the scenario that requires 400 files open **simultaneously**? I am not saying there is no case where that is needed, but if you describe your use case more clearly, perhaps we can recommend a better solution.

Evan Teran
libc limit: yes, see my comment. Changing the program to use open() instead of fopen() fixes the problem. On Linux, by the way, it works like a charm after the obvious fix of replacing the 10000 with rlp.rlim_max (though on Mac OS X even that differs, since the OPEN_MAX cap has to be checked too). As for a scenario where you need 400 fds: I maintain a specialized network server which also backs data up to disk. Seeing 2K sockets and open files in use isn't uncommon.
Dummy00001
@Dummy00001: ok, that is certainly **a** scenario, but having acemtp describe exactly what he is trying to do could still help :-P. In any case, it looks like we have found the nature of the problem.
Evan Teran
@Evan: thanks for clarifying the origin.
Dummy00001
A: 

etresoft found the answer on the Apple discussion board:

The whole problem here is your printf() function. When you call printf(), you initialize internal data structures to a certain size. Then you call setrlimit() to try to adjust those sizes. That call fails because you have already been using those internal structures with your printf(). If you use two rlimit structures (one for before and one for after), and don't print them until after calling setrlimit(), you will find that you can change the limits of the current process even in a command-line program. The maximum value is 10240.
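
A minimal sketch of that reordering, following the quoted advice (the only functional change from my original test is that no stdio call happens before setrlimit()):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
  struct rlimit before, after;
  FILE *fp[10000];
  int i;

  /* no stdio call may happen before setrlimit() */
  getrlimit(RLIMIT_NOFILE, &before);
  after = before;
  after.rlim_cur = 10000;
  setrlimit(RLIMIT_NOFILE, &after);
  getrlimit(RLIMIT_NOFILE, &after);

  /* only now is it safe to touch stdio */
  printf("before %d %d\n", (int)before.rlim_cur, (int)before.rlim_max);
  printf("after %d %d\n", (int)after.rlim_cur, (int)after.rlim_max);

  for (i = 0; i < 10000; i++) {
    fp[i] = fopen("a.out", "r");
    if (fp[i] == NULL) { printf("failed after %d\n", i); break; }
  }
  return 0;
}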

acemtp