views: 656 · answers: 7
Hi,

My code opens more than 256 file handles, so when I run it on Solaris machines it fails with an "exceeding file handles limit" error.

I have two questions about this:

1) Does this limit apply only to 32-bit software, or do 64-bit programs suffer from it too? I googled and found that 64-bit software is not supposed to have this limit (http://developers.sun.com/solaris/articles/stdio_256.html), but I built a 64-bit binary and it still gives the error. What does "64-bit software" actually mean?

2) As described in the link above, I used ulimit to raise the file-handle limit at run time (just before running the command), exported the extendedFILE library, and now I get no error. What do we have to do in the case of Linux?

Thanks,
D. L. Kumar

+1  A: 

To check if an object file (executable) is 64-bit, use the file command (at least on Linux).

For example:

$ file `which ls`
/bin/ls: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), for GNU/Linux 2.6.9, dynamically linked (uses shared libs), stripped

$ file my-32bit-exe
my-32bit-exe: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), corrupted section header size

(Don't mind the "corrupted section header size" -- exe was manually mangled to reduce filesize).

ulimit can be used on Linux (see ulimit(1) and ulimit(3)).
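For example, a quick shell sketch on Linux (the value 1024 is illustrative; an unprivileged user can only raise the soft limit up to the hard limit shown by `ulimit -Hn`):

```shell
# Show the current soft limit on open file descriptors
ulimit -n
# Raise it for this shell and its children
ulimit -n 1024
ulimit -n
```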

strager
+3  A: 

I've encountered this before. As far as I can tell, it is actually a limitation in Solaris's libc, where an 8-bit unsigned integer is used to store the fd in the FILE struct. Apparently they never changed it, in the name of backwards compatibility (in case a program somehow depended on the implementation details of the FILE struct). This should NOT be an issue on Linux or any other non-Solaris *nix. The article you cited suggests reasonable workarounds, so you should use those.

As for "what is a 64-bit executable": it is simply a binary that has been compiled for a 64-bit instruction set. Some architectures support both widths, some don't. (For example, x86-64 OSes typically still allow 32-bit processes for backwards compatibility.)

Evan Teran
+1  A: 

On Solaris, you build 64-bit programs using either:

cc -xarch=v9 ...

Or:

gcc -m64 ...

As Evan said, the fundamental problem for 32-bit Solaris is backwards binary compatibility and an 8-bit integer used to hold the fd.

I just tried this code on Solaris 10 for SPARC:

#include <stdio.h>

int main(void)
{
    size_t i;
    for (i = 0; i < 300; i++)
    {
        FILE *fp = fopen("/dev/null", "a");
        if (fp == 0)
        {
            printf("Failed on %zu\n", i);
            return(1);
        }
    }
    printf("Succeeded to %zu\n", i);
    return(0);
}

Compiled as:

cc -xarch=v9 -o xxx xxx.c

And it gave me 'Failed on 253'. (It's test code: I know it throws away every FILE pointer it opens.) This supports your contention that a simple 64-bit build is not enough on its own. However, there's another factor at play: the resource limit.

$ ulimit -n
256
$

So, increasing the default limit with:

$ ulimit -n 400
$ ulimit -n
400
$ ./xxx
Succeeded to 300
$

Try that...

Jonathan Leffler
A: 

Like Evan Teran mentioned, Solaris libc has this "odd" limitation on FILE: it can only handle file descriptors below 256.

That stdio limit applies regardless of the limit you set with ulimit. You can, however, raise the descriptor limit from within your program with:

#include <sys/resource.h>

struct rlimit rl;
if (getrlimit(RLIMIT_NOFILE, &rl) == 0) {
    rl.rlim_cur = 1024; /* raise the soft limit; must stay <= rl.rlim_max */
    setrlimit(RLIMIT_NOFILE, &rl);
}

Now, I would also stop using FILE* and use open instead of fopen, etc. For the cases where you really, really need a FILE*, on several projects I worked on we "reserved" several file descriptors at program start by making socket calls. A small library then handed out a FILE* by closing one of those sockets and immediately calling fopen, which reuses the just-freed low fd. Of course, you also need to close the FILE* with a matching function that calls fclose and then immediately re-reserves the fd with another socket call ;-)

njsf
A: 

Thanks a lot, everybody. I will try these things now.

Hi njsf, open() was giving me some problems with very large files, so I started using FILE*.

(BTW, this forum is very good and helpful. Looking forward to contributing.) Thanks, D. L. Kumar

A: 

Hi, it seems to me that the only solutions are ulimit and opening the file with the F flag. njsf, I tried using your code but it did not work. Do I need to do anything more?

Thanks D. L. Kumar

A: 

Hi, finally I got the solution. I made two changes in my code to make it work:

1) raising the limit with setrlimit, as suggested above by njsf

2) opening files with the "F" flag, as follows: FILE *fp = fopen("/dev/null", "wF");

Thanks a lot. D. L. Kumar