At least on 32-bit Solaris, there is a limit of 256 file pointers because the FILE structure stores the file descriptor in an unsigned char field. This is retained for backwards compatibility with some almost impossibly old versions of SunOS. Other platforms - I'm tempted to say most other platforms - do not share that limitation. On the other hand, it is relatively unusual for an ordinary user program to need that many files open concurrently; more often than not, it indicates a bug (files not being closed when the program is finished with them). Having said that, though, it can be a genuine problem for things like database servers, which need to keep a lot of data files open at the same time.
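If you want to see the limit in action, here is a minimal sketch (my own illustration, not from the question): it burns descriptors up through 255 with open() and then tries fopen(). On a 32-bit Solaris build the fopen() is expected to fail; on most other platforms it succeeds.

    #include <fcntl.h>
    #include <stdio.h>

    int main(void)
    {
        int fd;

        /* Keep opening /dev/null until descriptors 0..255 are all in use */
        while ((fd = open("/dev/null", O_RDONLY)) != -1 && fd < 255)
            ;

        /* On 32-bit Solaris this fopen() fails even though open() still
           works, because stdio cannot record a descriptor above 255 */
        FILE *fp = fopen("/dev/null", "r");
        if (fp == NULL)
            perror("fopen");
        else
            printf("fopen succeeded - fileno = %d\n", fileno(fp));
        return 0;
    }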
One comment says:
That's almost it. We don't have a large number of files open, but the server handles a large number of connections from clients. Socket handles and file descriptors seem to come from the same place. When we have a lot of connections, 'fopen' fails because the system-level call returns an fd > 255.
'Socket handles' are file descriptors at the system call level, so they come from the same place as regular file descriptors for files.
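If you want to convince yourself of that, a quick throwaway program like this one (again, my own illustration) shows open() and socket() drawing from the same pool of numbers:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* Both calls allocate from the same descriptor table, so the
           two numbers printed come out consecutive (typically 3 and 4) */
        int file_fd = open("/dev/null", O_RDONLY);
        int sock_fd = socket(AF_INET, SOCK_STREAM, 0);

        printf("file fd = %d, socket fd = %d\n", file_fd, sock_fd);

        close(sock_fd);
        close(file_fd);
        return 0;
    }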
If you have to work around this, then you need to wrap your current socket-opening code so that if it gets a file descriptor in the range 0..255, it calls 'dup2()' to create a file descriptor in the range that stdio won't use - and then closes the original file descriptor. The only snag with this is that you have to keep track of which file descriptors are available, because 'dup2()' will merrily close the target file descriptor if it is currently open.
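A sketch of that workaround might look like the following. I've used 'fcntl()' with F_DUPFD rather than raw 'dup2()', because F_DUPFD duplicates onto the lowest free descriptor at or above the given minimum, so the kernel does the free-slot hunting for you and the bookkeeping snag goes away. The name 'move_fd_up()' is just one I've invented for illustration:

    #include <fcntl.h>
    #include <unistd.h>

    /* Move a descriptor out of the 0..255 range that 32-bit Solaris
       stdio needs, returning the new descriptor (or the original one
       if it was already high enough), or -1 with errno set on failure */
    static int move_fd_up(int fd)
    {
        if (fd < 0 || fd >= 256)
            return fd;

        /* F_DUPFD duplicates fd onto the lowest free descriptor numbered
           256 or above - unlike dup2(), it never clobbers an open one */
        int new_fd = fcntl(fd, F_DUPFD, 256);
        if (new_fd == -1)
            return -1;

        close(fd);      /* release the low-numbered descriptor */
        return new_fd;
    }

You'd call it immediately after 'socket()' or 'accept()' returns - something like 'client_fd = move_fd_up(client_fd);' - so the low-numbered descriptors stay free for 'fopen()'.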
Of course, I'm assuming your socket code uses file descriptors and not file pointers. If it uses file pointers, you have bigger problems - too many things want to use the same low-numbered resources and they can't all use them at the same time.