A: 

Just Googling around, I've found a couple of discussions on the topic for different distributions. Perhaps one of them will point you in the right direction:

How to increase file descriptors max limit on Linux

File Descriptors vs Linux Performance
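For reference, here is a quick sketch of the knobs those threads discuss. The names are standard on Linux; the commented-out values are illustrative, not recommendations:

```shell
# Per-process limit on open file descriptors (soft and hard)
ulimit -Sn
ulimit -Hn

# System-wide ceiling on open files (Linux)
cat /proc/sys/fs/file-max

# Raising them (raising the hard limit or fs.file-max needs root;
# persistent per-user limits go in /etc/security/limits.conf):
#   ulimit -n 65536
#   sysctl -w fs.file-max=2097152
```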

Don Wakefield
+4  A: 

Please see the C10K problem page. It contains an in-depth discussion of how to achieve the '10000 simultaneous connections' goal while maintaining high performance and still serving each client.

It also contains information on how to increase the performance of your kernel when handling a large number of connections at once.
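A few of the kernel parameters that come up in that kind of tuning can be inspected like this (Linux sysctl names; which ones matter for your workload is covered on the C10K page):

```shell
# Maximum backlog of pending connections for listen() sockets
cat /proc/sys/net/core/somaxconn

# Range of ephemeral ports available for outgoing connections
cat /proc/sys/net/ipv4/ip_local_port_range

# Changing one at runtime requires root, e.g.:
#   sysctl -w net.core.somaxconn=1024
```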

ASk
+2  A: 

Thanks for all your answers, but I think I've found the culprit. After redefining __FD_SETSIZE in my program, everything started to move a lot faster. Of course ulimit also needs to be raised, but without __FD_SETSIZE my program never takes advantage of it.

Andrioid
Using an FD_SET with fd's beyond __FD_SETSIZE causes data that happens to be after the FD_SET to be overwritten, which can cause plenty of hard-to-debug grief. I am a little curious why you are using an FD_SET with epoll (it would make sense for select() or poll()...)
Lance Richardson
Because it made a difference. Using httperf to bomb my server, it stalled at 1000/s without my __FD_SETSIZE change; it currently stalls at 5000/s, with httperf complaining about fd-unavailable. What exactly happens in the program when I get too many connections is that the epoll listen file descriptor just stops getting events. Nothing locks up, and my event loop still runs, which points me towards the operating system or some underlying library causing the limit.
Andrioid
It's not necessarily epoll that is failing to receive the file descriptors fast enough. It is quite possible that httperf (which uses select) is causing this limitation. I'll update my answer when I know more.
Andrioid