views: 26

answers: 4

Hi,

in our application we read static resources (e.g. JavaScript files) from a JAR file located in WEB-INF/lib. It happens that the server quits working with a "too many open files" exception. I found out (using lsof) that the JAR file is opened several times, and the count goes up by the number of JavaScript files on the page every time I reload a page. I tried the following things (applied roughly as in the sketch below) without positive result:

  • URLConnection setDefaultUseCaches(false)
  • URLConnection setUseCaches(false)
  • context.xml cachingAllowed="false"
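
Roughly how the first two settings were applied in our resource-reading code (a sketch, not the exact code; the class and resource names here are only examples):

```java
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;

public class ResourceReader {
    static byte[] readResource(String path) throws Exception {
        // path is something like "/js/app.js", served from a jar in WEB-INF/lib
        URL url = ResourceReader.class.getResource(path);
        URLConnection conn = url.openConnection();
        conn.setDefaultUseCaches(false); // default for all future URLConnections
        conn.setUseCaches(false);        // this connection only
        InputStream in = conn.getInputStream();
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            return out.toByteArray();
        } finally {
            in.close();
        }
    }
}
```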

Is there something else I could try?

A: 

A little light on detail, but it sounds like you're loading the resources with a stream, so chances are your code creates the stream anonymously inside a method call (a very common practice in examples I've seen). I know that causes file locking issues on Windows, so I'm sure it keeps descriptors hanging about on Unixes. Make sure you keep a reference to the stream in a try block and call close() in the finally block when you're done. Static code analysis tools will usually catch this condition.
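
To illustrate what I mean by creating the stream anonymously (a made-up example, not your code):

```java
import java.io.InputStream;
import java.net.URL;
import java.util.Properties;

class StreamHandling {
    // Leaky pattern: the stream is created anonymously inside the call and is
    // never closed explicitly, so the descriptor lingers until finalization/GC.
    static Properties leaky(URL resourceUrl) throws Exception {
        Properties props = new Properties();
        props.load(resourceUrl.openStream());
        return props;
    }

    // Safe pattern: keep a reference and close it in a finally block.
    static Properties safe(URL resourceUrl) throws Exception {
        Properties props = new Properties();
        InputStream in = resourceUrl.openStream();
        try {
            props.load(in);
        } finally {
            in.close();
        }
        return props;
    }
}
```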

You don't mention how many file handles we're talking about, however. Usually, network connections will be your culprit, and usually you want to increase the limit.

Danny Thomas
A: 

In the Tomcat server, each incoming request uses a TCP socket, and this socket consumes one file descriptor from the total available for the process. A file descriptor (FD) is a handle created by a process when a file is opened. Each process can use a set limit of FDs, and this is usually an OS-level setting.

If you have many JS files being loaded per page, then each JS request will consume one FD while it is being processed.

As the number of requests coming into the server increases, you can face a situation where many sockets are open; thus you run out of FDs and you get the "Too Many Open Files" error.

Check the output of cat /proc/sys/fs/file-max to see how many FDs can be opened by your Tomcat on the server.

Ideally it should be 65535. See here for how to increase this limit:

http://tech-torch.blogspot.com/2009/07/linux-ubuntu-tomcat-too-many-open-files.html

Another suggestion: see if you can reduce the number of JS calls by combining the JS files into one.

JoseK
It seems to me that the lsof output shows regular files in my case. The user that runs Tomcat can open 2048 files per process.
Michael
A: 

The InputStream is closed in the finally block as you suggest. I also looked at URLConnection, but it seems that there is no close() or disconnect() method.

It seems to me that the files were closed after a certain period of time. The open files are listed by lsof, and if I reload the page the open file handles go up. But after a couple of minutes they go down again. In case of high user traffic the open file handles were already greater than the max of 2048 per process. So the freeing of open file handles comes too late.

Michael
Hmm, now that sure sounds like objects holding the descriptors being garbage collected. I'd still be looking for anonymous I/O-related objects wrapped in other method calls. Try FindBugs, it found the cause of similar issues in a project I worked on. http://findbugs.sourceforge.net/
Danny Thomas
I think now that it is due to getLastModified() calls which do not close the InputStream in a finally block. getLastModified() is called for caching purposes, and this also opens the file somehow, even if the InputStream is not used afterwards. In our code the stream is closed only after reading.
Michael
A: 

I wrote a tiny test program that opens a URLConnection to a file. It only calls getLastModified(), and this already opens a file handle.

Afterwards I close the input stream and this effect disappears.

So I come to the conclusion that I have to close the URLConnection's InputStream even if the stream is never read after the connection.
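
The test program was roughly this (a sketch from memory; the resource name is only an example):

```java
import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;

public class LastModifiedTest {
    public static void main(String[] args) throws Exception {
        // Any resource inside a jar on the classpath; the name is just an example.
        URL url = LastModifiedTest.class.getResource("/js/app.js");
        URLConnection conn = url.openConnection();

        // This call alone already opens a file handle on the jar (visible with lsof).
        System.out.println("lastModified = " + conn.getLastModified());

        // Closing the connection's InputStream releases the handle,
        // even though nothing is ever read from the stream.
        InputStream in = conn.getInputStream();
        in.close();
    }
}
```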

Michael