Hi,

I'm working on a Java web application that uses thousands of small files to build artifacts in response to requests. I think our system could see performance improvements if we could map these files into memory rather than running all over the disk to find them all the time.

I have heard of mmap in Linux, and my basic understanding of that concept is that when a file is read from disk, its contents get cached somewhere in memory for quicker subsequent access. What I have in mind is similar to that idea, except I'd like to read the whole mmap-able set of files into memory as my web app is initializing, to keep request-time reads to a minimum.

One aspect of my train of thought here is that we'd probably get the files into JVM memory faster if they were all tarred up and somehow mounted in the JVM as a virtual file system. As it stands, it can take several minutes for our current implementation to walk through the set of source files and just figure out what is on the disk; this is because we're essentially doing file stats for upwards of 300,000 files.

I have found the Apache VFS project, which can read information from a tar file, but I'm not sure from their documentation whether you can specify something such as "also, read the entire tar into memory and hold it there...".

We're talking about a multithreaded environment here, serving artifacts that usually piece together about 100 different files out of a complete set of 300,000+ source files to make one response. So whatever the virtual file system solution is, it needs to be thread-safe and performant. We're only talking about reading files here, no writes.

Also, we're running a 64-bit OS with 32 GB of RAM, and our 300,000 files take up about 1.5 to 2.5 GB of space. We can surely read a 2.5 GB file into memory much quicker than 300K small files of a few kilobytes each.

Thanks for input!

Jason
A: 

Just to clarify, mmap() in Unix-like systems would not allow you to access files as such; it simply makes the contents of a file available in memory, as memory. You cannot use open() to further open any contained files. There is no such thing as a "mmap()able set of files".

Can't you just add a pass that loads all your "templates" initially, and then quickly finds them based on something simple, like a hash on the name of each? That should let you leverage your memory, and get down to O(1) access for any template.
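
A minimal sketch of that pre-loading pass (all names here are hypothetical; it assumes Java 8 NIO and that the full 2.5 GB fits comfortably in your heap):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.stream.Stream;

    // Load every template once at startup and index it by name,
    // so later lookups are O(1) map hits instead of disk walks.
    public class TemplateStore {
        private final Map<String, byte[]> templates = new ConcurrentHashMap<>();

        public void loadAll(Path root) throws IOException {
            try (Stream<Path> paths = Files.walk(root)) {
                paths.filter(Files::isRegularFile).forEach(p -> {
                    try {
                        // Key on the path relative to the root, e.g. "foo/bar.tpl".
                        templates.put(root.relativize(p).toString(), Files.readAllBytes(p));
                    } catch (IOException e) {
                        throw new RuntimeException("failed to load " + p, e);
                    }
                });
            }
        }

        public byte[] get(String name) {
            return templates.get(name); // thread-safe, O(1)
        }
    }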

unwind
Also note that mmap() does not load the content into memory; it gives you a virtual memory address where you can get at the content. The first time, the file will be fetched from disk. (Then it may be left in memory if there's enough available.)
mat
A: 

I think you're still thinking in terms of the old memory/disk model.

mmap won't help here, because that old memory/disk split is long gone. If you mmap a file, the kernel gives you back a pointer to some virtual memory for you to use at your own discretion; it will not load the file into real memory all at once. It does so when you ask for a part of the file, and it loads only the page(s) you're requesting. (That is, memory pages, usually around 4 KB each.)
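
For reference, Java exposes mmap through FileChannel.map(); a minimal sketch (the file path is made up) showing the lazy behaviour described above:

    import java.io.IOException;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    public class MmapDemo {
        public static void main(String[] args) throws IOException {
            try (FileChannel ch = FileChannel.open(Paths.get("/data/some-file"),
                                                   StandardOpenOption.READ)) {
                // Maps the file into virtual memory; no data is read from disk yet.
                MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
                // The first access to a page triggers the actual disk read
                // via a page fault (assumes the file is non-empty).
                System.out.println("first byte: " + buf.get(0));
            }
        }
    }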

You say those 300k files take about 1.5 GB to 2.5 GB of disk space. If there's any chance you can throw 2 (or better, 4) more gigabytes of RAM into your server, you would be much better off leaving the disk reading to the OS. If it has enough RAM to keep the files in its disk cache, it will, and from then on any read() on them won't even hit the disk. (It will still hit the disk to update the atime in the inode, unless you've mounted your volume with noatime.)

If you try to read() the files, get them into memory, and serve them from there, you have no way to know for sure that they'll always be in RAM and not in swap, because the OS may have had other uses for that part of the memory you haven't touched for a while.

If you have enough RAM to let the OS do disk caching and you really want the files to get loaded, you could always run a little script/program that goes through your hierarchy and reads all the files (without doing anything else). That will get the OS to load them from disk into the memory disk cache, but you have no way of knowing they'll stay there if the OS needs the memory. Hence what I said before: you should let the OS deal with that and give it enough RAM to do so.
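
A sketch of such a cache-warming pass (the root directory is hypothetical); it reads every file and throws the bytes away, purely to pull the data into the OS disk cache:

    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.stream.Stream;

    public class CacheWarmer {
        public static void main(String[] args) throws IOException {
            byte[] scratch = new byte[64 * 1024];
            try (Stream<Path> paths = Files.walk(Paths.get("/data/source-files"))) {
                paths.filter(Files::isRegularFile).forEach(p -> {
                    try (InputStream in = Files.newInputStream(p)) {
                        // Read and discard; the side effect is a warm page cache.
                        while (in.read(scratch) != -1) { /* discard */ }
                    } catch (IOException e) {
                        System.err.println("skipping " + p + ": " + e);
                    }
                });
            }
        }
    }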

You should read Varnish's Architect Notes, where phk tells you in his own words why what you're trying to achieve is much better left to the OS, which will always know better than the JVM what's in RAM and what is not.

mat
A: 

If you need fast access to all those files, you could load them into memory, but I would not load them as files. I would put that data into some kind of object structure (in the simplest form, just a String).

What I would do is create a service that returns the file as an object structure, looked up by whatever parameter you're using. Then implement some caching mechanism around this service. After that it's all a matter of tuning the cache: if you really need to load everything into memory, configure your cache to use more memory; if some files are used much more than others, it might be sufficient to cache just those...
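
One possible shape for such a service, sketched as a read-through LRU cache built on LinkedHashMap (class and method names are invented):

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.LinkedHashMap;
    import java.util.Map;

    // Read-through cache: parse from disk on a miss, serve from memory on a hit.
    public class ArtifactService {
        private final Path root;
        private final Map<String, String> cache;

        public ArtifactService(Path root, int maxEntries) {
            this.root = root;
            // Access-ordered LinkedHashMap that evicts the least recently used entry.
            this.cache = new LinkedHashMap<String, String>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                    return size() > maxEntries;
                }
            };
        }

        public synchronized String get(String name) throws IOException {
            String content = cache.get(name);
            if (content == null) {
                content = new String(Files.readAllBytes(root.resolve(name)),
                                     StandardCharsets.UTF_8);
                cache.put(name, content);
            }
            return content;
        }
    }

Tuning the cache then comes down to picking maxEntries (or a byte budget) to match how much you want resident.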

We could probably give you a better response if we knew more about what you are trying to achieve.

Guillaume
Loading them into *memory* won't guarantee that the operating system won't put them into swap.
mat
+1  A: 

You can try to put all the files in a JAR and put that on the classpath. Java uses some built-in tricks to make reading from a JAR file very fast. That will also keep the directory of all files in RAM, so you don't have to access the disk to find a file (that happens before you can start loading it).
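
Reading one of those files through the classpath would then look something like this (the helper class is hypothetical):

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public class JarLookup {
        public static byte[] read(String name) throws IOException {
            // Resolves against the classpath, so entries inside the JAR are
            // found without any directory walking on disk.
            try (InputStream in = JarLookup.class.getClassLoader()
                                                 .getResourceAsStream(name)) {
                if (in == null) {
                    throw new IOException("not found on classpath: " + name);
                }
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                byte[] buf = new byte[8192];
                for (int n; (n = in.read(buf)) != -1; ) {
                    out.write(buf, 0, n);
                }
                return out.toByteArray();
            }
        }
    }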

The JVM won't load the whole JAR into RAM at once, and you probably don't want that anyway because your machine would start swapping. But it will be able to find the pieces very quickly, because it keeps the file open the whole time and you therefore won't lose any time opening/closing the file resource.

Also, since you're using this single file all the time, chances are that the OS will keep it longer in the file caches.

Lastly, you can try compressing the JAR. While this may sound like a bad idea, you should give it a try. If the small files compress well, the time to unpack them with current CPUs is much lower than the time to read the data from the disk. If you don't have to keep the intermediate data anywhere, you can stream the uncompressed data to the client without needing to write it to a file (which would ruin the whole idea). The drawback is that it does eat CPU cycles, and if your CPU is busy (just check with some load tool; if it's above 20%, then you lose), you will make the whole process slower.

That said, when you're using the HTTP protocol, you can tell the client that you're sending compressed data! That way you don't have to unpack the data yourself, and the files that go over the wire stay very small.
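
One way to apply that idea is to gzip on the fly in a servlet when the client advertises support; a sketch (buildArtifact is a placeholder for your own assembly code):

    import java.io.IOException;
    import java.io.OutputStream;
    import java.util.zip.GZIPOutputStream;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class ArtifactServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            byte[] artifact = buildArtifact(req.getPathInfo());

            String accept = req.getHeader("Accept-Encoding");
            OutputStream out = resp.getOutputStream();
            if (accept != null && accept.contains("gzip")) {
                // Tell the client the body is compressed; it unpacks it itself.
                resp.setHeader("Content-Encoding", "gzip");
                out = new GZIPOutputStream(out);
            }
            out.write(artifact);
            out.close(); // finishes the gzip trailer when compressing
        }

        private byte[] buildArtifact(String path) {
            throw new UnsupportedOperationException("left to the application");
        }
    }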

Main disadvantage of the JAR solution: You can't replace the JAR as long as the server is running. So replacing a file means you will have to restart the server.

Aaron Digulla
First, even if there is memory available, some pages of your JAR may end up in swap because they've not been used for a long time. I still think it's a bad idea to try to be more clever than the kernel, as you have no idea what is really in RAM :-)
mat
Since he's using the JAR all the time, chances of it being swapped out are low. Also, reading from the swap is much faster than reading from the normal filesystem.
Aaron Digulla
Well, reading from swap is almost as bad, because the CPU tries to access a swapped page, gets a page fault, and does other things while the page is fetched from swap, whereas reading a file from the fs and sending it doesn't cause a page fault and lets the OS cache what it can.
mat
mat: Do you have any figures about the performance of fs cache vs. a swap partition? I still lean towards the swap: Less code, leaner API, optimized to hell, simple mapping between RAM and location on disk.
Aaron Digulla
A: 

Put the files on 10 different servers, and instead of directly serving the requests, send the client HTTP redirects (or an equivalent) with the URL where they can find the file they want. This allows you to spread the load. The server just responds to quick requests, and the (large) downloads are spread over several machines.
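
A minimal sketch of that redirect front-end (the hostnames and hashing scheme are invented for illustration):

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Picks one of N file servers by hashing the requested path and
    // redirects the client there; the same path always lands on the
    // same server, which helps that server's cache.
    public class RedirectServlet extends HttpServlet {
        private static final String[] SERVERS = {
            "http://files1.example.com", "http://files2.example.com"
            // ... up to files10
        };

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            String path = req.getPathInfo();
            if (path == null) {
                resp.sendError(HttpServletResponse.SC_NOT_FOUND);
                return;
            }
            int i = (path.hashCode() & 0x7fffffff) % SERVERS.length;
            resp.sendRedirect(SERVERS[i] + path);
        }
    }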

Aaron Digulla
A: 

If you are on Linux, I would give the good old RAM disk a try. You can stick with the current way of doing things and just drastically reduce I/O costs. You are not bound to the JVM memory and can still easily replace the content.
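
A sketch of priming such a RAM disk at startup, assuming a tmpfs is already mounted at /mnt/ramdisk (both paths are hypothetical):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;
    import java.util.stream.Stream;

    public class RamDiskPrimer {
        public static void main(String[] args) throws IOException {
            Path src = Paths.get("/data/source-files");        // on-disk source tree
            Path dst = Paths.get("/mnt/ramdisk/source-files"); // tmpfs mount
            try (Stream<Path> paths = Files.walk(src)) {
                paths.forEach(p -> {
                    try {
                        Path target = dst.resolve(src.relativize(p).toString());
                        if (Files.isDirectory(p)) {
                            Files.createDirectories(target);
                        } else {
                            Files.copy(p, target, StandardCopyOption.REPLACE_EXISTING);
                        }
                    } catch (IOException e) {
                        throw new RuntimeException("copy failed for " + p, e);
                    }
                });
            }
            // The app then reads from /mnt/ramdisk/... and content can be
            // replaced by copying new files in, without a JVM restart.
        }
    }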

Since you mentioned VFS: it also has a RAM disk provider, but I would still try the native RAM disk approach first.

tcurdt
A: 

What you need is to load all the information into a hash table.

Load every file using its name as the key and its contents as the value, and you'll be able to work orders of magnitude faster and more easily than with the setup you have in mind.

Saiyine
+1  A: 

If you have 300,000 files that you need to access quickly, you could use a database: not a relational one, but a simple key-value store like http://www.space4j.org/. This won't help your startup time, but it could be quite a speed-up at runtime.

Simon Groenewolt