views: 320
answers: 3

I've got a moderately big set of data, about 800 MB or so, that is basically a big precomputed table I need in order to speed up some computation by several orders of magnitude (creating that file took several multicore computers days to produce, using an optimized and multi-threaded algorithm... I really do need that file).

Now that it has been computed once, that 800 MB of data is read-only.

I cannot hold it in memory.

As of now it is one big 800 MB file, but splitting it into smaller files isn't a problem if that would help.

I need to read about 32 bits of data here and there in that file, many times over. I don't know beforehand where I'll need to read this data: the reads are uniformly distributed.

What would be the fastest way in Java to do my random reads in such a file or files? Ideally I should be doing these reads from several unrelated threads (but I could queue the reads in a single thread if needed).

Is Java NIO the way to go?

I'm not familiar with 'memory-mapped files': I think I don't want to map the 800 MB into memory.

All I want is the fastest random reads I can get to access these 800MB of disk-based data.

btw in case people wonder this is not at all the same as the question I asked not long ago:

http://stackoverflow.com/questions/2346722/java-fast-disk-based-hash-set

+2  A: 

RandomAccessFile (blocking) may help: http://java.sun.com/javase/6/docs/api/java/io/RandomAccessFile.html

You can also use FileChannel.map() to map a region of the file to memory, then read the MappedByteBuffer.

See also: http://java.sun.com/docs/books/tutorial/essential/io/rafs.html
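
For instance, a minimal RandomAccessFile sketch along these lines (assuming the table is a flat sequence of 4-byte big-endian values; the class and method names below are made up for illustration) might look like:

    import java.io.IOException;
    import java.io.RandomAccessFile;

    public class TableReader {
        private final RandomAccessFile raf;

        public TableReader(String path) throws IOException {
            raf = new RandomAccessFile(path, "r"); // open read-only
        }

        // Reads the 32-bit value stored at the given byte offset.
        // RandomAccessFile keeps an internal file pointer, so if several
        // threads share one instance the method must be synchronized.
        public synchronized int readAt(long byteOffset) throws IOException {
            raf.seek(byteOffset);
            return raf.readInt(); // 4-byte, big-endian read
        }
    }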

Konrad Garus
@Konrad Garus: ok, but that doesn't really help me much :( What I'd like to know is the fastest way to do random reads in an 800 MB read-only file (possibly from multiple threads).
cocotwo
Offhand I think that NIO (last link) and RandomAccessFile have similar performance, but use different APIs. The NIO API is a bit more complex, but it can be non-blocking. Both would require a synchronized wrapper for thread safety.
Konrad Garus
A: 

Actually, 800 MB isn't very big. If you have 2 GB of memory or more, it can reside in the disk cache, if not in your application itself.
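
If you do go the in-memory route, a minimal sketch (assuming the table is a flat sequence of 4-byte big-endian entries; the class name is made up) could load the file once and index into it directly:

    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    public class InMemoryTable {
        private final ByteBuffer data;

        public InMemoryTable(String path) throws IOException {
            // Read the whole file into the heap once; 800 MB fits in a single byte[].
            RandomAccessFile raf = new RandomAccessFile(path, "r");
            try {
                byte[] bytes = new byte[(int) raf.length()];
                raf.readFully(bytes);
                data = ByteBuffer.wrap(bytes).order(ByteOrder.BIG_ENDIAN);
            } finally {
                raf.close();
            }
        }

        // Absolute get: no shared position is touched, so this is safe to
        // call from many threads at once.
        public int readAt(int byteOffset) {
            return data.getInt(byteOffset);
        }
    }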

Peter Lawrey
+1  A: 

800MB is not that much to load up and store in memory. If you can afford to have multicore machines ripping away at a data set for days on end, you can afford an extra GB or two of RAM, no?

That said, read up on Java's java.nio.MappedByteBuffer. It is clear from your comment "I think I don't want to map the 800 MB in memory" that the concept is not clear.

In a nutshell, a mapped byte buffer lets you programmatically access the data as if it were in memory, although it may be on disk or in memory--that is for the OS to decide, since Java's MBB is backed by the OS's virtual memory subsystem. It is also nice and fast. You will also be able to access a single MBB from multiple threads safely.

Here are the steps I recommend you take:

  1. Instantiate a MappedByteBuffer that maps your data file. The creation is kind of expensive, so keep it around.
  2. In your lookup method...
    1. allocate a byte[4] array
    2. call .get(byte[] dst, int offset, int length)
    3. the byte array will now have your data, which you can turn into a value

And presto! You have your data!
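
Here is a rough sketch of those steps, assuming the table is a flat sequence of 4-byte big-endian entries (the class and method names are made up). It uses the absolute getInt(int) form rather than the relative bulk get, so concurrent readers never contend on the buffer's shared position:

    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class MappedTable {
        private final MappedByteBuffer mbb;

        public MappedTable(String path) throws IOException {
            RandomAccessFile raf = new RandomAccessFile(path, "r");
            try {
                FileChannel channel = raf.getChannel();
                // Step 1: map the whole file read-only and keep the buffer around.
                mbb = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
            } finally {
                raf.close(); // the mapping stays valid after the file is closed
            }
        }

        // Step 2: look up the 32-bit value at the given byte offset.
        public int readAt(int byteOffset) {
            return mbb.getInt(byteOffset);
        }
    }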

I'm a big fan of MBBs and have used them successfully for such tasks in the past.

Stu Thompson
Be sure to map the file as read-only - you don't want to be making accidental modifications to it (this is far more of an issue in native code, obviously.)
Daniel Earwicker
Agreed. And I am not sure about the concurrency ramifications of having it read/write.
Stu Thompson