views: 1256 · answers: 9

Alright. So I have a very large amount of binary data (let's say, 10GB) distributed over a bunch of files (let's say, 5000) of varying lengths.

I am writing a Java application to process this data, and I wish to institute a good design for the data access. Typically what will happen is this:

  • One way or another, all the data will be read during the course of processing.
  • Each file is (typically) read sequentially, requiring only a few kilobytes at a time. However, it is often necessary to have, say, the first few kilobytes of each file simultaneously, or the middle few kilobytes of each file simultaneously, etc.
  • There are times when the application will want random access to a byte or two here and there.

Currently I am using the RandomAccessFile class to read into byte buffers (and ByteBuffers). My ultimate goal is to encapsulate the data access into some class such that it is fast and I never have to worry about it again. The basic functionality is that I will be asking it to read frames of data from specified files, and I wish to minimize the I/O operations given the considerations above.

Examples for typical access:

  • Give me the first 10 kilobytes of all my files!
  • Give me byte 0 through 999 of file F, then give me byte 1 through 1000, then give me 2 through 1001, etc, etc, ...
  • Give me a megabyte of data from file F starting at such and such byte!

Any suggestions for a good design?
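For concreteness, something like this hypothetical facade is roughly what I have in mind (all names are made up; the implementation here is just the naive RandomAccessFile version I'm using now):

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical facade for the access patterns above; the real class
// would be smarter about minimizing I/O operations.
public class FrameReader implements AutoCloseable {
    private final RandomAccessFile raf;

    public FrameReader(Path file) throws IOException {
        this.raf = new RandomAccessFile(file.toFile(), "r");
    }

    /** Read length bytes starting at offset; returns a fresh array. */
    public byte[] readFrame(long offset, int length) throws IOException {
        byte[] frame = new byte[length];
        raf.seek(offset);
        raf.readFully(frame);
        return frame;
    }

    @Override
    public void close() throws IOException {
        raf.close();
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("frames", ".bin");
        Files.write(tmp, "0123456789ABCDEF".getBytes(StandardCharsets.US_ASCII));
        try (FrameReader reader = new FrameReader(tmp)) {
            // "First few kilobytes" style access: a frame at the start...
            System.out.println(new String(reader.readFrame(0, 4), StandardCharsets.US_ASCII));
            // ...and random access to a frame in the middle.
            System.out.println(new String(reader.readFrame(10, 6), StandardCharsets.US_ASCII));
        } finally {
            Files.delete(tmp);
        }
    }
}
```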

+1  A: 

Wow. You are basically implementing a database from scratch. Is there any possibility of importing the data into an actual RDBMS and just using SQL?

If you do it yourself you will eventually want to implement some sort of caching mechanism, so the data you need comes out of RAM if it is there, and you are reading and writing the files in a lower layer.

Of course, this also entails a lot of complex transactional logic to make sure your data stays consistent.

Eric Z Beard
+8  A: 

Use Java NIO and MappedByteBuffers, and treat your files as a list of byte arrays. Then let the OS worry about the details of caching, reading, flushing, etc.
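A minimal sketch of what that looks like (the file contents and offsets here are just illustrative):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Map the file once, then treat it like a byte array; the OS page cache
// handles buffering and read-ahead behind the scenes.
public class MappedRead {
    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("mapped", ".bin");
        Files.write(tmp, "hello mapped world".getBytes(StandardCharsets.US_ASCII));
        try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.READ)) {
            MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            byte[] frame = new byte[6];
            map.position(6);  // random access: jump straight to byte 6
            map.get(frame);   // pull out a 6-byte frame
            System.out.println(new String(frame, StandardCharsets.US_ASCII));
        } finally {
            Files.delete(tmp);
        }
    }
}
```

With 5000 files you would keep a list of channels/buffers (or map regions on demand), since each mapping consumes address space.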

Will Hartung
A: 

@Eric

But my queries are going to be much, much simpler than anything I can do with SQL. And wouldn't a database access be much more expensive than a binary data read?

Jake
I think one of the light-weight in-process databases would work great, if the data fits a relational model (it might not). I don't want to discourage you; it sounds like a fun project. But if you want to avoid a lot of coding, you might want to go with an established codebase.
Eric Z Beard
A: 

This is to answer the part about minimizing I/O traffic. On the Java side, all you can really do is wrap your input streams in a BufferedInputStream (BufferedReader is for character data; yours is binary). Aside from that, your operating system will handle other optimizations, like keeping recently-read data in the page cache and doing read-ahead on files to speed up sequential reads. There's no point in doing additional buffering in Java (although you'll still need a byte buffer to return the data to the client).
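A minimal sketch of the buffered-stream approach, using a 1 MB buffer per the comment below (the file contents and buffer size are illustrative, not tuned):

```java
import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

// A large BufferedInputStream turns many tiny reads into a few big
// syscalls; after that, the OS page cache does the rest.
public class BufferedSequentialRead {
    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("buffered", ".bin");
        byte[] data = new byte[8192];
        for (int i = 0; i < data.length; i++) data[i] = (byte) (i % 251);
        Files.write(tmp, data);

        long sum = 0;
        try (InputStream in = new BufferedInputStream(Files.newInputStream(tmp), 1 << 20)) {
            int b;
            while ((b = in.read()) != -1) {  // byte-at-a-time is cheap once buffered
                sum += b;
            }
        } finally {
            Files.delete(tmp);
        }
        System.out.println(sum);
    }
}
```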

sk
You can't rely on the operating system to do read-ahead properly. Linux does it reasonably well, but Windows is useless. To properly max out your hard-drive IO bandwidth under Windows, you'll need about 1MB of buffering.
Noel Grandin
+1  A: 
erickson
+2  A: 

@Will

Pretty good results. A quick comparison reading a large binary file:

  • Test 1 - Basic sequential read with RandomAccessFile. 2656 ms

  • Test 2 - Basic sequential read with buffering. 47 ms

  • Test 3 - Basic sequential read with MappedByteBuffers and further frame buffering optimization. 16 ms

Jake
+1  A: 

You may want to take a look at jdbm, an open source, simple object database. It has a lot of this kind of infrastructure already built, including ACID capabilities.

I've made a number of contributions to the project, and the source code would be worth a review, if nothing else to see how we solved many of the same problems you might be working on.

Now, if your data files are not under your control (i.e. you are parsing text files generated by someone else, etc...) then the page-structured type of storage that jdbm uses may not be appropriate for you - but if all of these files are files that you are creating and working with, it may be worth a look.

Kevin Day
A: 

I had someone recommend hadoop (http://hadoop.apache.org) to me just the other day. It looks like it could be pretty nice, and might have some marketplace traction.

Nathan Feger
A: 

I would step back and ask yourself why you are using files as your system of record, and what gains that gives you over using a database. A database certainly gives you the ability to structure your data. Given the SQL standard, it might be more maintainable in the long run.

On the other hand, your file data may not be structured so easily within the constraints of a database. The largest search company in the world :) doesn't use a database for their business processing. See here and here.

Alan