views: 74
answers: 3
I am a graduate student in nuclear physics currently working on a data analysis program. The data consist of billions of multidimensional points.

Anyway, I am using space-filling curves to map the multiple dimensions down to a single dimension, and a B+ tree to index the pages of data. Each page will have some constant maximum number of points within it.
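(To make that concrete, here is a rough sketch of the kind of dimension-to-key mapping I mean, using a Morton/Z-order interleave as just one possible curve; the names and the 64-bit truncation are for illustration only.)

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Interleave the coordinate bits, most significant bits first, so that
    // points close together in the multidimensional space tend to get close
    // keys. A full key for d 16-bit coordinates needs 16*d bits; this sketch
    // keeps only the top 64, which is enough to order pages coarsely.
    std::uint64_t mortonKey(const std::vector<std::uint16_t>& coords)
    {
        const std::size_t d = coords.size();
        std::uint64_t key = 0;
        int outBit = 63;                                  // fill from the top bit down
        for (int b = 15; b >= 0 && outBit >= 0; --b)      // high coordinate bits first
            for (std::size_t i = 0; i < d && outBit >= 0; ++i, --outBit)
                key |= static_cast<std::uint64_t>((coords[i] >> b) & 1u) << outBit;
        return key;
    }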

As I read the raw data (several hundred gigs) in from the original files and preprocess and index it, I need to insert the individual points into pages. Obviously there will be far too many pages to simply store them in memory and then dump them to disk. So my question is this: what is a good strategy for writing the pages to disk so that there is a minimum of reshuffling of data when a page hits its maximum size and needs to be split?

Based on the comments, let me reduce this a little.

I have a file that will contain ordered records. These records are being inserted into the file, and there are far too many of them to simply build the whole thing in memory and then write it out. What strategy should I use to minimize the amount of reshuffling needed when I insert a record?

If this is making any sense at all, I would appreciate any solutions you might have.

Edit:
The data are points in a multidimensional space; essentially lists of integers. Each of these integers is 2 bytes, but each integer also has an additional 2 bytes of metadata associated with it. So that is 4 bytes per coordinate, and anywhere between 3 and 20 coordinates per point. So essentially the data consist of billions of chunks, each chunk somewhere between 12 and 100 bytes. (Obviously, points with 4 dimensions will be located in a different file than points with 5 dimensions once they have been extracted.)
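In code, one way to picture a single coordinate and a point is the following; the names are made up for illustration and are not from my actual program:

    #include <cstdint>
    #include <vector>

    // One coordinate as described above: a 2-byte value plus 2 bytes of
    // per-coordinate metadata, i.e. 4 bytes on disk.
    #pragma pack(push, 1)
    struct Coordinate
    {
        std::uint16_t value;   // e.g. a gamma-ray energy channel
        std::uint16_t meta;    // packed per-coordinate metadata
    };
    #pragma pack(pop)
    static_assert(sizeof(Coordinate) == 4, "4 bytes per coordinate on disk");

    // One point/event: a short list of such coordinates. Points of different
    // dimensionality live in different files, so within any one file the
    // record length is fixed and records can be stored back to back.
    using Event = std::vector<Coordinate>;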

I am using techniques similar to those discussed in this article: http://www.ddj.com/184410998

Edit 2: I kinda regret asking this question here, so consider it officially rescinded; but here is my reason for not using off-the-shelf products. My data are points that range anywhere from 3 to 22 dimensions. If you think of each point as simply a list, then you can think of how I want to query the points as: what are all the numbers that appeared in the same lists as these numbers? Below are some examples with low dimensionality (and many fewer data points than normal).

Example data:
237, 661, 511, 1021
1047, 661, 237
511, 237, 1021
511, 661, 1047, 1021

Queries:
511
1021
237, 661
1021, 1047
511, 237, 1047

Responses:
237, 661, 1021, 237, 1021, 661, 1047, 1021
237, 661, 511, 511, 237, 511, 661, 1047
511, 1021, 1047
511, 661
_

So that is a difficult little problem for most database programs, though I know of some that exist that can handle it well.
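To pin the query semantics down, here is a tiny brute-force sketch over the example data above; it is only meant to make the definition precise (names are made up), not to suggest how the indexed version works:

    #include <algorithm>
    #include <iostream>
    #include <vector>

    // For every stored list that contains *all* of the query values, collect
    // the remaining values of that list (duplicates across lists are kept).
    std::vector<int> coincidenceQuery(const std::vector<std::vector<int>>& data,
                                      const std::vector<int>& query)
    {
        std::vector<int> out;
        for (const auto& list : data)
        {
            const bool hasAll = std::all_of(query.begin(), query.end(), [&](int q) {
                return std::find(list.begin(), list.end(), q) != list.end();
            });
            if (!hasAll)
                continue;
            for (int v : list)
                if (std::find(query.begin(), query.end(), v) == query.end())
                    out.push_back(v);
        }
        return out;
    }

    int main()
    {
        const std::vector<std::vector<int>> data = {
            {237, 661, 511, 1021}, {1047, 661, 237},
            {511, 237, 1021}, {511, 661, 1047, 1021}};
        for (int v : coincidenceQuery(data, {237, 661}))
            std::cout << v << ' ';   // prints: 511 1021 1047
        std::cout << '\n';
    }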

But the problem gets more complex. Not all the coordinates are the same. Many times we just run with Gammasphere by itself, and so each coordinate represents a gamma-ray energy. But at other times we insert neutron detectors into Gammasphere, or a detector system called Microball, or sometimes the nuclides produced in Gammasphere are channeled into the Fragment Mass Analyzer; all of those and more detector systems can be used singly or in any combination with Gammasphere. Unfortunately, we almost always want to be able to select on this additional data in a manner similar to that described above. So now coordinates can have different meanings: if one just has Microball in addition to Gammasphere, you can make up an n-dimensional event in as many ways as there are positive solutions to the equation x + y = n. Additionally, each coordinate has metadata associated with it, so each of the numbers I showed would have at least 2 additional numbers associated with it: the first a detector number, for the detector that picked up the event; the second an efficiency value, to describe how many times that particular gamma ray counts (since the percentage of gamma rays entering the detector that are actually detected varies with the detector and with the energy).
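As a picture of what one of those labelled coordinates carries, here is a sketch; the field names, widths, and types are hypothetical, purely for illustration and not the actual format:

    #include <cstdint>

    // A coordinate together with its metadata: the value itself plus (at
    // least) a detector number and an efficiency weight. Field widths and
    // types here are guesses for illustration only.
    struct LabelledCoordinate
    {
        std::uint16_t value;       // e.g. a gamma-ray energy channel
        std::uint16_t detectorId;  // which detector picked up this hit
        float         efficiency;  // how many counts this particular hit stands for
    };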

I sincerely doubt that any off-the-shelf database solution can do all these things and perform well at the same time without an enormous amount of customization. I believe that the time spent on that is better spent writing my own, much less general, solution. Because of the loss of generality, I do not need to implement a delete function for any of the database code, I do not need to build secondary indices to gate on different types of coordinates (just one set, effectively counting each point only once), and so on.

A: 

So the first aspect is to do this in a threaded application so that you get through it more quickly. Break your data into workable sections. Which leads me to think...

I was initially going to suggest that you use Lucene... but thinking about it, this really sounds like something you should process with Hadoop. It was made for this sort of work (assuming you have the infrastructure for it).

I most certainly wouldn't do this in a database.

When you are speaking of indexing data and filling documents with data points... and you don't have the infrastructure, know-how, or time to implement Hadoop, you should revert to my original thought and use Lucene. You can actually index your data that way and store your data points directly in an index (by numeric range, I would think) with whatever "document" (object) structure you think is best.

Andrew Siemer
! "I most certainly wouldn't do this in a database" - That is so Wrong! Databases are designed from the ground up to be efficient at searching and paging memory.
Mitch Wheat
The key here is that you are processing through your initial set of data. As you get to bits of data you want to save into your index, you write them to your Lucene index and toss them (now out of memory). But the index is written locally, not over the wire, and indexed on the fly very quickly, which will make it super fast to get to when you need to add something else to that set... or when you need to split a document. No working with the whole data set in memory... but also no connecting to and writing to a db every few milliseconds!
Andrew Siemer
+1  A: 

I believe you should first look at what commercial and free databases have to offer. They are designed to perform fast range searches (given the right indexes) and efficiently manage memory and reading/writing pages to disk.

Failing that, have a look at one of the variants of Binary Space Partitioning (BSP) trees.

Mitch Wheat
+1  A: 

I have come up with an answer myself. As events are inserted into pages, when a page needs to split, a new page is made at the end of the file and half of the events of the original page are moved to it. This leaves the pages unsorted on disk, which somewhat defeats the fast retrieval mechanisms.

However, since I only write to the db in one big initial rush (probably lasting several days), I can justify spending a little extra time afterwards to go through the pages and sort them once they have all been built. This part is in fact quite easy because of the nature of the B+ tree used to index the pages: I simply start at the leftmost leaf node of the B+ tree, read the first page it points to and put it first in a final file, then read the second page and put it second, and so on and so forth.

In this manner, at the end of the insert all the pages are sorted within their files, allowing the methods I use to map multidimensional requests to single-dimensional indexes to work quickly and efficiently when reading the data from disk.
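In case it helps anyone, the two phases look roughly like this. This is an illustrative sketch only: a std::vector stands in for the on-disk page file, and the event payloads are left out.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // A page holds the space-filling-curve keys of its events (payloads omitted).
    struct Page
    {
        std::vector<std::uint64_t> keys;
    };

    // Phase 1: on overflow, split by appending a brand-new page at the end of
    // the page file instead of shifting any existing pages around.
    std::size_t splitPage(std::vector<Page>& pageFile, std::size_t fullIndex)
    {
        Page right;
        Page& left = pageFile[fullIndex];
        const std::size_t half = left.keys.size() / 2;
        right.keys.assign(left.keys.begin() + half, left.keys.end());
        left.keys.resize(half);
        pageFile.push_back(right);         // the new page lands at the end of the file
        return pageFile.size() - 1;        // the B+ tree leaf records this page index
    }

    // Phase 2: after the bulk load, walk the B+ tree leaves left to right and
    // copy each page, in key order, into the final file.
    void compact(const std::vector<Page>& pageFile,
                 const std::vector<std::size_t>& leafOrder,   // page indices in leaf-scan order
                 std::vector<Page>& finalFile)
    {
        for (std::size_t idx : leafOrder)
            finalFile.push_back(pageFile[idx]);
    }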

James Matta