I am a graduate student in nuclear physics, currently working on a data analysis program. The data consists of billions of multidimensional points.
Anyway, I am using space-filling curves to map the multiple dimensions down to a single dimension, and I am using a B+ tree to index the pages of data. Each page will hold some constant maximum number of points.
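For concreteness, the mapping step looks roughly like the sketch below. I'm showing plain bit interleaving (Z-order / Morton order) just as a stand-in for the actual curve, and as written it only handles up to 4 of the 16-bit coordinates because the key is a single 64-bit integer; the real key has to be wider for higher dimensions.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative only: interleave the bits of each 16-bit coordinate into a
// single integer key (Z-order). A 64-bit key fits at most 4 coordinates,
// so the real implementation needs a wider key type.
uint64_t morton_key(const std::vector<uint16_t>& coords)
{
    assert(coords.size() <= 4 && "64-bit key only fits 4 coordinates");
    uint64_t key = 0;
    int out_bit = 0;
    for (int bit = 0; bit < 16; ++bit) {            // low-order bits first
        for (std::size_t d = 0; d < coords.size(); ++d) {
            key |= static_cast<uint64_t>((coords[d] >> bit) & 1u) << out_bit++;
        }
    }
    return key;
}
```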
As I read the raw data (several hundred gigs) in from the original files and preprocess and index it, I need to insert the individual points into pages. Obviously there will be far too many pages to simply hold them all in memory and then dump them to disk. So my question is this: what is a good strategy for writing the pages to disk so that there is a minimum of reshuffling of data when a page hits its maximum size and needs to be split?
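To make the setup concrete, this is roughly the kind of fixed-capacity page I have in mind (field names and sizes are placeholders, not a final layout):

```cpp
#include <cstddef>
#include <cstdint>

constexpr std::size_t MAX_POINTS  = 1024;  // constant maximum points per page
constexpr std::size_t POINT_BYTES = 100;   // worst-case size of one point record

// A page holds up to MAX_POINTS raw point records, kept sorted by their
// curve key. When count reaches MAX_POINTS the page must be split -- the
// reshuffling I want to minimize is what happens on disk at that moment.
struct Page {
    uint64_t first_key;                        // smallest curve key in the page
    uint32_t count;                            // points currently stored
    uint8_t  points[MAX_POINTS][POINT_BYTES];  // raw point records
};
```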
Based on the comments, let me reduce this a little.
I have a file that will contain ordered records. These records are being inserted into the file, and there are too many of them to simply do this in memory and then write everything out. What strategy should I use to minimize the amount of reshuffling needed when I insert a record?
If this is making any sense at all, I would appreciate any solutions to this that you might have.
Edit:
The data are points in multidimensional spaces, essentially lists of integers. Each of these integers is 2 bytes, but each integer also has an additional 2 bytes of metadata associated with it. So that's 4 bytes per coordinate, and anywhere between 3 and 20 coordinates per point. So essentially the data consists of billions of chunks, each chunk somewhere between 12 and 100 bytes. (Obviously points with 4 dimensions will be located in a different file than points with 5 dimensions once they have been extracted.)
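In code, one coordinate and one point would look something like this (the names are mine and purely illustrative; what the metadata bytes actually mean is described further down):

```cpp
#include <cstdint>
#include <vector>

// One coordinate: a 2-byte value plus the 2 bytes of metadata that travel
// with it, 4 bytes in total. Field names are illustrative, not final.
struct Coordinate {
    uint16_t value;     // e.g. a gamma-ray energy
    uint16_t metadata;  // the extra 2 bytes associated with each integer
};

// A point is simply a list of 3 to 20 such coordinates, which is where the
// 12-to-100-byte range per chunk comes from (plus any per-point overhead).
using Point = std::vector<Coordinate>;
```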
I am using techniques similar to those discussed in this article: http://www.ddj.com/184410998
Edit 2: I kinda regret asking this question here, so consider it officially rescinded; but here is my reason for not using off-the-shelf products. My data are points with anywhere from 3 to 22 dimensions. If you think of each point as simply a list, then the way I want to query the points amounts to: what are all the numbers that appeared in the same lists as these numbers? Below are some examples with low dimensionality (and many fewer data points than normal).
Data:
237, 661, 511, 1021
1047, 661, 237
511, 237, 1021
511, 661, 1047, 1021
Queries:
511
1021
237, 661
1021, 1047
511, 237, 1047
Responses:
237, 661, 1021, 237, 1021, 661, 1047, 1021
237, 661, 511, 511, 237, 511, 661, 1047
511, 1021, 1047
511, 661
(empty)
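Just to pin down the query semantics (this is a naive in-memory version only, obviously not how it can be done over billions of points):

```cpp
#include <algorithm>
#include <vector>

// For every stored list that contains ALL of the queried values, collect the
// other values in that list; duplicates are kept, as in the responses above.
std::vector<int> query(const std::vector<std::vector<int>>& data,
                       const std::vector<int>& wanted)
{
    std::vector<int> result;
    for (const auto& list : data) {
        bool has_all = true;
        for (int w : wanted) {
            if (std::find(list.begin(), list.end(), w) == list.end()) {
                has_all = false;
                break;
            }
        }
        if (!has_all) continue;
        for (int v : list) {
            if (std::find(wanted.begin(), wanted.end(), v) == wanted.end())
                result.push_back(v);
        }
    }
    return result;
}
// e.g. query(data, {511}) on the example data gives
// 237, 661, 1021, 237, 1021, 661, 1047, 1021.
```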
So that is a difficult little problem for most database programs, though I know of some that can handle this well.
But the problem gets more complex. Not all the coordinates are the same. Many times we just run with gammasphere by itself, so each coordinate represents a gamma-ray energy. But at other times we insert neutron detectors into gammasphere, or a detector system called microball, or sometimes the nuclides produced in gammasphere are channeled into the fragment mass analyzer; all those and more detector systems can be used singly or in any combination with gammasphere. Unfortunately we almost always want to be able to select on this additional data in a manner similar to that described above. So now coordinates can have different meanings: if one just has microball in addition to gammasphere, you can make up an n-dimensional event in as many ways as there are positive solutions to the equation x + y = n (for example, a 5-dimensional event could split 1+4, 2+3, 3+2, or 4+1 between the two detector types).
Additionally, each coordinate has metadata associated with it, so each of the numbers I showed would have at least 2 additional numbers associated with it: first, a detector number for the detector that picked up the event; second, an efficiency value to describe how many times that particular gamma ray counts for (since the percentage of gamma rays entering the detector that are actually detected varies with the detector and with the energy).
I sincerely doubt that any off-the-shelf database solution can do all these things and perform well at the same time without an enormous amount of customization. I believe that the time spent on that is better spent writing my own, much less general, solution. Because of the loss of generality I do not need to implement a delete function for any of the databasing code, I do not need to build secondary indices to gate on different types of coordinates (just one set, effectively counting each point only once), etc.