views: 211

answers: 2

I am looking to store 2D arrays of 900x100 elements in a database. Efficient recall and comparison of the arrays is important. I could use a table with a schema like [A, x, y, A(x,y)], such that a single array would comprise 90,000 records. This seems like an OK table design for storing the array, and it would provide efficient recall of single elements, but recall of a whole array would be inefficient and array comparisons very inefficient.

Should I leave the table design this way and build and compare my arrays in code? Or is there a better way to structure the table so that I can get efficient array comparisons using database-only operations?
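For concreteness, this is roughly what the per-element layout looks like (a sketch in Python with sqlite3; the table and column names are just placeholders, not my real schema):

```python
# Sketch of the [A, x, y, A(x,y)] design: one row per element.
import sqlite3

conn = sqlite3.connect("arrays.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS array_elements (
           array_id INTEGER,
           x        INTEGER,
           y        INTEGER,
           value    REAL,
           PRIMARY KEY (array_id, x, y)
       )"""
)

# Recalling a single element is a cheap index lookup...
row = conn.execute(
    "SELECT value FROM array_elements WHERE array_id = ? AND x = ? AND y = ?",
    (1, 450, 50),
).fetchone()

# ...but recalling a whole array touches 90,000 rows, and comparing two
# arrays means joining 90,000 rows against another 90,000.
rows = conn.execute(
    "SELECT x, y, value FROM array_elements WHERE array_id = ? ORDER BY x, y",
    (1,),
).fetchall()
```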

thanks

A: 

900 x 100 elements is actually very small (even if the elements are massive 1K things that'd only be 90 MB). Can't you just compare in memory when needed and store on disk in some serialized format?

It doesn't make much sense to store 2D arrays in the database, especially if the data is immutable.
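A rough sketch of what I mean, assuming NumPy and flat files (the file names are illustrative):

```python
# 900 x 100 doubles is well under 1 MB, so the whole array fits trivially
# in memory. Serialize to disk, load it back in one read, compare in memory.
import numpy as np

a = np.random.rand(900, 100)

# Store on disk in a serialized format.
np.save("array_a.npy", a)

# Recall the whole array with a single read and compare entirely in memory.
b = np.load("array_a.npy")
print(np.array_equal(a, b))   # True
```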

JeffFoster
I am going to push the client to move in this direction (with hashes of the data files stored in a protected table in the database to ensure data integrity). It is not a direct answer to the question, but it is a workable alternative solution.
LokiPatera
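A sketch of the hash-for-integrity idea mentioned in the comment above (SHA-256 of each serialized file, stored in a small table; the table and file names are assumptions, not from the original post):

```python
# Hash each serialized array file and keep the digest in a protected table,
# so the files can be verified against the database later.
import hashlib
import sqlite3

def file_sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

conn = sqlite3.connect("arrays.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS array_hashes (name TEXT PRIMARY KEY, sha256 TEXT)"
)
conn.execute(
    "INSERT OR REPLACE INTO array_hashes VALUES (?, ?)",
    ("array_a.npy", file_sha256("array_a.npy")),
)
conn.commit()

# Later: recompute the hash and verify it still matches the stored digest.
stored = conn.execute(
    "SELECT sha256 FROM array_hashes WHERE name = ?", ("array_a.npy",)
).fetchone()[0]
assert stored == file_sha256("array_a.npy")
```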
+1  A: 

If the type of data allows, store it in a concatenated (serialized) format and compare in memory after it has been de-concatenated. The database operations will be much faster, and the in-memory comparison will also be faster than doing it through database retrievals.

Who knows, you may even be able to compare it without de-concatenating.
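One way that could look (a sketch assuming SQLite BLOBs and NumPy; the table and identifiers are mine, not part of your schema):

```python
# Store each array as a single BLOB per row. Comparing two arrays is then a
# byte-for-byte equality test, with no need to de-concatenate unless you
# actually want element access.
import sqlite3
import numpy as np

conn = sqlite3.connect("arrays.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS arrays_blob (array_id INTEGER PRIMARY KEY, data BLOB)"
)

a = np.random.rand(900, 100)
conn.execute("INSERT OR REPLACE INTO arrays_blob VALUES (?, ?)", (1, a.tobytes()))
conn.execute("INSERT OR REPLACE INTO arrays_blob VALUES (?, ?)", (2, a.tobytes()))
conn.commit()

# Compare two arrays without de-concatenating: the database compares the
# raw bytes directly.
same = conn.execute(
    "SELECT a.data = b.data FROM arrays_blob a, arrays_blob b "
    "WHERE a.array_id = ? AND b.array_id = ?",
    (1, 2),
).fetchone()[0]
print(bool(same))

# Or de-concatenate back into a 900x100 array when element-wise work is needed.
raw = conn.execute(
    "SELECT data FROM arrays_blob WHERE array_id = ?", (1,)
).fetchone()[0]
restored = np.frombuffer(raw, dtype=np.float64).reshape(900, 100)
```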

mm2010
Storing the arrays as serialized binary objects (OLE or BLOB) seems like the best way to keep them inside the database itself.
LokiPatera