Background
I have many (thousands!) of data files in a standard field-based format (think tab-delimited, same fields in every line, in every file). I'm debating various ways of making this data available / searchable. (Some options include an RDBMS, NoSQL stuff, grep/awk and friends, etc.)
Proposal
In particular, one idea that appeals to me is "indexing" the files in some way. Since these files are read-only (and static), I was imagining some persistent files containing binary trees (one for each indexed field, just like in other data stores). I'm open to ideas about how to do this, or to hearing that this is simply insane. Mostly, my favorite search engine hasn't yielded any pre-rolled solutions for this.
I realize this is a little ill-formed, and solutions are welcome.
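To make the idea concrete, here is a rough sketch of the sort of thing I'm imagining (Python; the field number and file names are made up, and I've used a sorted list plus binary search in place of a real on-disk B-tree, since the files are static):

```python
# Sketch of the "persistent index per field" idea.  Each index maps a field
# value to the (file, byte offset) of the lines that contain it; a sorted
# list + bisect stands in for a proper on-disk B-tree.
import bisect
import pickle

FIELD = 2                   # hypothetical: index the third tab-separated field
INDEX_FILE = "field2.idx"   # one index file per indexed field

def build_index(data_files, field=FIELD, index_file=INDEX_FILE):
    entries = []
    for path in data_files:
        with open(path, "rb") as f:
            offset = 0
            for line in f:
                key = line.rstrip(b"\n").split(b"\t")[field]
                entries.append((key, path, offset))
                offset += len(line)
    entries.sort()          # sorted on the field value => binary searchable
    with open(index_file, "wb") as out:
        pickle.dump(entries, out)

def lookup(value, index_file=INDEX_FILE):
    # Loads the whole index into memory; a real version would mmap / page it.
    with open(index_file, "rb") as f:
        entries = pickle.load(f)
    keys = [e[0] for e in entries]
    lo = bisect.bisect_left(keys, value)
    hi = bisect.bisect_right(keys, value)
    for _, path, offset in entries[lo:hi]:
        with open(path, "rb") as src:
            src.seek(offset)
            yield src.readline()
```

A query on one field would be a `lookup(b"some-value")` against that field's index file; a query on a combination of fields would intersect the (file, offset) sets coming out of two such indexes.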
Additional Details
- files long, not wide
- millions of lines per hour, spread over 100 files per hour
- tab-separated, not many columns (~10)
- fields are short (say < 50 chars per field)
- queries are on fields, combinations of fields, and can be historical (an example of what I mean is sketched below)
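To spell out the kind of query I mean, here's a hypothetical one written as a naive full scan (the field positions and the date-based file layout are invented for the example): all lines from June 2010 where field 3 is ERROR and field 5 is london, i.e. a combination of fields restricted to a historical time range.

```python
# Naive full-scan version of a typical query; field positions and the
# date-based directory layout are hypothetical.
import glob

def scan(path_pattern, predicate):
    for path in glob.glob(path_pattern):
        with open(path) as f:
            for line in f:
                fields = line.rstrip("\n").split("\t")
                if predicate(fields):
                    yield fields

hits = scan("2010-06-*/*.tsv",
            lambda f: f[2] == "ERROR" and f[4] == "london")
```

The question is what to build so that queries like this don't have to scan every file.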
Drawbacks to various solutions:
(All of these are based on my observations and tests, but I'm open to correction)
BDB
- has problems scaling to large files (in my experience, once a file hits 2GB or so, performance can be terrible)
- single writer (if it's possible to get around this, I want to see code!)
- hard to do multiple indexing, that is, indexing on different fields at once (sure, you can do this by copying the data over and over)
- since it only stores byte strings, there is a serialize / deserialize step (see the sketch after this list)
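To illustrate those last two points, here is roughly what multiple indexing looks like to me in BDB terms: a separate B-tree file per indexed field, with duplicate keys allowed, and the line's (file, offset) location packed into a byte string by hand. This is only a sketch using the bsddb3 Python bindings, with invented field and file layout, not a claim about the right way to drive BDB:

```python
# One BDB B-tree per indexed field, each mapping field value -> packed
# (file_id, offset).  The struct.pack / struct.unpack calls are the manual
# serialize / deserialize step mentioned above.
import struct
from bsddb3 import db

def open_index(path):
    d = db.DB()
    d.set_flags(db.DB_DUP)                    # many lines can share a field value
    d.open(path, None, db.DB_BTREE, db.DB_CREATE)
    return d

def add_line(indexes, fields, file_id, offset):
    # indexes: {field_number: DB handle}; fields: list of byte strings
    loc = struct.pack(">IQ", file_id, offset)
    for field_num, idx in indexes.items():
        idx.put(fields[field_num], loc)

def find(idx, value):
    cur = idx.cursor()
    rec = cur.set(value)                      # first record with this key
    while rec is not None:
        yield struct.unpack(">IQ", rec[1])    # -> (file_id, offset)
        rec = cur.next_dup()                  # walk the duplicates
    cur.close()
```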
RDBMSes
Wins:
- flat table model is excellent for querying, indexing
Losses:
- In my experience, the problem comes with indexing. From what I've seen (and please correct me if I'm wrong), the RDBMSes I know (SQLite, Postgres) support either batch loading (after which building the indexes is slow; roughly the pattern sketched below) or row-by-row loading (which is itself slow). Maybe I just need more performance tuning.
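To be concrete about the batch-load case, this is roughly what I mean: load everything in one big transaction and only create the indexes at the end (SQLite through Python's standard sqlite3 module; the schema, column count, and file layout are invented for the example):

```python
# Batch load into SQLite, indexes created afterwards; schema is invented
# and only three of the ~10 columns are shown.
import csv
import glob
import sqlite3

conn = sqlite3.connect("lines.db")
conn.execute("PRAGMA synchronous = OFF")       # favour speed over durability for the load
conn.execute("PRAGMA journal_mode = MEMORY")
conn.execute("CREATE TABLE IF NOT EXISTS lines (f1 TEXT, f2 TEXT, f3 TEXT)")

def rows(paths):
    for path in paths:
        with open(path, newline="") as f:
            yield from csv.reader(f, delimiter="\t")

with conn:                                     # one big transaction for the whole batch
    conn.executemany("INSERT INTO lines VALUES (?, ?, ?)",
                     rows(glob.glob("*.tsv")))

with conn:                                     # the slow part: indexing at the end
    conn.execute("CREATE INDEX IF NOT EXISTS idx_f1 ON lines (f1)")
    conn.execute("CREATE INDEX IF NOT EXISTS idx_f2 ON lines (f2)")
conn.close()
```

Row-by-row loading with the indexes already in place just moves the same index-maintenance cost into every INSERT, which matches what I've been seeing.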