Background

I have many (thousands!) of data files in a standard field-based format (think tab-delimited, with the same fields in every line, in every file). I'm debating various ways of making this data available / searchable. (Some options include an RDBMS, NoSQL stores, grep/awk and friends, etc.)

Proposal

In particular, one idea that appeals to me is "indexing" the files in some way. Since these files are read-only (and static), I was imagining some persistent files containing binary trees (one for each indexed field, just like in other data stores). I'm open to ideas about how to do this, or to hearing that this is simply insane. Mostly, my favorite search engine hasn't turned up any pre-rolled solutions for this.

I realize this is a little ill-formed, and solutions are welcome.
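
Roughly the kind of thing I have in mind, as a very rough sketch (the column number, file names, and pickled-dict format are all placeholders; a real version would want a proper on-disk tree rather than a pickle):

    import pickle
    from collections import defaultdict

    def build_index(paths, field_no, index_path):
        """Map each value of column `field_no` to (path, byte offset) pairs."""
        index = defaultdict(list)
        for path in paths:
            with open(path, "rb") as f:
                offset = 0
                for line in f:
                    key = line.rstrip(b"\n").split(b"\t")[field_no]
                    index[key].append((path, offset))
                    offset += len(line)
        with open(index_path, "wb") as out:
            pickle.dump(dict(index), out)

    def lookup(index_path, value):
        """Yield the full lines whose indexed field equals `value`."""
        with open(index_path, "rb") as f:
            index = pickle.load(f)
        for path, offset in index.get(value, []):
            with open(path, "rb") as f:
                f.seek(offset)
                yield f.readline()

    # Hypothetical usage: index column 1 of one file, then look up a value.
    build_index(["2010-03-01-00.tsv"], 1, "field1.idx")
    for line in lookup("field1.idx", b"some-user-id"):
        print(line)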

Additional Details

  • files long, not wide
    • millions of lines per hour, spread over 100 files per hour
    • tab separated, not many columns (~10)
    • fields are short (say < 50 chars per field)
  • queries are on fields, combinations of fields, and can be historical

Drawbacks to various solutions:

(All of these are based on my observations and tests, but I'm open to correction)

BDB

  • has problems with scaling to large file sizes (in my experience, once they're 2GB or so, performance can be terrible)
  • single writer (if it's possible to get around this, I want to see code!)
  • hard to do multiple indexing, that is, indexing on different fields at once (sure you can do this by copying the data over and over).
  • since it only stores strings, there is a serialize / deserialize step
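
For example, the serialize / deserialize step ends up looking something like this with the bsddb3 bindings (a minimal sketch; the file name, key, and record layout are made up):

    import json
    from bsddb3 import db   # Berkeley DB bindings

    index = db.DB()
    index.open("user_field.bdb", None, db.DB_BTREE, db.DB_CREATE)

    # Keys and values must both be byte strings, so every record gets
    # serialized on the way in...
    record = {"file": "2010-03-01-00.tsv", "offset": 123456}
    index.put(b"some-user-id", json.dumps(record).encode("utf-8"))

    # ...and deserialized on the way out.
    raw = index.get(b"some-user-id")
    if raw is not None:
        record = json.loads(raw.decode("utf-8"))

    index.close()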

RDBMSes

Wins:

  • flat table model is excellent for querying, indexing

Losses:

  • In my experience, the problem comes with indexing. From what I've seen (and please correct me if I'm wrong), the RDBMSes I know (SQLite, Postgres) support either batch loading (and then building the indexes at the end is slow) or row-by-row loading (which is slow). Maybe I need more performance tuning.
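
For reference, the batch-load pattern I've been testing looks roughly like this (a sketch only; the table, columns, and file names are invented):

    import csv
    import sqlite3

    conn = sqlite3.connect("logs.db")
    conn.execute("CREATE TABLE IF NOT EXISTS events (ts TEXT, user TEXT, action TEXT)")

    def load_file(path):
        with open(path, newline="") as f:
            rows = csv.reader(f, delimiter="\t")
            with conn:   # one transaction per file
                conn.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)

    for path in ["2010-03-01-00.tsv"]:   # hypothetical file list
        load_file(path)

    # Building the indexes after the bulk load -- this is the step that crawls.
    conn.execute("CREATE INDEX IF NOT EXISTS idx_events_user ON events(user)")
    conn.execute("CREATE INDEX IF NOT EXISTS idx_events_ts ON events(ts)")
    conn.commit()
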
+1  A: 

The physical storage access time will tend to dominate anything you do. When you profile, you'll find that the read() is where you spend most of your time.

To reduce the time spent waiting for I/O, your best bet is compression.

Create a huge ZIP archive of all of your files. One open, fewer reads. You'll spend more CPU time. I/O time, however, will dominate your processing, so reduce I/O time by zipping everything.
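
A rough sketch of the idea using the standard library's zipfile module (the archive name, member layout, and field position are hypothetical):

    import zipfile

    def scan(archive_path, wanted_value, field_no=1):
        """Stream every member of the archive, yielding matching rows."""
        with zipfile.ZipFile(archive_path) as zf:
            for name in zf.namelist():
                with zf.open(name) as member:   # decompressed on the fly
                    for line in member:
                        fields = line.rstrip(b"\n").split(b"\t")
                        if fields[field_no] == wanted_value:
                            yield name, fields

    for name, fields in scan("logs-2010-03.zip", b"some-user-id"):
        print(name, fields)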

S.Lott
I disagree. We don't know the dataset size, we don't know the memory size, we don't know the access patterns, etc.
Bandi-T
@Bandi-T: Been there, done that. Compression yields dramatic improvements for all use cases except the "all we do is insert and never read the file again" scenario. For the historical archive (write-once-read-never) use case, there's no performance optimization necessary.
S.Lott
@S.Lott: I still disagree. Compression might reduce your datasize (your I/O load) 1:10 - proper indexing might reduce it 1:1000.
Bandi-T
@Bandi-T: That depends on the selectivity of the index. An index that isn't very selective ('Male' vs. 'Female') may only reduce the volume of rows by about 1/2. Index cleverness can be undone by query dumbness. Compression will often do more for text files and cannot be undone by a dumb query.
S.Lott
@S.Lott: Exactly. Do you know whether the OP has text files? Do you know how selective his column values are? Which leads us back to the first question (to the OP): "What do you have and what do you want to do?"
Bandi-T
@Bandi-T: The question sure sounds like text files. Since I don't know how selective the column indexes could be, I'm suggesting something that always works for text files: compression. If we had more data from the OP, then perhaps we could suggest something smarter with better performance improvements. Lacking details, I suggested something that always works.
S.Lott
Gregg Lind
+4  A: 

Why reinvent the wheel? By all means, index the files, but use Whoosh, or Lucene, etc.

Edit: you hadn't stated your performance requirements at the time I posted this answer. You're not going to be able to index "millions of rows per hour" with off-the-shelf software.
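
To give a flavor of what a field-oriented index looks like in Whoosh (a minimal sketch; the schema, field names, and index directory are invented for illustration):

    import os
    from whoosh.fields import Schema, ID
    from whoosh.index import create_in
    from whoosh.query import And, Term

    # One document per input line; every column becomes a field.
    schema = Schema(user=ID(stored=True), action=ID(stored=True),
                    source=ID(stored=True))
    os.mkdir("indexdir")
    ix = create_in("indexdir", schema)

    writer = ix.writer()
    writer.add_document(user=u"some-user-id", action=u"login",
                        source=u"2010-03-01-00.tsv")
    writer.commit()

    # Query on a combination of fields.
    with ix.searcher() as searcher:
        for hit in searcher.search(And([Term("user", u"some-user-id"),
                                        Term("action", u"login")])):
            print(hit["source"])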

Jonathan Feinberg
I did not see any indication that the OP was looking for full-text search.
Bandi-T
My answer refers the OP to two index frameworks that are excellent for field-oriented and faceted search, having no particular bias toward unstructured text.
Jonathan Feinberg
I'm not at all looking for full text search (and for that I tend to prefer Sphinx :)
Gregg Lind
Which is why my answer points to two frameworks that are excellently suited to your purposes.
Jonathan Feinberg
Whoosh doesn't advertise itself as a tabular data indexer, but it does seem to work that way. I wonder what advantages it would have over something like MongoDB. I'll do some testing!
Gregg Lind
I realize he isn't looking for a full-text indexer, but Lucandra can easily index "millions of rows per hour" on even modest hardware. Indexing simple structured data with an informed indexer is usually a lot faster than full-text indexing, so his performance goals should be relatively easy to meet as long as he stays away from relational databases.
Nick Bastin
+1  A: 

If the data is already organized in fields, it doesn't sound like a text searching/indexing problem. It sounds like tabular data that would be well-served by a database.

Script the file data into a database, index as you see fit, and query the data in any complex way the database supports.

That is unless you're looking for a cool learning project. Then, by all means, come up with an interesting file indexing scheme.
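
Once the data is loaded, multi-field and historical queries collapse into single SQL statements, roughly like this (a sketch with made-up table and column names):

    import sqlite3

    conn = sqlite3.connect("logs.db")
    rows = conn.execute(
        "SELECT ts, user, action FROM events "
        "WHERE user = ? AND action = ? AND ts BETWEEN ? AND ? "
        "ORDER BY ts",
        ("some-user-id", "login", "2010-02-01", "2010-03-01"))
    for ts, user, action in rows:
        print(ts, user, action)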

Corbin March
It does indeed sound like tabular data that is well served by a database. In my experience, RDBMSes have been too slow to load the data. I will investigate more, though.
Gregg Lind
+1  A: 

There are a number of file-based DBMSes designed for the sort of data you describe. One of the most widely used is Berkeley DB. It's open source, now owned by Oracle, and comes in C and Java flavors (with Python and other language wrappers for the C version).

Since your data is read-only, though, there are other options. One that comes to mind is CDB. It likewise has Python interfaces, as well as a pure Python reimplementation.

Or, if you really like implementing things yourself, you could always implement SSTables from Google's Bigtable paper. ;)
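
If you do roll your own, the core of an SSTable-style reader is small: records sorted by key in one immutable file, plus a sparse in-memory index that you binary-search to find where to start scanning. A rough sketch (the record format and sampling interval are arbitrary choices here):

    import bisect

    def write_sstable(path, sorted_items, every=64):
        """sorted_items: (key, value) byte-string pairs, already sorted by key."""
        sparse = []                      # every Nth key -> byte offset
        with open(path, "wb") as f:
            for i, (key, value) in enumerate(sorted_items):
                if i % every == 0:
                    sparse.append((key, f.tell()))
                f.write(key + b"\t" + value + b"\n")
        return sparse

    def get(path, sparse, key):
        """Binary-search the sparse index, then scan forward in the file."""
        keys = [k for k, _ in sparse]
        pos = bisect.bisect_right(keys, key) - 1
        if pos < 0:
            return None
        with open(path, "rb") as f:
            f.seek(sparse[pos][1])
            for line in f:
                k, _, v = line.rstrip(b"\n").partition(b"\t")
                if k == key:
                    return v
                if k > key:              # sorted, so we've gone past it
                    return None
        return None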

Nick Johnson
CDB is news to me! After investigating, it looks like a neat tool, but it won't really help me here, since the Python bindings seem to require loading the whole thing.
Gregg Lind
Only the pure-Python version does that. The C wrapper uses the regular interface, which works directly from disk.
Nick Johnson