I need a fast, reliable and memory-efficient key--value database for Linux. My keys are about 128 bytes, and the maximum value size can be 128K or 256K. The database subsystem shouldn't use more than about 1 MB of RAM. The total database size is 20 GB (!), but only a small random fraction of the data is accessed at a time. If necessary, I can move some data blobs out of the database (to regular files), so the size gets down to 2 GB maximum. The database must survive a system crash without losing any data that was not modified recently (losing only the most recent writes is acceptable). I'll have about 100 times more reads than writes. It is a plus if it can use a block device (without a filesystem) as storage. I don't need client-server functionality, just a library. I need Python bindings (but I can implement them if they are not available).

Which solutions should I consider, and which one do you recommend?

Candidates I know of that could work (a small evaluation sketch follows this list):

  • Tokyo Cabinet (Python bindings are pytc, see also pytc example code, supports hashes and B+trees, transaction log files and more, the size of the bucket array is fixed at database creation time; the writer must close the file to give others a chance; lots of small writes with reopening the file for each of them are very slow; the Tyrant server can help with the lots of small writes; speed comparison between Tokyo Cabinet, Tokyo Tyrant and Berkeley DB)
  • VSDB (safe even on NFS, without locking; what about barriers?; updates are very slow, but not as slow as in cdb; last version in 2003)
  • BerkeleyDB (provides crash recovery; provides transactions; the bsddb Python module provides bindings)
  • Samba's TDB (with transactions and Python bindings, some users experienced corruption, sometimes mmap()s the whole file, the repack operation sometimes doubles the file size, produces mysterious failures if the database is larger than 2G (even on 64-bit systems), cluster implementation (CTDB) also available; file grows too large after lots of modifications; file becomes too slow after lots of hash contention; no built-in way to rebuild the file; very fast parallel updates by locking individual hash buckets)
  • hamsterdb (with Python bindings)
  • C-tree (mature, versatile commercial solution with high performance, has a free edition with reduced functionality)
  • the old TDB (from 2001)
  • various other DBM implementations (such as GDBM, NDBM, QDBM, Perl's SDBM or Ruby's DBM; they probably don't have proper crash recovery)
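
Most of the candidates above expose a roughly dict-like API through their Python bindings, so one cheap way to compare them is to code against a thin wrapper and swap the backend underneath. A minimal sketch (the KVStore class and the anydbm backend used in the demo are illustrative, not taken from any particular library above):

    # Illustrative only: a thin wrapper so different key-value backends can be
    # swapped while benchmarking; none of these names come from the libraries
    # listed above.
    class KVStore(object):
        """Wraps any backend exposing dict-style item access and close()."""

        def __init__(self, backend):
            self._db = backend

        def get(self, key, default=None):
            try:
                return self._db[key]
            except KeyError:
                return default

        def put(self, key, value):
            self._db[key] = value

        def close(self):
            self._db.close()

    # Example backend: the stdlib anydbm module (Python 2); 'c' creates the
    # file if it does not exist yet.
    if __name__ == '__main__':
        import anydbm
        store = KVStore(anydbm.open('candidates-test.db', 'c'))
        store.put('key-of-about-128-bytes', 'value of up to 128K or 256K')
        print(store.get('key-of-about-128-bytes'))
        store.close()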

I won't use these:

  • MemcacheDB (client-server, uses BerkeleyDB as a backend)
  • cdb (needs to regenerate the whole database upon each write)
  • http://www.wildsparx.com/apbcdb/ (ditto)
  • Redis (keeps the whole database in memory)
  • SQLite (it becomes very slow without periodic vacuuming, see the autocompletion in the location bar in Firefox 3.0; beware: small writing transactions can be very slow; beware: if a busy process is doing many transactions, other processes starve, and they can never get the lock)
  • MongoDB (too heavy-weight, treats values as objects with internal structure)
  • Firebird (SQL-based RDBMS, too heavy-weight)

FYI, a recent article about key--value databases in the Linux magazine.

FYI, an older software list

FYI, a speed comparison of MemcacheDB, Redis and Tokyo Cabinet Tyrant

Related questions on StackOverflow:

A: 

How about SQLite?

Nick Gorbikoff
I'm afraid SQLite would get too slow after lots of writes (without vacuuming).
pts
What would you think of setting up a cron job to do the vacuuming at regular intervals? Also, what would happen if the key-value database later gets upgraded to a more familiar relational database? In short, it would be a pain having to migrate. By using SQLite, you're pretty much on safe ground for the future.
tommieb75
I see you clarified your question - in this case, yes, SQLite won't be your best solution. Just a recommendation - keep in mind that if your app is cross-platform, Windows has a hard 2 GB limit on file size. The reason I mention this is that it came up in a different question. If the DB of your choice stores everything in one file and that file gets too big, it will crash on Windows, similar to how Thunderbird or Outlook crash when their storage gets too big.
Nick Gorbikoff
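
For reference, a minimal sketch of the approach discussed in this thread: a two-column key-value table accessed through the standard sqlite3 module, plus an explicit VACUUM that a cron-driven script could trigger periodically (the file name and schema are made up for the example):

    import sqlite3

    conn = sqlite3.connect('kv.sqlite')
    conn.execute('CREATE TABLE IF NOT EXISTS kv (key BLOB PRIMARY KEY, value BLOB)')

    def put(key, value):
        # The connection used as a context manager commits (or rolls back)
        # the enclosed statements as one transaction.
        with conn:
            conn.execute('INSERT OR REPLACE INTO kv (key, value) VALUES (?, ?)',
                         (key, value))

    def get(key):
        row = conn.execute('SELECT value FROM kv WHERE key = ?', (key,)).fetchone()
        return row[0] if row else None

    def maintenance():
        # The periodic vacuuming discussed above; a cron job could call this.
        # VACUUM must run outside any open transaction.
        conn.commit()
        conn.execute('VACUUM')
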
A: 

I've used bsddb.hashopen() with Python; it worked pretty well.

Wim
Thanks for mentioning the bsddb Python module. It uses BerkeleyDB.
pts
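
For completeness, a minimal sketch of the dict-style usage referred to in this answer, assuming Python 2.x, where the bsddb module ships in the standard library (the file name is illustrative):

    import bsddb

    # Open (or create, with 'c') a BerkeleyDB hash database file.
    db = bsddb.hashopen('store.db', 'c')

    db['some-key'] = 'some value'   # keys and values are byte strings
    print(db['some-key'])
    db.sync()                       # flush pending changes to disk
    db.close()
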
+1  A: 

You might like djb's cdb, which has the properties you mention.

Jonathan Feinberg
As far as I understand, I have to rebuild the cdb for each write. That would be too slow.
pts
Reading isn't interrupted during write, making this a non-issue in the real world; see http://cr.yp.to/cdb/cdbmake.html . Could you explain why you believe performance would be unacceptable?
esm
esm: did you read the question? The database size is 20 GiB, and there are 100 reads per write, so there are writes. Now, do you consider it acceptable to move 20 GiB to and fro per write?
ΤΖΩΤΖΙΟΥ
+1  A: 

How about Python 3.0's dbm.ndbm?

dbm.ndbm uses NDBM, which I've already mentioned in the question. Could you please provide reasons why I should use NDBM?
pts
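
For reference, dbm.ndbm uses the standard dbm-family interface; a minimal sketch, assuming Python 3 with the underlying ndbm library available (the file name is illustrative):

    import dbm.ndbm

    # 'c' opens the database for reading and writing, creating it if needed.
    db = dbm.ndbm.open('store', 'c')
    db[b'some-key'] = b'some value'   # keys and values must be bytes
    print(db[b'some-key'])
    db.close()
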
+2  A: 

I've had good luck with the Tokyo Cabinet/pytc solution. It's very fast (a bit faster than the shelve module backed by anydbm in my implementation), both for reading and writing (though I too do far more reading). The problem for me was the spartan documentation of the Python bindings, but there's enough example code around to figure out how to do what you need. Additionally, Tokyo Cabinet is quite easy to install (as are the Python bindings), doesn't require a server (as you mention) and seems to be actively supported. You can open files in read-only mode, allowing concurrent access, or in read/write mode, preventing other processes from accessing the database.

I was looking at various options over the summer, and the advice I got then was this: try out the different options and see what works best for you. It'd be nice if there were simply a "best" option, but everyone is looking for slightly different features and is willing to make different trade-offs. You know best.

(That said, it'd be useful to others if you shared what ended up working the best for you, and why you chose that solution over others!)

Noah
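
A minimal sketch of the read/write and read-only modes described above, assuming the pytc hash-database API (an HDB object with open/put/get and the HDBOWRITER/HDBOCREAT/HDBOREADER flags, as in the pytc example code mentioned in the question); the file name is illustrative:

    import pytc

    # Writer: open (or create) a Tokyo Cabinet hash database in read/write
    # mode, which locks out other writers while it is open.
    db = pytc.HDB()
    db.open('store.tch', pytc.HDBOWRITER | pytc.HDBOCREAT)
    db.put('some-key', 'a value of up to a few hundred KB')
    db.close()

    # Readers: open the same file read-only; several readers can do this
    # concurrently once the writer has closed the file.
    ro = pytc.HDB()
    ro.open('store.tch', pytc.HDBOREADER)
    print(ro.get('some-key'))
    ro.close()
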
+1  A: 

Another suggestion is TDB (part of the Samba project). I've used it through the tdb module; however, I can't say I've tested its reliability on crashes. The projects I used it in didn't have such requirements, and I can't find relevant documentation.

ΤΖΩΤΖΙΟΥ
A: 

In my query for a cross-platform ISAM-style database (similar), I also received suggestions for the embedded version of Firebird and GLib.

Andy Dent
Does it have a key--value API or only SQL?
pts
+1  A: 

cdb can handle any database up to 4 GB, making it too small for the 20 GB matter at hand.

A: 

Riak runs on Linux and allows you to add nodes dynamically.

Zubair