I need to load over 1 billion keys into Berkeley DB and therefore want to tune it in advance for better performance. With the standard configuration it currently takes about 15 minutes to load 1'000'000 keys, which is too slow. Is there a proper way to tune, for example, the B+Tree of Berkeley DB (node size, etc.)?

(As a comparison, after tuning, Tokyo Cabinet loads 1 billion keys in 25 minutes.)

P.S. I'm looking for tuning tips in code, not parameters to set for a running system (like JVM size, etc.).

A: 

I'm curious: when TokyoCabinet loads 1B keys in 25 minutes, what are the sizes of the keys/values being stored? What are the I/O system and the storage system you're using? Are you using the term "load" to mean 1B transactional commits to permanent, stable storage? That would be ~666,666 inserts/second, which is physically impossible given any I/O system I'm aware of. Multiply that number by the key and value size and you're hopelessly beyond physical limits.

Please take a look at Gustavo Duarte's blog, read a bit about I/O systems and how things work in hardware, and then review your statement. I'm very interested in finding out what exactly TokyoCabinet is doing and what it isn't doing. If I had to guess, I'd say that it's committing to the file-system cache in the operating system but not flushing (fdsync()-ing) those buffers to disk.

Full Disclosure: I'm a product manager at Oracle for Oracle Berkeley DB (a direct competitor of TokyoCabinet). I've been playing with these databases, and the best hardware around for them, for about ten years, so I'm both biased and skeptical.

Berkeley DB has flags you can set on the transaction handle that mimic this and other similar methods of trading off durability (the "D" in ACID) for speed.
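
For illustration, here is a minimal sketch of that durability trade-off using the BDB-JE (com.sleepycat.je) API; the environment path and the specific Durability choices are assumptions made for the example, not settings recommended in this answer.

    import com.sleepycat.je.Durability;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;
    import com.sleepycat.je.Transaction;
    import com.sleepycat.je.TransactionConfig;

    import java.io.File;

    public class DurabilitySketch {
        public static void main(String[] args) throws Exception {
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            envConfig.setTransactional(true);
            // Environment-wide default: commits are written to the OS but not fsync'd,
            // trading durability for commit throughput.
            envConfig.setDurability(Durability.COMMIT_WRITE_NO_SYNC);

            File envHome = new File("/tmp/je-env");   // illustrative path
            envHome.mkdirs();
            Environment env = new Environment(envHome, envConfig);

            // The same trade-off can also be made per transaction handle.
            TransactionConfig txnConfig = new TransactionConfig();
            txnConfig.setDurability(Durability.COMMIT_NO_SYNC);   // buffer only; no write or sync at commit
            Transaction txn = env.beginTransaction(null, txnConfig);
            // ... put() calls against a Database opened in this environment ...
            txn.commit();

            env.close();
        }
    }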

As far as how to make Berkeley DB Java Edition (BDB-JE) faster, you can try the following:

  • Deferred writes: this delays writing to the transaction log for as long as possible (the data is flushed when the buffers fill up)
  • Sort your keys in advance: most B-Trees (ours included) do much better with in-order insertions, which makes for fast load times
  • Increase the size of the log files from the default of 10MiB to something larger, like 100MiB; this reduces I/O cost (a rough sketch combining all three suggestions follows this list)
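
Putting those three suggestions together, a BDB-JE bulk load might look roughly like the sketch below; the key source, empty value payload, environment path, and the exact 100MiB figure are illustrative assumptions.

    import com.sleepycat.je.Database;
    import com.sleepycat.je.DatabaseConfig;
    import com.sleepycat.je.DatabaseEntry;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;

    import java.io.File;
    import java.nio.ByteBuffer;
    import java.util.Arrays;
    import java.util.Random;

    public class BulkLoadSketch {
        public static void main(String[] args) throws Exception {
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            // Larger log files: raise the 10MiB default to roughly 100MiB.
            envConfig.setConfigParam("je.log.fileMax", String.valueOf(100L * 1024 * 1024));

            File envHome = new File("/tmp/je-bulk");   // illustrative path
            envHome.mkdirs();
            Environment env = new Environment(envHome, envConfig);

            DatabaseConfig dbConfig = new DatabaseConfig();
            dbConfig.setAllowCreate(true);
            // Deferred writes: log writes are delayed as long as possible;
            // Database.sync() below makes the loaded data durable at the end.
            dbConfig.setDeferredWrite(true);
            Database db = env.openDatabase(null, "bulk", dbConfig);

            // Sorted keys: collect the keys first, sort them, then insert in order
            // so the B-Tree sees in-order insertions.
            long[] keys = new long[1_000_000];         // stand-in for the real key source
            Random rnd = new Random(42);
            for (int i = 0; i < keys.length; i++) {
                keys[i] = rnd.nextLong() >>> 1;        // non-negative, so numeric and byte order agree
            }
            Arrays.sort(keys);

            DatabaseEntry value = new DatabaseEntry(new byte[0]);   // empty payload for the example
            for (long k : keys) {
                byte[] keyBytes = ByteBuffer.allocate(Long.BYTES).putLong(k).array();
                db.put(null, new DatabaseEntry(keyBytes), value);
            }

            db.sync();   // flush the deferred writes to the log
            db.close();
            env.close();
        }
    }

Note that a deferred-write database is not transactional, so a crash mid-load can lose unsynced data; for a one-shot bulk load that is usually an acceptable trade.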

It's very important to be clear about claims of performance with databases. They seem simple, but it turns out to be very tricky to get them right so that they never corrupt data or lose committed transactions.

I hope this helps you a bit.

Gregory Burd