I am looking to use Berkeley DB to create a simple key-value storage system. The keys will be SHA-1 hashes, so they live in a 160-bit address space. I have a simple server working; that was easy enough thanks to the fairly well-written documentation on the Berkeley DB website. However, I have some questions about how best to set up such a system to get good performance and flexibility. Hopefully someone with more Berkeley DB experience can help me.

The simplest setup is a single process, with a single thread, handling a single DB; inserts and gets are performed on this one DB, using transactions.
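
To make that concrete, here is a minimal sketch of what I mean by the simplest setup, using the Berkeley DB C API; the environment directory "./env" and the file name "store.db" are just placeholders, and error handling is reduced to returning the first failure:

    #include <string.h>
    #include <db.h>

    /* Minimal single-thread, single-DB sketch: open a transactional
     * environment and one B-tree database, then do one transactional put
     * and one get. "./env" and "store.db" are placeholder names. */
    int put_get_example(void)
    {
        DB_ENV *env;
        DB *db;
        DB_TXN *txn;
        DBT key, val;
        unsigned char sha1[20] = {0};   /* 160-bit key (dummy value here) */
        char payload[] = "hello";
        int ret;

        if ((ret = db_env_create(&env, 0)) != 0)
            return ret;
        if ((ret = env->open(env, "./env",
                             DB_CREATE | DB_INIT_MPOOL | DB_INIT_LOCK |
                             DB_INIT_LOG | DB_INIT_TXN, 0)) != 0)
            return ret;

        if ((ret = db_create(&db, env, 0)) != 0)
            return ret;
        if ((ret = db->open(db, NULL, "store.db", NULL, DB_BTREE,
                            DB_CREATE | DB_AUTO_COMMIT, 0)) != 0)
            return ret;

        /* Insert one record inside an explicit transaction. */
        memset(&key, 0, sizeof(key));
        memset(&val, 0, sizeof(val));
        key.data = sha1;     key.size = sizeof(sha1);
        val.data = payload;  val.size = sizeof(payload);

        if ((ret = env->txn_begin(env, NULL, &txn, 0)) != 0)
            return ret;
        if ((ret = db->put(db, txn, &key, &val, 0)) != 0) {
            txn->abort(txn);
            return ret;
        }
        if ((ret = txn->commit(txn, 0)) != 0)
            return ret;

        /* Read it back (NULL txn handle = non-transactional read). */
        memset(&val, 0, sizeof(val));
        ret = db->get(db, NULL, &key, &val, 0);

        db->close(db, 0);
        env->close(env, 0);
        return ret;
    }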

Alternative 1: single process, multiple threads, single DB; inserts and gets are performed on this DB, by all the threads in the process.

  • Does using multiple threads provide much of a performance improvement? There is a single DB, so it sits on one disk, and I am guessing I won't get much of a boost. But if Berkeley DB caches a lot in memory, then perhaps one thread can answer from the cache while another is blocked waiting on disk? I am using GNU Pth, a user-level cooperative threading library. I am not familiar with the details of Pth, so I am also not sure whether Pth lets one user-level thread run while another user-level thread is blocked. (A sketch of what the shared handles would look like follows below.)
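
To be concrete about Alternative 1, the sketch below opens one environment and one DB handle with DB_THREAD and lets each worker thread run its own transactions against them. It is written against ordinary preemptive threads for illustration rather than GNU Pth, and g_env/g_db, "./env", and "store.db" are placeholder names:

    #include <string.h>
    #include <db.h>

    /* Shared handles for Alternative 1: the environment and DB are opened
     * once with DB_THREAD and then used by all worker threads. g_env/g_db,
     * "./env" and "store.db" are placeholder names. */
    DB_ENV *g_env;
    DB *g_db;

    int open_shared_handles(void)
    {
        int ret;

        if ((ret = db_env_create(&g_env, 0)) != 0)
            return ret;
        if ((ret = g_env->open(g_env, "./env",
                               DB_CREATE | DB_INIT_MPOOL | DB_INIT_LOCK |
                               DB_INIT_LOG | DB_INIT_TXN | DB_THREAD, 0)) != 0)
            return ret;

        if ((ret = db_create(&g_db, g_env, 0)) != 0)
            return ret;
        return g_db->open(g_db, NULL, "store.db", NULL, DB_BTREE,
                          DB_CREATE | DB_AUTO_COMMIT | DB_THREAD, 0);
    }

    /* Each thread performs its own transactions against the shared handles. */
    int worker_put(const unsigned char sha1[20], void *buf, u_int32_t len)
    {
        DB_TXN *txn;
        DBT key, val;
        int ret;

        memset(&key, 0, sizeof(key));
        memset(&val, 0, sizeof(val));
        key.data = (void *)sha1;  key.size = 20;
        val.data = buf;           val.size = len;

        if ((ret = g_env->txn_begin(g_env, NULL, &txn, 0)) != 0)
            return ret;
        if ((ret = g_db->put(g_db, txn, &key, &val, 0)) != 0) {
            txn->abort(txn);
            return ret;
        }
        return txn->commit(txn, 0);
    }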

Alternative 2: single process, one or multiple threads, multiple DBs where each DB covers a fraction of the 160-bit address space for keys.

  • I see a few advantages in having multiple DBs: we can put them on different disks, there is less contention, and it is easier to move or partition DBs onto different physical hosts if we want to later. Has anyone used this setup and seen significant benefits? (A routing sketch follows below.)
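
To illustrate what I mean by Alternative 2, here is a rough sketch of one environment holding several databases, where the leading byte of the SHA-1 hash picks the database; the shard count, the file names, and the modulo routing rule are all placeholders:

    #include <stdio.h>
    #include <db.h>

    /* Alternative 2 sketch: one environment, N_PARTITIONS databases, each
     * covering a slice of the 160-bit key space. The shard count, the
     * "shard-XX.db" file names and the first-byte routing rule are all
     * placeholders. */
    #define N_PARTITIONS 16

    DB *partitions[N_PARTITIONS];

    int open_partitions(DB_ENV *env)
    {
        char name[32];
        int i, ret;

        for (i = 0; i < N_PARTITIONS; i++) {
            if ((ret = db_create(&partitions[i], env, 0)) != 0)
                return ret;
            /* Each file could live on (or be symlinked to) a different disk. */
            snprintf(name, sizeof(name), "shard-%02d.db", i);
            if ((ret = partitions[i]->open(partitions[i], NULL, name, NULL,
                                           DB_BTREE,
                                           DB_CREATE | DB_AUTO_COMMIT, 0)) != 0)
                return ret;
        }
        return 0;
    }

    /* Route a key to its shard by the leading byte of the SHA-1 hash. */
    DB *shard_for(const unsigned char sha1[20])
    {
        return partitions[sha1[0] % N_PARTITIONS];
    }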

Alternative 3: multiple processes, each with one thread, each handles a DB that covers a fraction of the 160-bit address space for keys.

  • This keeps the advantages of multiple DBs, but uses multiple processes instead of threads. Is this better than the second alternative? I suspect that using processes rather than user-level threads for parallelism will give better SMP cache behavior (fewer invalidations, etc.), but will I get killed by process overhead and context switches? (A sketch of spawning per-slice workers follows below.)
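
And a very rough sketch of the multi-process variant: a parent forks one worker per key-space slice, and each child would open its own environment directory and run the single-DB server loop shown earlier. serve_slice() below is a hypothetical stand-in for that loop, and how requests reach the right worker (sockets, a front-end router, ...) is left out:

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Alternative 3 sketch: fork one worker process per key-space slice.
     * Each child would open its own environment directory ("./env-XX",
     * a placeholder) and serve its slice; serve_slice() is a hypothetical
     * stand-in for that per-slice server loop. */
    #define N_WORKERS 4

    int spawn_workers(void)
    {
        char home[32];
        int i;

        for (i = 0; i < N_WORKERS; i++) {
            pid_t pid = fork();
            if (pid < 0)
                return -1;
            if (pid == 0) {
                snprintf(home, sizeof(home), "./env-%02d", i);
                /* serve_slice(home, i);  hypothetical: open env at `home`
                   and serve this slice, as in the single-DB sketch above */
                _exit(0);
            }
        }
        while (wait(NULL) > 0)   /* parent waits for the workers */
            ;
        return 0;
    }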

I would love to hear from anyone who has tried these options and seen positive or negative results.

Thanks.

A: 

Alternative 2 gives you high scalability: you basically partition your database across multiple servers. If you need a high-performance distributed key/value database, I would suggest looking at membase. I am doing that right now, but we need to run on an appliance and would like to limit the dependencies that membase brings in. You can also use Berkeley DB replication and keep read-only copies on other servers to serve read/get requests.

hackworks