views: 419

answers: 4

I use and love Berkeley DB, but it seems to bog down once you get near a million or so entries, especially on inserts. I've tried memcachedb, which works, but it's no longer maintained, so I'm worried about using it in production. Does anyone have any other similar solutions? Basically, I want to be able to do key lookups on a large (possibly distributed) dataset (40+ million entries).

Note: Anything NOT in Java is a bonus. :-) It seems most things today are going the Java route.
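Since the question mentions a possibly distributed dataset: the usual way stores like Voldemort spread key lookups across machines is consistent hashing, so each key deterministically maps to a node and adding or removing a node only remaps a fraction of the keys. A minimal sketch (node names and replica count are illustrative, not from any particular product):

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent-hash ring: a key maps to the nearest node
    position clockwise, so membership changes only remap a slice of keys."""

    def __init__(self, nodes, replicas=100):
        self.replicas = replicas   # virtual nodes per physical node
        self._keys = []            # sorted hash positions on the ring
        self._ring = {}            # hash position -> node name
        for node in nodes:
            self.add(node)

    def _hash(self, value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def add(self, node):
        for i in range(self.replicas):
            h = self._hash(f"{node}:{i}")
            bisect.insort(self._keys, h)
            self._ring[h] = node

    def node_for(self, key):
        h = self._hash(key)
        idx = bisect.bisect(self._keys, h) % len(self._keys)
        return self._ring[self._keys[idx]]

ring = HashRing(["node-a", "node-b", "node-c"])
print(ring.node_for("user:12345"))  # same key always maps to the same node
```

With virtual nodes, keys stay roughly evenly spread even with a small cluster, which is what makes a 40-million-key dataset practical to partition.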

+1  A: 

Have you tried Project Voldemort?

Avi
Looks like an interesting project.
RichardOD
Hmm, never heard of that. Any idea what they use for the database part? There isn't a whole lot of info on it, so it's hard to judge it against other solutions.
Ryan Detzel
From the configuration page: persistence — the persistence backend used by the store. Currently this can be one of bdb, mysql, memory, readonly, and cache. The difference between cache and memory is that memory will throw an OutOfMemory exception if it grows larger than the JVM heap, whereas cache will discard data.
consultutah
A: 

Did you try the hash backend? That should be faster for inserts and key searches. http://mobiphil.com
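Berkeley DB's hash access method gives up ordered traversal in exchange for faster point lookups, which fits a pure key-lookup workload. Python's bsddb bindings aren't in every install, so here's a sketch of the same hash-style on-disk access pattern using the stdlib dbm module instead (the file path is illustrative):

```python
import dbm
import os
import tempfile

# A hash-backed on-disk key-value store: no ordered scans, but point
# lookups don't pay the cost of maintaining a btree on every insert.
path = os.path.join(tempfile.mkdtemp(), "example.db")

with dbm.open(path, "c") as db:     # "c": create the file if missing
    db[b"user:1"] = b"alice"
    db[b"user:2"] = b"bob"

with dbm.open(path, "r") as db:     # reopen read-only for lookups
    value = db[b"user:1"]

print(value)  # b'alice'
```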

mobi phil
+1  A: 

I would suggest you have a look at:

Metabrew key-value store blog post

There is a big list of key-value stores, with a little discussion of each of them. If you still have doubts, you could join the so-called NoSQL Google group and ask for help there.

Marc
+1  A: 

Redis is insanely fast and actively developed. It is written in C (not Java) and compiles out of the box on POSIX operating systems (no dependencies).
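Part of why Redis is easy to adopt from any language is its simple length-prefixed wire protocol (RESP): every command is an array of bulk strings, each prefixed by its byte length. A small sketch of how a SET command is framed, so you can see there's no Java (or any heavyweight client) required:

```python
def encode_resp(*args):
    """Encode a Redis command in RESP: an array header (*N) followed by
    one length-prefixed bulk string ($len) per argument."""
    parts = [f"*{len(args)}\r\n".encode()]
    for arg in args:
        data = arg if isinstance(arg, bytes) else str(arg).encode()
        parts.append(b"$%d\r\n%s\r\n" % (len(data), data))
    return b"".join(parts)

print(encode_resp("SET", "foo", "bar"))
# b'*3\r\n$3\r\nSET\r\n$3\r\nfoo\r\n$3\r\nbar\r\n'
```

Sending those bytes over a plain TCP socket to a running Redis server is all a minimal client needs to do.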

Alfred