+3  A: 

Hi,

I have never tried it, but what about Redis?
Its homepage says (quoting):

Redis is a key-value database. It is similar to memcached but the dataset is not volatile, and values can be strings, exactly like in memcached, but also lists and sets with atomic operations to push/pop elements.

In order to be very fast but at the same time persistent, the whole dataset is taken in memory, and from time to time and/or when a number of changes to the dataset are performed, it is written asynchronously on disk. You may lose the last few queries, which is acceptable in many applications, but it is as fast as an in-memory DB (Redis supports non-blocking master-slave replication in order to solve this problem by redundancy).

It seems to answer some of the points you talked about, so maybe it could be helpful in your case?

If you try it, I'm pretty interested in what you find out, btw ;-)
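
For illustration, here is a minimal sketch of talking to Redis from Java, assuming the Jedis client library (the host, port, and keys are made up):

    import redis.clients.jedis.Jedis;

    public class RedisSketch {
        public static void main(String[] args) {
            // Connect to a Redis server (host/port are assumptions).
            Jedis jedis = new Jedis("localhost", 6379);

            // Plain key/value, just like memcached...
            jedis.set("user:42:name", "Alice");
            System.out.println(jedis.get("user:42:name"));

            // ...but also lists with atomic push/pop, as the homepage says.
            jedis.lpush("recent:logins", "alice");
            System.out.println(jedis.lpop("recent:logins"));

            jedis.close();
        }
    }

Persistence itself is handled on the server side (Redis periodically snapshots the whole dataset to disk), so the client code stays as simple as a memcached client.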


As a sidenote: if you need to write all this to disk, maybe a cache system is not really what you need... after all, if you are using memcached as a cache, you should be able to re-populate it on demand, whenever necessary. Still, I admit, there might be some performance problems if your whole memcached cluster fails at once...

So maybe some more key/value-store-oriented software could help? Something like CouchDB, for instance?
It will probably not be as fast as memcached, though, as the data is stored on disk rather than in RAM...

Pascal MARTIN
A: 

Have you looked at BerkeleyDB?

  • Fast, embedded, in-process data management.
  • Key/value store, non-relational.
  • Persistent storage.
  • Free, open-source.

However, it fails to meet one of your criteria:

  • BDB supports distributed replication, but the data is not partitioned. Each node stores the full data set.
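
Since it is embedded and in-process, usage looks quite different from memcached anyway; here is a minimal sketch with the Berkeley DB Java Edition API (the environment path, database name, and keys are made up):

    import com.sleepycat.je.*;
    import java.io.File;

    public class BdbSketch {
        public static void main(String[] args) throws Exception {
            // Open (or create) an on-disk environment; storage is persistent.
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            Environment env = new Environment(new File("/tmp/bdb-cache"), envConfig);

            DatabaseConfig dbConfig = new DatabaseConfig();
            dbConfig.setAllowCreate(true);
            Database db = env.openDatabase(null, "cache", dbConfig);

            // Simple key/value put and get, all in-process.
            DatabaseEntry key = new DatabaseEntry("user:42".getBytes("UTF-8"));
            db.put(null, key, new DatabaseEntry("cached value".getBytes("UTF-8")));

            DatabaseEntry result = new DatabaseEntry();
            if (db.get(null, key, result, LockMode.DEFAULT) == OperationStatus.SUCCESS) {
                System.out.println(new String(result.getData(), "UTF-8"));
            }

            db.close();
            env.close();
        }
    }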
Bill Karwin
+1  A: 

Take a look at the Apache Java Caching System (JCS)

JCS is a distributed caching system written in java. It is intended to speed up applications by providing a means to manage cached data of various dynamic natures. Like any caching system, JCS is most useful for high read, low put applications. Latency times drop sharply and bottlenecks move away from the database in an effectively cached system.

JCS goes beyond simply caching objects in memory. It provides numerous additional features:

* Memory management
* Disk overflow (and defragmentation)
* Thread pool controls
* Element grouping
* Minimal dependencies
* Quick nested categorical removal
* Data expiration (idle time and max life)
* Extensible framework
* Fully configurable runtime parameters
* Region data separation and configuration
* Fine grained element configuration options
* Remote synchronization
* Remote store recovery
* Non-blocking "zombie" (balking facade) pattern
* Lateral distribution of elements via HTTP, TCP, or UDP
* UDP Discovery of other caches
* Element event handling
* Remote server chaining (or clustering) and failover
* Custom event logging hooks
* Custom event queue injection
* Custom object serializer injection
* Key pattern matching retrieval
* Network efficient multi-key retrieval
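
Basic usage is straightforward; here is a minimal sketch with the classic org.apache.jcs API (the region name and keys are made up, and a cache.ccf configuration file is assumed to be on the classpath):

    import org.apache.jcs.JCS;
    import org.apache.jcs.access.exception.CacheException;

    public class JcsSketch {
        public static void main(String[] args) throws CacheException {
            // Region behavior (memory, disk overflow, remote) comes from cache.ccf.
            JCS cache = JCS.getInstance("default");

            cache.put("city:1", "Zurich");
            String city = (String) cache.get("city:1");
            System.out.println(city);

            cache.remove("city:1"); // explicit invalidation
        }
    }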
Mads Hansen
A: 

We are using OSCache. I think it meets almost all your needs except periodically saving the cache to disk, but you should be able to create two cache managers (one memory-based and one disk-based) and periodically run a Java cron job that goes through all in-memory cache key/value pairs and puts them into the disk cache (a rough sketch follows below). What's nice about OSCache is that it is very easy to use.
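
Here is that sketch, using java.util.concurrent (the KeyValueCache interface and its methods are hypothetical stand-ins for the two cache managers, not the actual OSCache API):

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class CacheFlusher {
        // Hypothetical wrapper around a cache manager.
        interface KeyValueCache {
            Iterable<String> keys();
            Object get(String key);
            void put(String key, Object value);
        }

        static void schedule(final KeyValueCache memoryCache, final KeyValueCache diskCache) {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(new Runnable() {
                public void run() {
                    // Copy every in-memory entry into the disk-backed cache.
                    for (String key : memoryCache.keys()) {
                        diskCache.put(key, memoryCache.get(key));
                    }
                }
            }, 5, 5, TimeUnit.MINUTES); // the 5-minute interval is an assumption
        }
    }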

serg
+3  A: 

Maybe your problem is like mine: I have only a few machines for memcached, but with lots of memory. Even if one of them fails or needs to be rebooted, it seriously affects the performance of the system. According to the original memcached philosophy I should add a lot more machines with less memory each, but that's not cost-efficient and not exactly "green IT" ;)

For our solution, we built an interface layer for the cache system such that providers for the underlying cache systems can be nested, like you can do with streams, and wrote a cache provider for memcached as well as our own very simple key-value-to-disk storage provider. We then define a weight for cache items that represents how costly it is to rebuild an item if it cannot be retrieved from cache. The nested disk cache is only used for items with a weight above a certain threshold, maybe around 10% of all items.

When storing an object in the cache, we don't lose time, as saving to one or both caches is queued for asynchronous execution anyway. So writing to the disk cache doesn't need to be fast. Same for reads: first we try memcached, and only if the item is not there and it is a "costly" object do we check the disk cache (which is orders of magnitude slower than memcached, but still much better than recalculating 30 GB of data after a single machine went down).

This way we get the best of both worlds, without replacing memcached with anything new.
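
In case it helps to picture it, here is a rough sketch of the idea (all names are made up; the real implementation is more involved):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Providers can be nested, like streams.
    interface CacheProvider {
        Object get(String key);
        void put(String key, Object value, int weight);
    }

    class TieredCacheProvider implements CacheProvider {
        private final CacheProvider fast;     // e.g. a memcached-backed provider
        private final CacheProvider durable;  // e.g. a simple key-value-to-disk provider
        private final int weightThreshold;    // only "costly" items go to disk
        private final ExecutorService writer = Executors.newSingleThreadExecutor();

        TieredCacheProvider(CacheProvider fast, CacheProvider durable, int weightThreshold) {
            this.fast = fast;
            this.durable = durable;
            this.weightThreshold = weightThreshold;
        }

        public Object get(String key) {
            Object value = fast.get(key);                    // memcached first
            return value != null ? value : durable.get(key); // disk only as a fallback
        }

        public void put(final String key, final Object value, final int weight) {
            // Writes are queued for asynchronous execution, so callers don't wait.
            writer.submit(new Runnable() {
                public void run() {
                    fast.put(key, value, weight);
                    if (weight >= weightThreshold) {
                        durable.put(key, value, weight);
                    }
                }
            });
        }
    }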

markus
@markus Great approach to caching! I like your weight idea.
Industrial
+1  A: 

What about Terracotta?

Artyom Sokolov
A: 

Hi, you can use GigaSpaces XAP, which is a mature commercial product that answers your requirements and more. It is the fastest distributed in-memory data grid (cache++), it is fully distributed, and it supports multiple styles of persistence.

Guy Nirpaz, GigaSpaces

gnirpaz
+3  A: 

EhCache has a "disk persistent" mode which dumps the cache contents to disk on shutdown and reinstates the data when it is started back up again. As for your other requirements, when running in distributed mode it replicates the data across all nodes rather than storing it on just one. Other than that, it should fit your needs nicely. It's also still under active development, which many other Java caching frameworks are not.
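
For reference, here is a minimal sketch of the disk-persistent mode with the classic net.sf.ehcache API (the cache name and sizes are made up, and an ehcache.xml with a diskStore path is assumed to be on the classpath):

    import net.sf.ehcache.Cache;
    import net.sf.ehcache.CacheManager;
    import net.sf.ehcache.Element;

    public class EhcacheSketch {
        public static void main(String[] args) {
            CacheManager manager = CacheManager.create();

            // name, maxElementsInMemory, overflowToDisk, eternal, TTL, TTI,
            // diskPersistent, diskExpiryThreadIntervalSeconds
            Cache cache = new Cache("persistent", 10000, true, true, 0, 0, true, 120);
            manager.addCache(cache);

            cache.put(new Element("user:42", "Alice"));
            Element hit = cache.get("user:42");
            System.out.println(hit != null ? hit.getValue() : "miss");

            // Shutdown flushes disk-persistent caches so they survive a restart.
            manager.shutdown();
        }
    }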

skaffman
I used EhCache to build a set of persistent collections for Java and it works great.
Jonathan Barbero
+1  A: 

In my experience, it is best to write an intermediate layer between the application and the backend storage. This way you can pair up memcached instances with, for example, sharedanced (basically the same kind of key-value store, but disk-based). The most basic way to do this is: always read from memcached and fail back to sharedanced, and always write to both sharedanced and memcached.
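
A sketch of that read/write pattern (the KeyValueStore interface is a hypothetical stand-in for whatever memcached and sharedance client libraries you use):

    // Hypothetical client interface; substitute your real clients.
    interface KeyValueStore {
        Object get(String key);
        void set(String key, Object value);
    }

    class LayeredStore implements KeyValueStore {
        private final KeyValueStore memcached;   // fast, volatile
        private final KeyValueStore sharedance;  // slower, disk-backed

        LayeredStore(KeyValueStore memcached, KeyValueStore sharedance) {
            this.memcached = memcached;
            this.sharedance = sharedance;
        }

        public Object get(String key) {
            Object value = memcached.get(key);   // always read from memcached first
            if (value == null) {
                value = sharedance.get(key);     // fail back to the disk-backed store
                if (value != null) {
                    memcached.set(key, value);   // repopulate the fast tier
                }
            }
            return value;
        }

        public void set(String key, Object value) {
            sharedance.set(key, value);          // always write to both stores
            memcached.set(key, value);
        }
    }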

You can scale writes by sharding between multiple sharedance instances. You can scale reads N-fold by using a solution like repcached (replicated memcached).
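
Sharding between instances can be as simple as hashing the key to pick a server (a sketch, reusing the hypothetical KeyValueStore interface from above):

    import java.util.List;

    class ShardedStore {
        private final List<KeyValueStore> shards; // one client per sharedance instance

        ShardedStore(List<KeyValueStore> shards) {
            this.shards = shards;
        }

        private KeyValueStore shardFor(String key) {
            // Mask the sign bit so the index is never negative.
            int index = (key.hashCode() & 0x7fffffff) % shards.size();
            return shards.get(index);
        }

        Object get(String key)             { return shardFor(key).get(key); }
        void set(String key, Object value) { shardFor(key).set(key, value); }
    }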

If this is not trivial for you, you can still use sharedanced as a basic replacement for memcached. It is fast, and most of the filesystem calls are eventually cached; using memcached in combination with sharedance just avoids reading from sharedanced until some data expires in memcached. A restart of the memcached servers would cause all clients to read from the sharedance instance at least once, which is not really a problem unless you have extremely high concurrency for the same keys and clients contend for the same key.

There are certain issues if you are dealing with a severely high-traffic environment. One is the choice of filesystem (ReiserFS performs 5-10x better than ext3 because of some internal caching of the fs tree). Another is that sharedance does not have UDP support (TCP keepalive is quite an overhead if you use sharedance alone; memcached has UDP thanks to the Facebook team). And scaling is usually done in your application (by sharding data across multiple instances of sharedance servers).

If you can leverage these factors, then this might be a good solution for you. In our current setup, a single sharedanced/memcached server can scale up to about 10 million pageviews a day, but this is application-dependent. We don't use caching for everything (the way Facebook does), so results may vary for your application.

Tit Petric