views:

597

answers:

2

What happens to distributed in-memory cloud databases such as

  1. Hazelcast
  2. Scalaris

if there is more data to store than RAM available in the cluster?

Do they swap? What happens when the swap space is full? I can't find a disaster recovery strategy for either database. Is all data lost once memory is full?

Is there a way to write data to disk when memory runs low? Are there other databases that offer the same functionality as Hazelcast or Scalaris but with backup features / disk storage / disaster recovery?

+1  A: 

According to both the Hazelcast and Scalaris teams, storing more data than the available RAM isn't supported.

The Hazelcast team plans to add a flat-file store in the near future.
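In the meantime, the general idea of spilling map entries to disk can be sketched in plain Java. Hazelcast does expose a MapStore interface for plugging persistence into a map, but the class below is an illustrative stand-alone sketch (the name `FileBackedStore` and the one-file-per-key layout are assumptions, not Hazelcast's API):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative write-through disk store: each key becomes a file in a
// directory, so entries survive even when they no longer fit in memory.
public class FileBackedStore {
    private final Path dir;

    public FileBackedStore(Path dir) throws IOException {
        // Create the storage directory if it does not exist yet.
        this.dir = Files.createDirectories(dir);
    }

    // Persist one entry as a file named after its key.
    public void store(String key, String value) throws IOException {
        Files.write(dir.resolve(key), value.getBytes(StandardCharsets.UTF_8));
    }

    // Read an entry back, or return null if it was never stored.
    public String load(String key) throws IOException {
        Path p = dir.resolve(key);
        if (!Files.exists(p)) {
            return null;
        }
        return new String(Files.readAllBytes(p), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("store-demo");
        FileBackedStore store = new FileBackedStore(tmp);
        store.store("user:1", "alice");
        System.out.println(store.load("user:1")); // prints "alice"
    }
}
```

A real backend would add batching, deletion, and crash-safe writes; the point is only that a write-through hook like this is what turns an in-memory map into one with disk overflow and recovery.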

Martin K.
+2  A: 

I don't know what the state of affairs was when Martin K.'s accepted answer was published, but the Scalaris FAQ now claims that this is supported:

Can I store more data in Scalaris than the RAM + swap space available in the cluster?

Yes. We have several database backends, e.g. src/db_ets.erl (ets) and src/db_tcerl (tokyocabinet). The former uses main memory for storing data, while the latter uses tokyocabinet for storing data on disk. With tokyocabinet, only your local disks should limit the total size of your database. Note, however, that this still does not provide persistence.

For instructions on switching the database backend to tokyocabinet, see Tokyocabinet.

ykaganovich