views: 597

answers: 1

I'm working with very long time series -- hundreds of millions of data points in a single series -- and am considering Cassandra as a data store. In this question, one of the Cassandra committers (the über-helpful jbellis) says that Cassandra rows can be very large, and that column slices are faster than row slices, hence my question: is row size still limited by available memory?
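For concreteness, here is the sort of layout I have in mind: each series is one wide row keyed by series id, each data point a column keyed by timestamp, so reading a time window is a single column slice. A minimal sketch with the pycassa Thrift client (the 'TS' keyspace, 'TimeSeries' column family, and sensor names are placeholders of mine):

    import time

    import pycassa

    # One row per series: column names are timestamps, values are readings.
    # Assumes a keyspace 'TS' and a standard column family 'TimeSeries'
    # whose columns compare as LongType, so timestamps sort chronologically.
    pool = pycassa.ConnectionPool('TS', ['localhost:9160'])
    series = pycassa.ColumnFamily(pool, 'TimeSeries')

    now = int(time.time() * 1000)
    series.insert('sensor-42', {now: '21.5'})  # append one data point

    # A time window is a single column slice; the rest of the row is untouched.
    window = series.get('sensor-42',
                        column_start=now - 3600 * 1000,  # last hour
                        column_finish=now,
                        column_count=10000)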

+2  A: 

Yes, row size is still limited by available memory. This is because today's compaction algorithm deserializes the entire row in memory before writing out the compacted SSTable.

A fix is currently targeted for the 0.7 release; see CASSANDRA-16 for progress.
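Until then, the usual workaround is to bound row size yourself by sharding a long series into fixed time buckets, one row per series per bucket. A rough sketch, again with pycassa (the bucket size and key scheme here are illustrative, not anything Cassandra mandates):

    import pycassa

    BUCKET_MS = 24 * 3600 * 1000  # one row per series per day -- tune to taste

    pool = pycassa.ConnectionPool('TS', ['localhost:9160'])
    series = pycassa.ColumnFamily(pool, 'TimeSeries')

    def row_key(series_id, ts_ms):
        # e.g. 'sensor-42:14285' -- series id plus bucket (day) number
        return '%s:%d' % (series_id, ts_ms // BUCKET_MS)

    def append(series_id, ts_ms, value):
        series.insert(row_key(series_id, ts_ms), {ts_ms: value})

    def read_window(series_id, start_ms, end_ms):
        # Slice each bucket row the window spans; every individual row
        # stays small enough for compaction to deserialize comfortably.
        points = []
        for bucket in range(start_ms // BUCKET_MS, end_ms // BUCKET_MS + 1):
            key = '%s:%d' % (series_id, bucket)
            try:
                cols = series.get(key, column_start=start_ms,
                                  column_finish=end_ms, column_count=10000)
                points.extend(cols.items())
            except pycassa.NotFoundException:
                pass  # empty bucket
        return points

With daily buckets, a series sampled once per second caps out at 86,400 columns per row, well clear of any memory ceiling.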

Another interesting link: CassandraLimitations

Schildmeijer