NoSQL refers to non-relational data stores that break with the history of relational databases and ACID guarantees. Popular open source NoSQL data stores include:

  • Cassandra (tabular, written in Java, used by Cisco WebEx, Digg, Facebook, IBM, Mahalo, Rackspace, Reddit and Twitter)
  • CouchDB (document, written in Erlang, used by BBC and Engine Yard)
  • Dynomite (key-value, written in Erlang, used by Powerset)
  • HBase (key-value, written in Java, used by Bing)
  • Hypertable (tabular, written in C++, used by Baidu)
  • Kai (key-value, written in Erlang)
  • MemcacheDB (key-value, written in C, used by Reddit)
  • MongoDB (document, written in C++, used by Electronic Arts, Github, NY Times and Sourceforge)
  • Neo4j (graph, written in Java, used by some Swedish universities)
  • Project Voldemort (key-value, written in Java, used by LinkedIn)
  • Redis (key-value, written in C, used by Craigslist, Engine Yard and Github)
  • Riak (key-value, written in Erlang, used by Comcast and Mochi Media)
  • Ringo (key-value, written in Erlang, used by Nokia)
  • Scalaris (key-value, written in Erlang, used by OnScale)
  • Terrastore (document, written in Java)
  • ThruDB (document, written in C++, used by JunkDepot.com)
  • Tokyo Cabinet/Tokyo Tyrant (key-value, written in C, used by Mixi.jp (Japanese social networking site))

I'd like to know about specific problems you - the SO reader - have solved using data stores and what NoSQL data store you used.

Questions:

  • What scalability problems have you used NoSQL data stores to solve?
  • What NoSQL data store did you use?
  • What database did you use prior to switching to a NoSQL data store?

I'm looking for first-hand experiences, so please do not answer unless you have them.

+1  A: 

I don't. I would like to use a simple, free key-value store that I can call in-process, but as far as I know no such thing exists on the Windows platform. At the moment I use SQLite, but I would like to use something like Tokyo Cabinet. BerkeleyDB has license "issues".

However, if you want to use the Windows OS, your choice of NoSQL databases is limited, and there isn't always a C# provider.

I did try MongoDB, and it was 40 times faster than SQLite, so maybe I should use it. But I still hope for a simple in-process solution.

Theo
A C# provider is mostly irrelevant, as these systems do NOT have an interface that looks anything like a conventional database (hence "NoSQL"), so an ADO.NET interface would be a square peg in a round hole.
MarkR
Indeed, you don't need a provider that implements the ADO.NET interface, but you still need some kind of driver/provider to bridge between the DB and .NET. There is one for MongoDB, but it isn't perfect yet; the exception handling, for instance, needs improvement.
Theo
I have an open source C# client for Redis at http://code.google.com/p/servicestack/wiki/ServiceStackRedis. It allows you to store typed POCOs as text blobs and provides IList<T> and ICollection<T> interfaces for Redis server-side lists and sets, etc.
mythz
+2  A: 

I have no first-hand experience, but I found this blog entry quite interesting.

Michel
+1  A: 

I used Redis to store logging messages across machines. It was very easy to implement and very useful. Redis really rocks.
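A minimal sketch of that pattern, assuming the standard redis-py client and a hypothetical central host; every machine appends to a single shared Redis list:

# pip install redis
import json
import socket
import time

import redis

r = redis.Redis(host='logs.internal', port=6379)  # hypothetical central Redis host

def log(message):
    entry = json.dumps({'host': socket.gethostname(),
                        'ts': time.time(),
                        'msg': message})
    r.rpush('logs', entry)  # list appends are atomic, so writers never clash

log('service started')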

bugspy.net
+3  A: 

I apologize for going against your bold text, since I don't have any first-hand experience, but this set of blog posts is a good example of solving a problem with CouchDB.

CouchDB: A Case Study

Essentially, the textme application used CouchDB to deal with their exploding data problem. They found that SQL was too slow to deal with large amounts of archival data, and moved it over to CouchDB. It's an excellent read, and he discusses the entire process of figuring out what problems CouchDB could solve and how they ended up solving them.

TwentyMiles
+14  A: 

My current project actually.

Storing 18,000 objects in a normalised structure: 90,000 rows across 8 different tables. It took 1 minute to retrieve and map them to our Java object model, and that's with everything correctly indexed, etc.

Storing them as key/value pairs using a lightweight text representation: 1 table, 18,000 rows, 3 seconds to retrieve them all and reconstruct the Java objects.

In business terms: the first option was not feasible; the second means our app works.

Technology details: MySQL for both the SQL and NoSQL approaches! We're sticking with MySQL for its good transaction support, performance, and proven track record for not corrupting data, scaling fairly well, clustering support, etc.

Our data model in MySQL is now just key fields (integers) and one big "value" field: basically just a big TEXT column.

We did not go with any of the new players (CouchDB, Cassandra, MongoDB, etc.) because although they each offer great features/performance in their own right, there were always drawbacks for our circumstances (e.g. missing or immature Java support).

An extra benefit of (ab)using MySQL: the bits of our model that do work relationally can easily be linked to our key/value store data.

Update: here's an example of how we represent content as text. It's not our actual business domain (we don't work with "products"), as my boss would shoot me, but it conveys the idea, including the recursive aspect (one entity, here a product, "containing" others). Hopefully it's clear how, in a normalised structure, this would be quite a few tables, e.g. joining a product to its range of flavours, to the other products it contains, etc.

Name=An Example Product
Type=CategoryAProduct
Colour=Blue
Size=Large
Flavours={nice,lovely,unpleasant,foul}
Contains=[
Name=Product2
Type=CategoryBProduct
Size=medium
Flavours={yuck}
------
Name=Product3
Type=CategoryCProduct
Size=Small
Flavours={sublime}
]
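To make the pattern concrete, here is a rough sketch of the single-table approach, using Python's built-in sqlite3 as a stand-in for MySQL and JSON as a stand-in for the lightweight text representation; the table and field names are invented for illustration:

import json
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE entities (id INTEGER PRIMARY KEY, value TEXT)')

# Serialise the whole object graph into the single "value" field ...
product = {'Name': 'An Example Product', 'Flavours': ['nice', 'lovely'],
           'Contains': [{'Name': 'Product2', 'Flavours': ['yuck']}]}
conn.execute('INSERT INTO entities (id, value) VALUES (?, ?)',
             (1, json.dumps(product)))

# ... and reconstruct it with a single-row lookup instead of many joins.
row = conn.execute('SELECT value FROM entities WHERE id = ?', (1,)).fetchone()
print(json.loads(row[0])['Name'])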
Brian
What were the two databases in question (SQL and NoSQL)?
mavnn
Both were MySQL (I've edited my response to provide this info; I forgot it initially). Same DB, very different performance results from the SQL and NoSQL approaches. Very happy with the key/value approach with MySQL.
Brian
Hi Brian, would it be possible to provide an example of the schema of your normalised structure and an example of the key-value pairs "schema"? We are also facing performance issues with a normalised structure and are currently considering two options: either denormalising our tables or moving towards a NoSQL data store. Due to the licensing and maintenance fees we are already paying, we would like to leverage our current Oracle stack and are therefore leaning towards a denormalised RDBMS solution. An example would be interesting!
tthong
@Brian: Since 4 of the examples are written in Java, which Java support features were missing or immature? I have no experience in this field, but that seems slightly surprising to me.
Jimmy
tthong - not sure how to concisely include our normalised schema, but I've added an example of how we store our content in a single text field. It's a little contrived; I've not been able to include a real example, as my boss would go ballistic, so any "problems" with this "data model" are most likely for that reason. I would advise benchmarking both Oracle and some other solutions, but if your organisation has good Oracle expertise, DBAs, backups, etc., it could be a really good option to consider.
Brian
Jimmy - apologies for the confusion; I meant to say that immature Java support was an example of what we found. Other factors influenced us too, e.g. we need an embedded option and a zero-cost solution for our distribution model (which rules out BerkeleyDB, Neo4j, and other sound tech choices). Generally, a key/value approach in an otherwise SQL model was the only thing that ticked all our boxes.
Brian
@tthong, instead of storing the properties in a single text field, you can use an XMLType column in Oracle to store an XML document. Oracle has rich XML capabilities. Using XMLType can be a good compromise between a single text field and normalised relational storage. Oracle can index the elements in your XML document, so it is possible to search on an element without full table scans. You lose this possibility when you use a single text field.
Theo
How do you efficiently find all products that have a foul flavor?
hobodave
@hobodave, I think that is impossible when you store your data in a single text field. Reporting also becomes hard.
Theo
@Theo: It's not impossible; there are solutions (e.g. Sphinx). My question is directed at Brian. I want to know how he does it in his application.
hobodave
@Brian: I'm assuming (perhaps incorrectly) that both Product2 and Product3 exist as first level products as well, thus they're duplicated. If this is the case, how do you handle updates to product information?
hobodave
Theo - indeed we tried XML too. It so happened it wasn't fast enough for our liking, but it's a good option where it meets people's needs.
Brian
Hobodave - a weakness of my illustration, but the child content is "proper" child content, not references to sibling content. And you have identified the main trade-off of a big text chunk: the sort of reporting you mention, such as finding all foul-tasting products, would be hard. We don't need to do that, but it's a very important concession for other folks who might be considering such a design. Ah, the brave new world of NoSQL, eh? :-)
Brian
+5  A: 

Todd Hoff's highscalability.com has a lot of great coverage of NoSQL, including some case studies.

The commercial Vertica columnar DBMS might suit your purposes (even though it supports SQL): it's very fast compared with traditional relational DBMSs for analytics queries. See Stonebraker et al.'s recent CACM paper contrasting Vertica with MapReduce.

Update: Twitter has selected Cassandra over several others, including HBase, Voldemort, MongoDB, MemcacheDB, Redis, and Hypertable.

Update 2: Rick Cattell has just published a comparison of several NoSQL systems in High Performance Data Stores. And highscalability.com's take on Rick's paper is here.

Jim Ferrans
You should also read http://cacm.acm.org/magazines/2010/1/55744-mapreduce-a-flexible-data-processing-tool/fulltext
ar
@ar: Thanks, that's a good link. The Vertica folks have generated a fair amount of controversy.
Jim Ferrans
+17  A: 

I've switched a small subproject from MySQL to CouchDB to be able to handle the load. The result was amazing.

About two years ago, we released self-written software on http://www.ubuntuusers.de/ (which is probably the biggest German Linux community website). The site is written in Python, and we added a WSGI middleware that was able to catch all exceptions and send them to another small MySQL-powered website. This small website used a hash to distinguish different bugs and stored the number of occurrences and the last occurrence as well.

Unfortunately, shortly after the release, the traceback-logger website stopped responding. We had some locking issues with the production DB of our main site, which was throwing exceptions on nearly every request, as well as several other bugs we hadn't uncovered during the testing stage. The server cluster of our main site called the traceback-logger submit page several thousand times per second, and that was way too much for the small server hosting the traceback logger (an old server that was only used for development purposes).

At the time, CouchDB was rather popular, so I decided to try it out and write a small traceback logger with it. The new logger consisted of only a single Python file, which provided a bug list with sorting and filter options and a submit page, with a CouchDB process running in the background. The new software responded extremely quickly to all requests, and we were able to view the massive volume of automatic bug reports.

One interesting thing is that the previous solution ran on an old dedicated server, whereas the new CouchDB-based site ran only on a shared Xen instance with very limited resources. And I hadn't even used the strength of key-value stores to scale horizontally; the ability of CouchDB / Erlang OTP to handle concurrent requests without locking anything was already enough to serve our needs.

Now the quickly written CouchDB traceback logger is still running, and it is a helpful way to explore bugs on the main website. About once a month the database becomes too big and the CouchDB process gets killed, but then CouchDB's compaction reduces the size from several GB to a few KB again and the database is back up and running (maybe I should consider adding a cronjob there...).

In summary, CouchDB was surely the best choice (or at least a better choice than MySQL) for this subproject, and it does its job well.
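A rough sketch of what such a submitter can look like against CouchDB's plain HTTP API (not tux21b's actual code; the host and database name are assumptions, and the database is assumed to already exist):

import hashlib
import json
import traceback
import urllib.request

COUCH = 'http://localhost:5984/tracebacks'  # hypothetical logger database

def post(url, payload=None):
    req = urllib.request.Request(
        url, data=json.dumps(payload or {}).encode(),
        headers={'Content-Type': 'application/json'}, method='POST')
    return urllib.request.urlopen(req)

def submit_exception(exc):
    tb = ''.join(traceback.format_exception(type(exc), exc, exc.__traceback__))
    post(COUCH, {'hash': hashlib.md5(tb.encode()).hexdigest(),  # groups duplicate bugs
                 'traceback': tb})

# The compaction that shrinks the database from GBs back down is just
# another POST (a good candidate for that cronjob):
# post(COUCH + '/_compact')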

tux21b
I think I read somewhere that you can make CouchDB compact automatically when the uncompacted data reaches a certain level...
Ztyx
+1  A: 

We replaced a Postgres database with a CouchDB document database because not having a fixed schema was a strong advantage for us. Each document has a variable number of indexes used to access that document.
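A tiny illustration (invented data, not SorcyCat's) of that schema-free advantage: two documents in the same database can carry completely different fields, each indexable in its own right:

invoice = {'type': 'invoice', 'customer': 'ACME', 'total': 42.0}
ticket = {'type': 'ticket', 'reporter': 'bob', 'tags': ['urgent', 'ui']}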

SorcyCat
+3  A: 

We moved part of our data from MySQL to MongoDB, not so much for scalability but more because it is a better fit for files and non-tabular data.

In production we currently store:

  • 25 thousand files (60GB)
  • 130 million other "documents" (350GB)

with a daily turnover of around 10GB.

The database is deployed in a "paired" configuration on two nodes (6x450GB SAS RAID 10) with Apache/WSGI/Python clients using the MongoDB Python API (pymongo). The disk setup is probably overkill, but that's what we use for MySQL.

Apart from some issues with pymongo thread pools and the blocking nature of the MongoDB server, it has been a good experience.
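For readers unfamiliar with the setup, here is a minimal sketch (not Serbaut's actual code) of storing both kinds of data with pymongo, assuming a local mongod instance and invented collection/database names:

# pip install pymongo
import gridfs
from pymongo import MongoClient

client = MongoClient('localhost', 27017)
db = client['mydb']  # hypothetical database name

# Schemaless "documents" go straight in as dicts:
db.documents.insert_one({'source': 'feed-42', 'payload': {'any': 'shape'}})

# Larger files are chunked and stored via GridFS:
fs = gridfs.GridFS(db)
file_id = fs.put(b'...binary contents...', filename='report.pdf')
print(fs.get(file_id).filename)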

Serbaut
+3  A: 

We've moved some of the data we used to store in PostgreSQL and memcached into Redis. Key-value stores are much better suited to storing hierarchical object data. You can store blob data much faster, and with much less development time and effort, than using an ORM to map your blob to an RDBMS.

I have an open source C# Redis client that lets you store and retrieve any POCO objects with one line:

var customers = redis.Lists["customers"]; //Implements IList<Customer>
customers.Add(new Customer { Name = "Mr Customer" });

Key-value stores are also much easier to 'scale out', as you can add a new server and then partition your load evenly to include the new server. Importantly, there is no central server that will limit your scalability (though you will still need a strategy such as consistent hashing to distribute your requests).
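To illustrate the consistent-hashing idea mentioned above (a generic sketch, not part of the Redis client; node names are invented): keys map onto a ring of server positions, so adding a node only remaps a fraction of the keys.

import bisect
import hashlib

def _hash(value):
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes, replicas=100):
        # Each node appears `replicas` times on the ring for an even spread.
        self.ring = sorted((_hash('%s:%d' % (n, i)), n)
                           for n in nodes for i in range(replicas))
        self.keys = [h for h, _ in self.ring]

    def node_for(self, key):
        # The first ring position clockwise of the key's hash owns the key.
        idx = bisect.bisect(self.keys, _hash(key)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(['redis-1', 'redis-2', 'redis-3'])
print(ring.node_for('customers:42'))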

I consider Redis to be a 'managed text file' on steroids that provides fast, concurrent, atomic access for multiple clients, so anything I used to use a text file or embedded database for, I now use Redis for. For example, getting a real-time combined rolling error log for all our services (which has notoriously been a hard task for us) is now accomplished with only a couple of lines, by just prepending the error to a Redis server-side list and then trimming the list so only the last 1,000 entries are kept, e.g.:

var errors = redis.Lists["combined:errors"];
errors.Insert(0, new Error { Name = ex.GetType().Name, Message = ex.Message, StackTrace = ex.StackTrace});
redis.TrimList(errors, 1000);
mythz
+1  A: 

I find that the effort to map software domain objects (e.g. aSalesOrder, aCustomer...) to a two-dimensional relational database (rows and columns) takes a lot of code to save/update, and then again to instantiate a domain object instance from multiple tables. Not to mention the performance hit of having all those joins and all those disk reads... just to view/manipulate a domain object such as a sales order or customer record.

We have switched to object database management systems (ODBMS). They are beyond the capabilities of the NoSQL systems listed; GemStone/S (for Smalltalk) is one such example, and there are other ODBMS solutions with drivers for many languages. A key developer benefit: your class hierarchy is automatically your database schema, subclasses and all. Just use your object-oriented language to make objects persistent in the database. ODBMS systems provide ACID-level transaction integrity, so they would also work in financial systems.
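A speculative sketch of the ODBMS idea using ZODB (a Python object database, standing in here for GemStone/S; class and file names are invented): the class itself is the schema, and commits are ACID transactions.

# pip install ZODB
import persistent
import transaction
from ZODB import DB
from ZODB.FileStorage import FileStorage

class Customer(persistent.Persistent):
    def __init__(self, name):
        self.name = name

db = DB(FileStorage('customers.fs'))  # the whole object store lives in one file
root = db.open().root()

root['customer:1'] = Customer('ACME Corp')  # persist the object graph directly
transaction.commit()  # ACID transaction boundary; no ORM mapping code needed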

peter ode