I am asking this because I am wondering whether it would be efficient to run MapReduce queries over a database or a shared key-value store.

For example, to implement a web crawler that indexes the internet and counts all the terms on different web pages, could this be done efficiently with a database as the backend?
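To make the example concrete, here is a rough sketch of the term-counting step as a plain Hadoop MapReduce job. This is only my own illustration, not anything authoritative; the class names and paths are made up:

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class TermCount {

      // Mapper: emit (term, 1) for every term found in a fetched page.
      public static class TermMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text term = new Text();

        @Override
        protected void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
          StringTokenizer tokens = new StringTokenizer(value.toString());
          while (tokens.hasMoreTokens()) {
            term.set(tokens.nextToken().toLowerCase());
            context.write(term, ONE);
          }
        }
      }

      // Reducer: sum the counts for each term across all pages.
      public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable total = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable v : values) {
            sum += v.get();
          }
          total.set(sum);
          context.write(key, total);
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "term count");
        job.setJarByClass(TermCount.class);
        job.setMapperClass(TermMapper.class);
        job.setCombinerClass(SumReducer.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // crawled page text
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // term counts
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

What I am unsure about is whether the input and output of a job like this could live in a database or key-value store instead of HDFS without losing efficiency.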

+1  A: 

A database is not an adequate solution for a web-crawler-style backend.

You might want to read this article.

http://highscalability.com/how-rackspace-now-uses-mapreduce-and-hadoop-query-terabytes-data

Thanks, N.

NPayette
I read the article, but why does that mean a database couldn't handle the load?
Zubair
+1  A: 

Sure. HBase and other NoSQL stores are well suited to this task.

See this article for a general overview of using HBase with MapReduce.

HBase is the Hadoop database. Use it when you need random, realtime read/write access to your Big Data. This project's goal is the hosting of very large tables -- billions of rows X millions of columns -- atop clusters of commodity hardware.

HBase is an open-source, distributed, column-oriented store modeled after Google's Bigtable: A Distributed Storage System for Structured Data by Chang et al. Just as Bigtable leverages the distributed data storage provided by the Google File System, HBase provides Bigtable-like capabilities on top of Hadoop. HBase includes:

•Convenient base classes for backing Hadoop MapReduce jobs with HBase tables (see the sketch after this list)
•Query predicate push down via server side scan and get filters
•Optimizations for real time queries
•A high performance Thrift gateway
•A REST-ful Web service gateway that supports XML, Protobuf, and binary data encoding options
•Cascading source and sink modules
•Extensible jruby-based (JIRB) shell
•Support for exporting metrics via the Hadoop metrics subsystem to files or Ganglia; or via JMX
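As a rough illustration of the first bullet, here is a minimal sketch (my own, not from the HBase docs) of a MapReduce job that scans an HBase table of crawled pages and counts terms. The table name "webpages", column family "content", and qualifier "text" are just placeholders:

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
    import org.apache.hadoop.hbase.mapreduce.TableMapper;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class HBaseTermCount {

      // Mapper: each row is one crawled page; emit (term, 1) for its text.
      public static class PageMapper extends TableMapper<Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text term = new Text();

        @Override
        protected void map(ImmutableBytesWritable row, Result result, Context context)
            throws IOException, InterruptedException {
          byte[] raw = result.getValue(Bytes.toBytes("content"), Bytes.toBytes("text"));
          if (raw == null) {
            return;
          }
          StringTokenizer tokens = new StringTokenizer(Bytes.toString(raw));
          while (tokens.hasMoreTokens()) {
            term.set(tokens.nextToken().toLowerCase());
            context.write(term, ONE);
          }
        }
      }

      // Reducer: sum the per-term counts, same as a plain word count.
      public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable total = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable v : values) {
            sum += v.get();
          }
          total.set(sum);
          context.write(key, total);
        }
      }

      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = Job.getInstance(conf, "hbase term count");
        job.setJarByClass(HBaseTermCount.class);

        Scan scan = new Scan();
        scan.setCaching(500);        // larger scanner cache for a full-table scan
        scan.setCacheBlocks(false);  // don't pollute the block cache with scan data

        // Wire the scan of the "webpages" table to the mapper.
        TableMapReduceUtil.initTableMapperJob(
            "webpages", scan, PageMapper.class, Text.class, IntWritable.class, job);

        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileOutputFormat.setOutputPath(job, new Path(args[0]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

The input format behind TableMapReduceUtil creates roughly one map task per table region, so the scan is spread across the cluster's region servers rather than funnelled through a single database server.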

GalacticJello