Response to Gangadhar (sorry, not enough space in the comments box):
I like your response about divide and conquer, but I have a caveat to add. MapReduce doesn't handle the recursive aspect of divide-and-conquer algorithms very well, because for certain d/c algorithms the combination of subproblems depends on eventually reducing down to a single key. Take merge sort as an example (ignoring for the moment that sorting is trivial in Hadoop, thanks to the key-ordering properties of its MapReduce implementation): you would use your mapper to partition the data into chunks of two, using an arbitrary id as the key, and your reduce phase would then merge the value lists for its respective keys together.
7 3 2 4 1 5 8 9 would map to
1->[[7],[3]] 2->[[2],[4]] 3->[[1],[5]] 4->[[8],[9]] would reduce to
3,7 2,4 1,5 8,9 would map to
1->[[3,7],[2,4]] 2->[[1,5],[8,9]]
etc.
You can see that the number of keys is cut in half each round (or divided by whatever chunking factor your d/c algorithm uses), and eventually it gets down to a single key holding the fully sorted list. This is bad for parallelism.
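To make that concrete, here is a minimal Python simulation of those rounds (plain Python standing in for the Hadoop jobs; map_round, reduce_round and merge are just illustrative names of my own, not Hadoop API):

def merge(a, b):
    # standard two-way merge of sorted lists
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

def map_round(chunks):
    # map phase: pair up adjacent sorted chunks under an arbitrary integer key
    return {i: chunks[2 * i:2 * i + 2] for i in range((len(chunks) + 1) // 2)}

def reduce_round(grouped):
    # reduce phase: merge the (at most two) sorted lists behind each key
    return [merge(*lists) if len(lists) == 2 else lists[0]
            for _, lists in sorted(grouped.items())]

chunks = [[7], [3], [2], [4], [1], [5], [8], [9]]
while len(chunks) > 1:
    grouped = map_round(chunks)
    print(len(grouped), "keys this round")   # prints 4, then 2, then 1
    chunks = reduce_round(grouped)
print(chunks[0])                             # [1, 2, 3, 4, 5, 7, 8, 9]

The last round runs on a single key, i.e. effectively a single reducer, which is exactly the parallelism bottleneck.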
So the divide aspect of divide and conquer clearly fits MapReduce, but we also need data parallelism in the conquer (combine) phase, so that each round's result is still a collection of data-parallel items rather than an ever-shrinking set of keys. FFT is a good example of a divide-and-conquer algorithm that works well with MapReduce.
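To sketch why (again plain Python as an illustration, not Hadoop code; fft_stage and bit_reverse are just names I picked): in an iterative radix-2 FFT, every combine stage is a map over n/2 independent butterflies, so each round still presents a full-width, data-parallel workload instead of collapsing toward one key.

import cmath

def bit_reverse(data):
    # reorder input so the iterative stages below combine the right pairs
    n, bits = len(data), len(data).bit_length() - 1
    out = [0j] * n
    for i, x in enumerate(data):
        out[int(format(i, '0%db' % bits)[::-1], 2)] = x
    return out

def fft_stage(data, half):
    # one combine stage: every butterfly (k, k + half) is independent of the
    # others, so all n/2 of them could be separate map tasks
    n, out = len(data), list(data)
    for start in range(0, n, 2 * half):
        for k in range(half):
            w = cmath.exp(-2j * cmath.pi * k / (2 * half))
            a, b = data[start + k], data[start + k + half]
            out[start + k], out[start + k + half] = a + w * b, a - w * b
    return out

def fft(data):
    data = bit_reverse([complex(x) for x in data])
    half = 1
    while half < len(data):          # log2(n) stages, each fully data-parallel
        data = fft_stage(data, half)
        half *= 2
    return data

print(fft([1, 1, 1, 1, 0, 0, 0, 0]))

Each call to fft_stage is the moral equivalent of one MapReduce round, and unlike the merge sort above, the amount of parallel work per round never shrinks.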