Map-reduce is an implementation. The coding interface that lets you use that implementation could just as well use continuations; it's really a matter of how the framework and job control are abstracted. Consider declarative interfaces for Hadoop such as Pig, or declarative languages in general such as SQL: the machinery below the interface may be implemented in many ways.
For example, here's an abstracted Python map-reduce implementation:
def mapper(input_tuples):
    "Return a generator of items with qualifying keys, keyed by item.key"
    # we are seeing a partition of input_tuples
    return ((item.key, item) for (key, item) in input_tuples if key > 1)

def reducer(input_tuples):
    "Return a generator of items with qualifying keys"
    # we are seeing a partition of input_tuples
    return (item for (key, item) in input_tuples if key != 'foo')

def run_mapreduce(input_tuples):
    # partitioning is magically run across boxes
    mapper_inputs = partition(input_tuples)
    # each mapper is magically run on a separate box
    mapper_outputs = (mapper(input) for input in mapper_inputs)
    # partitioning and sorting are magically run across boxes
    reducer_inputs = partition(
        sort(item for output in mapper_outputs for item in output))
    # each reducer is magically run on a separate box
    reducer_outputs = (reducer(input) for input in reducer_inputs)
    return reducer_outputs
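Here partition and sort are the magic the framework provides. Purely to make the sketch executable on a single box, one possible toy stand-in (my assumption, not any real framework's API: a round-robin split and a plain key sort) plus a tiny driver might look like:

from collections import namedtuple

def partition(tuples, num_partitions=2):
    # toy stand-in for the framework's distribution step: round-robin
    # split into num_partitions lists (a real framework would hash keys
    # across boxes)
    tuples = list(tuples)
    return [tuples[i::num_partitions] for i in range(num_partitions)]

def sort(tuples):
    # toy stand-in for the shuffle/sort step: order (key, item) pairs by key
    return sorted(tuples, key=lambda pair: pair[0])

Item = namedtuple('Item', 'key value')
data = [(item.key, item) for item in (Item(0, 'a'), Item(2, 'b'), Item(3, 'c'))]
results = [item for output in run_mapreduce(data) for item in output]
# results == [Item(key=2, value='b'), Item(key=3, value='c')]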
And here's the same implementation using coroutines, with even more magical abstraction hidden away:
def mapper_reducer(input_tuples):
    # we are seeing a partition of input_tuples
    # yield mapper output to the caller, get reducer input back
    reducer_input = yield (
        (item.key, item) for (key, item) in input_tuples if key > 1)
    # we are again seeing a partition, this time of reducer_input tuples;
    # the caller of this continuation has partitioned and sorted them
    # yield reducer output to the caller
    yield (item for (key, item) in reducer_input if key != 'foo')
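To make the coroutine version concrete, here is a toy single-box caller (again my sketch, reusing the stand-in partition and sort above; a real framework would run each continuation on a separate box and do the shuffle for you):

def run_mapreduce_coro(input_tuples):
    # start one mapper_reducer continuation per input partition
    coros = [mapper_reducer(part) for part in partition(input_tuples)]
    # advance each coroutine to its first yield to collect the mapper output
    mapper_outputs = [next(coro) for coro in coros]
    # the shuffle: partition and sort the flattened mapper output
    reducer_inputs = partition(
        sort(item for output in mapper_outputs for item in output))
    # resume each coroutine with a reducer partition; send() returns the
    # reducer output produced by the second yield
    return [coro.send(part) for coro, part in zip(coros, reducer_inputs)]

results = [item for output in run_mapreduce_coro(data) for item in output]

The point is that the caller plays the role of the framework; from the user's side the map and reduce phases read as one straight-line function that gets suspended and resumed around the shuffle, which is exactly what the continuation buys you.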