A simple wordcount reducer in Ruby looks like this:
#!/usr/bin/env ruby
# Accumulate a running total per word; lines for the same word can appear
# anywhere in the input, so everything is kept in one in-memory hash.
wordcount = Hash.new(0)
STDIN.each_line do |line|
  word, count = line.chomp.split("|")
  wordcount[word] += count.to_i
end
wordcount.each_pair do |word, count|
  puts "#{word}|#{count}"
end
It reads on STDIN the intermediate values from all the mappers, not the values for one specific key. So effectively there is only ONE reducer for everything (not a reducer per word or per set of words).
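To make that concrete, here is a hypothetical sample of what this reducer sees on STDIN with the word|count format above. Streaming sorts these lines by key before they reach the reducer, but each line still arrives separately rather than as a grouped (key, values) pair:

apple|1
apple|3
banana|2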
However, in the Java examples I have seen, the reducer interface receives a key and a list of values as input, which means the intermediate map values are grouped by key before being reduced and the reducers can run in parallel:
public static class Reduce extends MapReduceBase
    implements Reducer<Text, IntWritable, Text, IntWritable> {

  // Called once per key, with an iterator over every value emitted for that key.
  public void reduce(Text key, Iterator<IntWritable> values,
                     OutputCollector<Text, IntWritable> output, Reporter reporter)
      throws IOException {
    int sum = 0;
    while (values.hasNext()) {
      sum += values.next().get();
    }
    output.collect(key, new IntWritable(sum));
  }
}
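So, for the sample input above, the framework would call reduce("apple", [1, 3]) and reduce("banana", [2]): one call per key, with all of that key's values already grouped together.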
Is this a Java-only feature, or can I do it with Hadoop Streaming using Ruby?
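In case it helps to show what I mean: since Streaming sorts the reducer's input by key (with a tab as the default key/value separator), I imagine a key-at-a-time reducer could look something like this hypothetical sketch, assuming the mapper emits tab-separated word/count pairs:

#!/usr/bin/env ruby
# Hypothetical sketch: assumes Hadoop Streaming has already sorted the input
# by key, so all the lines for a given word arrive contiguously.
current_word = nil
current_count = 0

STDIN.each_line do |line|
  word, count = line.chomp.split("\t")
  if word != current_word
    # Key boundary: emit the finished word before starting the next one.
    puts "#{current_word}\t#{current_count}" unless current_word.nil?
    current_word = word
    current_count = 0
  end
  current_count += count.to_i
end
# Flush the final key.
puts "#{current_word}\t#{current_count}" unless current_word.nil?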