Here's my implementation of the Sieve of Eratosthenes:
class PrimeGenerator
  def self.get_primes_between(x, y)
    # Index i holds the number i + 1; slot 0 is unused.
    sieve_array = Array.new(y) { |index| index == 0 ? 0 : index + 1 }

    # Any composite <= y has a factor no larger than sqrt(y).
    position_when_we_can_stop_checking = Math.sqrt(y).to_i

    (2..position_when_we_can_stop_checking).each do |factor|
      sieve_array[factor..(y - 1)].each do |number|
        sieve_array[number - 1] = 0 if isMultipleOf(number, factor)
      end
    end

    sieve_array.select { |element| element != 0 && (x..y).include?(element) }
  end

  def self.isMultipleOf(x, y)
    (x % y) == 0
  end
end
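A side note on the slicing (my own observation about Ruby semantics, not part of the original question): `Array#[]` with a Range returns a brand-new array, not a view, so `sieve_array[factor..(y - 1)]` allocates a fresh copy on every pass of the factor loop; only the direct `sieve_array[number - 1] = 0` writes reach the real sieve. A minimal demonstration:

```ruby
arr = [0, 1, 2, 3, 4]
slice = arr[1..3]   # => [1, 2, 3], a newly allocated Array
slice[0] = 99       # mutate the slice...
puts arr.inspect    # => [0, 1, 2, 3, 4] -- the original is untouched
```

So each trip through the factor loop above pays for copying the tail of the sieve before it even starts testing divisibility.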
Now, I did this for a 'submit solutions to problems since you have time to kill' site. I chose Ruby as my implementation language; however, I was declared timed out. So I did some benchmarking:
require 'benchmark'

Benchmark.bmbm do |x|
  x.report("get primes") { PrimeGenerator.get_primes_between(10000, 100000) }
end
ruby 1.9.1p0 (2009-01-30 revision 21907) [i386-mswin32]

L:\Gishu\Ruby>ruby prime_generator.rb
Rehearsal ----------------------------------------------
get primes  33.953000   0.047000  34.000000 ( 34.343750)
------------------------------------ total: 34.000000sec

                 user     system      total        real
get primes  33.735000   0.000000  33.735000 ( 33.843750)

ruby 1.8.6 (2007-03-13 patchlevel 0) [i386-mswin32]

Rehearsal ----------------------------------------------
get primes  65.922000   0.000000  65.922000 ( 66.110000)
------------------------------------ total: 65.922000sec

                 user     system      total        real
get primes  67.359000   0.016000  67.375000 ( 67.656000)
So I redid the thing in C# 2.0 / VS 2008: it ran in 722 milliseconds.
Now this prods me into thinking: is it a problem with my implementation, or is the performance difference between the languages really this wide? (I was amazed by the 1.9 Ruby VM... until I had to compare it with C#. :)
UPDATE: It turned out to be my "put-Eratosthenes-to-shame" adaptation after all :) Eliminating unnecessary loop iterations was the major optimization. In case anyone is interested in the details, you can read about it here; this question is too long anyway.
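For anyone curious what "eliminating unnecessary loop iterations" looks like, here is a sketch of the textbook sieve (my own adaptation illustrating the idea, not the exact code from the linked write-up). Instead of testing every remaining number against every factor, it strides directly over each prime's multiples, so composites are crossed off without any division:

```ruby
# Textbook Sieve of Eratosthenes: for each surviving p up to sqrt(y),
# cross off p*p, p*p + p, p*p + 2p, ... directly -- no modulo checks,
# and no numbers outside p's multiples are ever touched.
def primes_between(x, y)
  composite = Array.new(y + 1, false)
  (2..Math.sqrt(y).to_i).each do |p|
    next if composite[p]                     # p was already crossed off
    (p * p).step(y, p) { |m| composite[m] = true }
  end
  ([x, 2].max..y).reject { |n| composite[n] }
end

primes_between(10, 30)  # => [11, 13, 17, 19, 23, 29]
```

The key difference from the original: the inner loop runs y/p times per prime instead of y times per factor, and there is no per-pass array slice, so the total work is O(y log log y) rather than O(y * sqrt(y)).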