
I'm definitely a newbie to Ruby (and using 1.9.1); everything I've learned about it has been from Google. I'm trying to compare two arrays of hashes, and due to their sizes it's taking way too long and flirting with running out of memory. Any help would be appreciated.

I have a Class (ParseCSV) with multiple methods (initialize, open, compare, strip, output). The way I have it working right now is as follows (and this does pass the tests I've written, just using a much smaller data set):


file1 = ParseCSV.new("some_file")
file2 = ParseCSV.new("some_other_file")

file1.open #this reads the file contents into an array of hashes through the CSV library
file1.strip #this just removes extra hashes from each array index. Normally there are fifty hashes in each array index; stripping them helps reduce memory consumption.

file2.open
file2.compare(file1.storage) #@storage is the array of hashes built by the open method

file2.output

Now what I’m struggling with is the compare method. Working on smaller data sets it’s not a big deal at all, works fast enough. However in this case I’m comparing about 400,000 records (all read into the array of hashes) against one that has about 450,000 records. I’m trying to speed this up. Also I can’t run the strip method on file2. Here is how I’m doing it now:


def compare(x)
  #obviously just a verbose message
  puts "Comparing and leaving behind non matching entries"

  x.each do |row|
    #@storage is the array of hashes
    @storage.each_index do |y|
      if row[@opts[:field]] == @storage[y][@opts[:field]]
        @storage.delete_at(y)
      end
    end
  end
end

Hopefully that makes sense. I know it's going to be a slow process just because it has to walk the 450,000-row array once for each of the 400,000 rows. But do you have any other ideas on how to speed it up and possibly reduce memory consumption?

+5  A: 

Yikes, that'll be O(n²) runtime. Nasty.

A better bet would be to use the built-in Set class.

Code would look something like:

require 'set'

file1_content = load_file_content_into_array_here("some_file")
file2_content = load_file_content_into_array_here("some_other_file")

file1_set = Set.new(file1_content)

unique_elements = file1_set - file2_content

That assumes that the files themselves have unique content. It should work in the generic case, though it may have quirks depending on what your data looks like and how you parse it; as long as the lines can be compared with == it should help you out.

Using a set will be MUCH faster than doing a nested loop to iterate over the file content.

(and yes, I have actually done this to process files with ~2 million lines, so it should be able to handle your case - eventually. If you're doing heavy data munging, Ruby may not be the best choice of tool though)
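
Since your rows are hashes keyed on a particular field rather than raw lines, one way to adapt this (just a rough sketch, assuming the field name lives in @opts[:field] as in your compare method) is to build a Set of the field values from one file and reject the matching rows from the other. Comparing plain field values also avoids putting whole Hash objects into the Set:

require 'set'

def compare(other_rows)
  # Collect the comparison-field values from the other file into a Set (O(1) lookups).
  other_keys = Set.new(other_rows.map { |row| row[@opts[:field]] })
  # Keep only the rows whose field value does not appear in the other file.
  @storage.reject! { |row| other_keys.include?(row[@opts[:field]]) }
end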

madlep
You will have to store something besides plain hashes, as a Set will treat two hashes that have the same contents but different object identities as two different items.
Kathy Van Stone
Treatment of hash equality depends on the version of Ruby. The code I provided is OK in Ruby 1.8.7 and up, but not for Ruby 1.8.6. Hash#eql? returns true for two identical but distinct Hash objects in 1.8.7, but returns false in Ruby 1.8.6.
madlep
@Kathy: Are you sure? A `Set` is really just a `Hash` (if you look at the implementation you will see that the `Set` class just delegates to a `Hash`, where the `Hash` keys are the `Set` values and the `Hash` values are all `true`). And I am pretty sure (in fact, I demonstrated this in an answer to another question two days ago) that the bug you mention has been fixed in Ruby 1.9. I just re-tested it and it does indeed seem to work.
Jörg W Mittag
@Joerg: It turns out I am using 1.8.6 (forgot how old my copy at work is), so I get sets like #<Set: {{1=>4, 2=>5}, {1=>4, 2=>5}}>. In general it is tricky to hash mutable objects whose value can vary with changes.
Kathy Van Stone
The idea here is to reduce the complexity of the solution: it goes from O(N*N) down to roughly O(N).
khelll
A: 

Here's a script comparing two ways of doing it: your original compare() and a new_compare(). The new_compare uses more of the built-in Enumerable methods. Since they are implemented in C, they'll be faster.

I created a constant called Test::SIZE to try out the benchmarks with different hash sizes. Results at the bottom. The difference is huge.

require 'benchmark'

class Test
  SIZE = 20000
  attr_accessor :storage
  def initialize
    file1 = []
    SIZE.times { |x| file1 << { :field => x, :foo => x } }
    @storage = file1
    @opts = {}
    @opts[:field] = :field
  end

  def compare(x)
    x.each do |row|
      @storage.each_index do |y|
        if row[@opts[:field]] == @storage[y][@opts[:field]]
          @storage.delete_at(y)
        end
      end
    end
  end

  def new_compare(other)
    other_keys = other.map { |x| x[@opts[:field]] }
    @storage.reject! { |s| other_keys.include? s[@opts[:field]] }
  end

end

storage2 = []
# We'll make 10 of them match
10.times { |x| storage2 << { :field => x, :foo => x } }
# And the rest won't
(Test::SIZE-10).times { |x| storage2 << { :field => x+100000000, :foo => x} }

Benchmark.bm do |b|
  b.report("original compare") do
    t1 = Test.new
    t1.compare(storage2)
  end
end

Benchmark.bm do |b|
  b.report("new compare") do
    t1 = Test.new
    t1.new_compare(storage2)
  end
end

Results:

Test::SIZE = 500
      user     system      total        real
original compare  0.280000   0.000000   0.280000 (  0.285366)
      user     system      total        real
new compare  0.020000   0.000000   0.020000 (  0.020458)

Test::SIZE = 1000
     user     system      total        real
original compare 28.140000   0.110000  28.250000 ( 28.618907)
      user     system      total        real
new compare  1.930000   0.010000   1.940000 (  1.956868)

Test::SIZE = 5000
      user     system      total        real
original compare 113.100000   0.440000 113.540000 (115.041267)
      user     system      total        real
new compare  7.680000   0.020000   7.700000 (  7.739120)

Test::SIZE = 10000
      user     system      total        real
original compare 453.320000   1.760000 455.080000 (460.549246)
      user     system      total        real
new compare 30.840000   0.110000  30.950000 ( 31.226218)
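
A further tweak (not benchmarked above, so take it as a sketch) is to put other_keys into a Set, so each include? is a hash lookup instead of a linear scan through the array:

require 'set'

def new_compare(other)
  # Same as above, but membership checks against a Set are O(1) rather than O(N).
  other_keys = other.map { |x| x[@opts[:field]] }.to_set
  @storage.reject! { |s| other_keys.include? s[@opts[:field]] }
end

That brings the whole comparison down to roughly O(N), which is essentially what the Set-based answer above does.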
hgimenez