Hi,

I'm looking for ideas on how to best match two hash tables containing string key/value pairs.

Here's the actual problem I'm facing: I have structured data coming in which is imported into the database. I need to UPDATE records which are already in the DB; however, it's possible that ANY value in the source can change, so I don't have a reliable ID.

I'm thinking of fuzzy-matching the two rows, source and DB, and making an "educated" guess as to whether the record should be updated or inserted.

Any ideas would be greatly appreciated.

Solution

The solution is based on Ben Robinson's post. It works pretty well, tolerates small mismatches here and there, and supports custom per-key weights.

require 'rubygems'
require 'amatch'  # adds String#levenshtein_similar

class Hash
  # Returns a score between 0.0 and 1.0 expressing how similar this
  # hash's values are to the other hash's values, computed as the
  # weighted mean of the Levenshtein similarity of each value pair.
  # Keys missing from key_weights default to a weight of 1; a weight
  # of 0 excludes a key from the comparison entirely.
  def fuzzy_match(hash, key_weights = {})
    sum_total   = 0.0
    sum_weights = 0.0

    each_key do |key|
      weight = key_weights[key] || 1
      next if weight == 0

      # levenshtein_similar returns a Float in 0.0..1.0
      sum_total   += self[key].to_s.levenshtein_similar(hash[key].to_s) * weight
      sum_weights += weight
    end

    return 0.0 if sum_weights.zero?
    sum_total / sum_weights
  end
end
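
A hypothetical usage example (the field names and the 0.8 threshold are made up for illustration; the threshold in particular needs tuning against your own data):

db_row     = { 'name' => 'Acme Corp.', 'city' => 'Berlin', 'phone' => '030 1234567' }
source_row = { 'name' => 'Acme Corp',  'city' => 'Berlin', 'phone' => '030 1234-567' }

# 'name' counts three times as much as the default; 'phone' is ignored.
score = db_row.fuzzy_match(source_row, 'name' => 3, 'phone' => 0)

if score > 0.8
  # confident enough that it is the same record: UPDATE it
else
  # otherwise treat it as a new record: INSERT it
end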
+1  A: 

I have used the Levenshtein Distance to do fuzzy matching recently. I compute the distance between two possibly matching strings and give the match a score that is the inverse of the distance. I then take a weighted average of the scores across the fields to get a score for the record, so that more important fields count more heavily than less important ones. It is used in a CRM application where leads were coming in from many different sources, and the client needed to match these against existing leads/opportunities/clients/resellers etc. It took a bit of adjusting of the thresholds for what score was a match and what wasn't. In the end we got about a 1% false-positive rate, which I think is quite good really.

Ben Robinson
Could you give a bit of pseudocode?
alex
The Wikipedia article gives a pseudocode example of the Levenshtein Distance algorithm. A weighted average/mean is just like a normal mean, but instead of adding up the values and dividing by the number of values, you add up each value multiplied by its weight, then divide the total by the sum of the weights.
Ben Robinson
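
A minimal Ruby sketch of what Ben describes (the field names, the weights, and the 1.0 / (1 + d) form of the inverse are illustrative assumptions, not his actual code):

require 'amatch'

# Score a single field: "the inverse of the distance".
# 1.0 / (1 + distance) maps distance 0 (identical strings) to 1.0
# and shrinks toward 0.0 as the strings diverge.
def field_score(a, b)
  1.0 / (1 + Amatch::Levenshtein.new(a).match(b))
end

# Hypothetical field weights: 'name' counts three times as much as 'city'.
FIELD_WEIGHTS = { 'name' => 3.0, 'email' => 2.0, 'city' => 1.0 }

# Weighted mean of the per-field scores, as described in the comments:
# sum of (score * weight), divided by the sum of the weights.
def record_score(a, b)
  total = FIELD_WEIGHTS.inject(0.0) do |sum, (field, weight)|
    sum + weight * field_score(a[field].to_s, b[field].to_s)
  end
  total / FIELD_WEIGHTS.values.inject(:+)
end

A pair of records whose record_score clears a threshold tuned against your own data would then be treated as a match.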
Peter Norvig's implementation of the Damerau-Levenshtein distance algorithm is pretty much self-explanatory: http://norvig.com/spell-correct.html
Matthieu M.
A: 

If you are importing data into SQL Server, SSIS has a fuzzy match task. Try it to see if you like the results. We've found it really helpful in situations like this.

HLGEM