Let's say I have a database filled with people with the following data elements:

  • PersonID (meaningless surrogate autonumber)
  • FirstName
  • MiddleInitial
  • LastName
  • NameSuffix
  • DateOfBirth
  • AlternateID (like an SSN, Military ID, etc.)

I get lots of data feeds in from all kinds of formats with every reasonable variation on these pieces of information you could think of. Some examples are:

  • FullName, DOB
  • FullName, Last 4 SSN
  • First, Last, DOB

When this data comes in, I need to write something to match it up. I don't need, or expect, to get more than an 80% match rate. After the automated match, I'll present the uncertain matches on a web page for someone to manually match.

Some of the complexities are:

  1. Some data matches are better than others and I would like to assign weight to those. For example, if the SSN matches exactly but the name is off because someone goes by their middle name, I would like to assign a much higher confidence value to that match than if the names match exactly but the SSNs are off.
  2. The name matching has some difficulties. John Doe Jr is the same as John Doe II, but not the same as John Doe Sr. And if I get John Doe and no other information, I need to be sure the system doesn't pick one, because there's no way to determine who to pick.
  3. First name matching is really hard. You have Bob/Robert, John/Jon/Jonathon, Tom/Thomas, etc.
  4. Just because I have a feed with FullName+DOB doesn't mean the DOB field is filled for every record. I don't want to miss a linkage just because the unmatched DOB kills the matching score. If a field is missing, I want to exclude it from the elements available for matching.
  5. If someone manually matches, I want their match to affect all future matches. So, if we ever get the same exact data again, there's no reason not to automatically match it up next time.

I've seen that SSIS has fuzzy matching, but we don't use SSIS currently, and I find it pretty kludgy and nearly impossible to version control, so it's not my first choice of tool. But if it's the best there is, tell me. Otherwise, are there any (preferably free, preferably .NET or T-SQL based) tools/libraries/utilities/techniques out there that you've used for this type of problem?

A: 

Take a look at the Levenshtein algorithm, which gives you the "distance" between two strings; that distance can then be divided by the length of the longer string to get a percentage match.

http://en.wikipedia.org/wiki/Levenshtein_distance
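
A minimal sketch of a pure T-SQL implementation, in case it helps; the function name dbo.LevenshteinDistance is made up, and the dynamic-programming rows are packed into NVARCHAR buffers (one character per cell) to avoid table variables:

    CREATE FUNCTION dbo.LevenshteinDistance (@s NVARCHAR(100), @t NVARCHAR(100))
    RETURNS INT
    AS
    BEGIN
        DECLARE @sLen INT = LEN(@s), @tLen INT = LEN(@t),
                @i INT, @j INT, @cost INT, @min INT,
                @left INT, @up INT, @diag INT,
                @prevRow NVARCHAR(101), @currRow NVARCHAR(101);

        IF @sLen = 0 RETURN @tLen;
        IF @tLen = 0 RETURN @sLen;

        -- Row 0 of the DP matrix: distances 0..@tLen, one NCHAR per cell.
        SET @prevRow = N'';
        SET @j = 0;
        WHILE @j <= @tLen
        BEGIN
            SET @prevRow = @prevRow + NCHAR(@j);
            SET @j = @j + 1;
        END;

        SET @i = 1;
        WHILE @i <= @sLen
        BEGIN
            SET @currRow = NCHAR(@i);  -- cell (i, 0) = i deletions
            SET @j = 1;
            WHILE @j <= @tLen
            BEGIN
                SET @cost = CASE WHEN SUBSTRING(@s, @i, 1) = SUBSTRING(@t, @j, 1)
                                 THEN 0 ELSE 1 END;
                SET @left = UNICODE(SUBSTRING(@currRow, @j, 1)) + 1;      -- insertion
                SET @up   = UNICODE(SUBSTRING(@prevRow, @j + 1, 1)) + 1;  -- deletion
                SET @diag = UNICODE(SUBSTRING(@prevRow, @j, 1)) + @cost;  -- substitution
                SET @min = @left;
                IF @up < @min SET @min = @up;
                IF @diag < @min SET @min = @diag;
                SET @currRow = @currRow + NCHAR(@min);
                SET @j = @j + 1;
            END;
            SET @prevRow = @currRow;
            SET @i = @i + 1;
        END;

        -- Bottom-right cell of the matrix is the edit distance.
        RETURN UNICODE(SUBSTRING(@prevRow, @tLen + 1, 1));
    END;

The percentage match is then one minus the distance divided by the longer length:

    -- 'Jonathon Doe' vs 'Jonathan Doe': distance 1, length 12 -> ~92% match
    SELECT 1.0 - dbo.LevenshteinDistance(N'Jonathon Doe', N'Jonathan Doe') * 1.0 / 12;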

I have previously implemented this with great success. It was a provider portal for a healthcare company, and providers registered themselves on the site. The matching was to take their portal registration and find the corresponding record in the main healthcare system. The processors who attended to this were presented with the most likely matches, ordered by percentage descending, and could easily choose the right account.

Fosco
Good answer. I've already got that one in my tool belt in case I wind up building this from scratch. I'm looking to not build COMPLETELY from scratch though :-)
mattmc3
It should not take too long to implement this... it only took a day or two when I used it.
Fosco
What took a day or two, the Levenshtein algorithm or a full-on demographic comparer with all the nuances of name semantics, matching scores, and confidence ratings? I think you may be making this too simplistic. A generic string-comparison algorithm, while perhaps a piece of the solution, isn't going to be enough.
mattmc3
I have to agree that I am simplifying things... that's what I do. The OP takes it quite a bit further than we did. We did not weight fields differently; we just concatenated the source fields, compared them against the concatenated target fields in one pass of Levenshtein, and divided by the string length. With Levenshtein written as a SQL function, the whole compare-and-result process is a single query.
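
For what it's worth, that single-query pattern might look roughly like the sketch below; the table and column names (IncomingFeed, People, FullKey) are hypothetical, and dbo.LevenshteinDistance is the kind of function described in the answer above:

    -- Concatenate the demographic fields on each side, run one pass of
    -- Levenshtein, and divide by the longer length. Note this is a full
    -- N x M comparison, so it only scales to modest row counts.
    SELECT  i.IncomingID,
            p.PersonID,
            1.0 - dbo.LevenshteinDistance(i.FullKey, p.FullKey) * 1.0
                / NULLIF(CASE WHEN LEN(i.FullKey) > LEN(p.FullKey)
                              THEN LEN(i.FullKey) ELSE LEN(p.FullKey) END, 0)
                AS MatchPct
    FROM   (SELECT IncomingID,
                   FirstName + N' ' + LastName + N' '
                       + CONVERT(NVARCHAR(8), DateOfBirth, 112) AS FullKey
            FROM   dbo.IncomingFeed) AS i
    CROSS JOIN
           (SELECT PersonID,
                   FirstName + N' ' + LastName + N' '
                       + CONVERT(NVARCHAR(8), DateOfBirth, 112) AS FullKey
            FROM   dbo.People) AS p
    ORDER BY MatchPct DESC;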
Fosco
A: 

There are a number of ways you can go about this, but having done this type of thing before, I will go ahead and point out that you run a lot of risk of making incorrect matches between people.

Your input data is very sparse, and what you have isn't very unique if not all of the values are there.

For example, with your First Name, Last Name, DOB situation, if you have all three parts for ALL records, then the matching gets a LOT easier to work with. If not, though, you expose yourself to a lot of potential problems.

One approach you might take, on the more "crude" side of things, is to create a process using a series of queries that identifies and classifies matching entries, as sketched below.

For example, first check for an exact match on name and SSN; if that is there, flag it, note it as 100%, and move on to the next set. Then you can explicitly define where your matching is fuzzy, so you know the potential ramifications of each match.

In the end you would have a list with flags indicating the match type, if any, for each record.
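
A rough sketch of what those passes could look like in T-SQL (StagingFeed, MatchedPersonID, and the other names are hypothetical; each pass only touches rows the earlier, stricter passes left unmatched):

    -- Pass 1: exact name + SSN -> flag at 100% confidence.
    UPDATE  s
    SET     s.MatchedPersonID = p.PersonID,
            s.MatchType  = 'EXACT_NAME_SSN',
            s.Confidence = 100
    FROM    dbo.StagingFeed AS s
    JOIN    dbo.People AS p
              ON  p.AlternateID = s.SSN
              AND p.FirstName   = s.FirstName
              AND p.LastName    = s.LastName
    WHERE   s.MatchedPersonID IS NULL;

    -- Pass 2: SSN + DOB agree, name allowed to differ -> lower confidence.
    UPDATE  s
    SET     s.MatchedPersonID = p.PersonID,
            s.MatchType  = 'SSN_DOB',
            s.Confidence = 90
    FROM    dbo.StagingFeed AS s
    JOIN    dbo.People AS p
              ON  p.AlternateID = s.SSN
              AND p.DateOfBirth = s.DateOfBirth
    WHERE   s.MatchedPersonID IS NULL;

    -- ...fuzzier passes follow; whatever is still NULL goes to manual review.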

Mitchel Sellers
That is an approach I've seen used in the past. The risk of false positives is high, and the code is brittle just because it's hard to account for all the combos. Also, the data we have in our system is pretty complete... the data we get from other vendors is, well... from other people.
mattmc3
Your risk of false positives will be high no matter what the approach is. But if you work with a methodical approach and build it out, you will at least know what/why/how the matching is done and can mitigate false positives.
Mitchel Sellers
A: 

If the false positives don't bug you and your language is primarily English, you can try algorithms like Soundex. MS SQL Server has it as a built-in function. Soundex isn't the best, but it does do fuzzy matching and is popular. Another alternative is Metaphone.
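
For example, SOUNDEX and its companion DIFFERENCE (which scores two SOUNDEX codes from 0 to 4, with 4 the closest) are both built into SQL Server:

    SELECT  SOUNDEX('Robert')             AS A,  -- R163
            SOUNDEX('Rupert')             AS B,  -- also R163: false-positive risk
            DIFFERENCE('Smith', 'Smythe') AS C;  -- 4 (codes match exactly)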

Beached