As per this comment in a related thread, I'd like to know why Levenshtein-distance-based methods are better than Soundex.
Soundex is rather primitive - it was originally designed to be calculated by hand. It reduces a name to a short key, and two names match when their keys are equal.
Soundex works well with Western names, as it was originally developed for US census data. It's intended for phonetic comparison.
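For illustration, here's a minimal Python sketch of the classic American Soundex rules (keep the first letter, map the following consonants to digits, treat h/w as transparent, collapse repeats, pad to four characters) - a sketch of the rules, not a production implementation:

    def soundex(name: str) -> str:
        """Classic American Soundex: first letter plus three digits."""
        groups = {"bfpv": "1", "cgjkqsxz": "2", "dt": "3",
                  "l": "4", "mn": "5", "r": "6"}

        def digit(ch):
            return next((d for g, d in groups.items() if ch in g), None)

        name = name.lower()
        key = name[0].upper()
        prev = digit(name[0])      # the first letter's code suppresses an immediate repeat
        for ch in name[1:]:
            d = digit(ch)
            if d is None:
                if ch not in "hw":  # vowels reset the previous code; h and w don't
                    prev = None
                continue
            if d != prev:
                key += d
                prev = d
        return (key + "000")[:4]    # pad or truncate to four characters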
Levenshtein distance compares two strings directly and counts the minimum number of single-character edits - insertions, deletions, and substitutions - needed to turn one into the other. It's looking for missing or mistyped letters.
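To make the edit counting concrete, here's the standard dynamic-programming formulation (the two-row variant) as a short sketch:

    def levenshtein(a: str, b: str) -> int:
        """Minimum number of single-character edits turning a into b."""
        prev = list(range(len(b) + 1))   # distances from the empty prefix of a
        for i, ca in enumerate(a, 1):
            cur = [i]                    # deleting the first i chars of a
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                # deletion
                               cur[j - 1] + 1,             # insertion
                               prev[j - 1] + (ca != cb)))  # substitution
            prev = cur
        return prev[-1]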
Basically, Soundex is better for finding that "Schmidt" and "Smith" might be the same surname.
Levenshtein distance is better for spotting that the user has mistyped "Levnshtein" ;-)
As I posted on the other question, Daitch-Mokotoff is better for us Europeans (and I'd argue the US).
I've also read the Wikipedia article on Levenshtein distance. But I don't see why (in real life) it's better for the user than Soundex.
I agree with you on Daitch-Mokotoff; Soundex is biased because the original US census takers wanted 'Americanized' names.
Maybe an example of the difference would help:
Soundex puts additional weight on the start of a word - it keeps the first letter verbatim and encodes only the first few phonetic sounds after it into a four-character key. So while "Schmidt" and "Smith" will match, "Smith" and "Wmith" won't.
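Using a Soundex implementation like the sketch earlier in the thread:

    print(soundex("Schmidt"), soundex("Smith"), soundex("Wmith"))
    # S530 S530 W530 - Schmidt/Smith share a key; Wmith differs in its first letter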
Levenshtein distance would be better for finding typos - one or two missing or replaced letters produce only a small distance, even when those letters change the phonetics considerably.
I don't think either is better, and I'd consider both a distance algorithm and a phonetic one for helping users correct typed input.
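A rough sketch of what combining them might look like, reusing the soundex and levenshtein sketches above (the max_dist cutoff of 2 is an arbitrary choice):

    def suggest(word, dictionary, max_dist=2):
        """Rank candidates: phonetic matches first, then by edit distance."""
        candidates = []
        for entry in dictionary:
            dist = levenshtein(word.lower(), entry.lower())
            if dist <= max_dist or soundex(word) == soundex(entry):
                # False sorts before True, so phonetic matches come first
                candidates.append((soundex(word) != soundex(entry), dist, entry))
        return [entry for _, _, entry in sorted(candidates)]

    print(suggest("Levnshtein", ["Levenshtein", "Smith", "Schmidt"]))
    # ['Levenshtein']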
With Levenshtein I'm trying to find spelling mistakes by checking typed words against a txt file full of words, and I've got to say, most of the time I get a non-zero distance even when the word is spelled correctly.
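For what it's worth, an exact match gives a distance of exactly 0, so seeing a difference usually means looking at every word's distance instead of the best one - the usual pattern is to take the minimum over the list. A sketch, assuming the levenshtein helper above and a hypothetical words.txt with one word per line:

    with open("words.txt") as f:     # hypothetical word list, one word per line
        words = [line.strip() for line in f]

    def best_match(typed: str) -> str:
        # the minimum distance is 0 exactly when the typed word is in the list
        return min(words, key=lambda w: levenshtein(typed.lower(), w.lower()))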