views: 2125

answers: 7

I have a database of strings of arbitrary length that holds more than one million items (potentially many more).

I need to compare a user-provided string against the whole database and retrieve an identical string if it exists or otherwise return the closest fuzzy match(es) (60% similarity or better). The search time should ideally be under one second.

My idea is to use edit distance to compare each db string to the search string, after first narrowing down the candidates from the db based on their length.

However, as I will need to perform this operation very often, I'm thinking about building an index of the db strings to keep in memory and querying the index rather than the db directly.

Any ideas on how to approach this problem differently or how to build the in-memory index?
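
A minimal sketch of the approach described in the question, in Python (function names and structure are illustrative, not a definitive implementation). The length pre-filter relies on the fact that the edit distance can never be smaller than the difference in lengths, so strings whose lengths differ too much can never reach 60% similarity:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance, O(len(a) * len(b))."""
    if len(a) < len(b):
        a, b = b, a
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        current = [i]
        for j, cb in enumerate(b, 1):
            current.append(min(previous[j] + 1,                  # deletion
                               current[j - 1] + 1,               # insertion
                               previous[j - 1] + (ca != cb)))    # substitution
        previous = current
    return previous[-1]


def similarity(a: str, b: str) -> float:
    """1.0 for identical strings, 0.0 for completely different ones."""
    longest = max(len(a), len(b)) or 1
    return 1.0 - levenshtein(a, b) / longest


def closest_matches(query: str, candidates: list[str],
                    threshold: float = 0.6) -> list[str]:
    """Return candidates whose similarity to the query meets the threshold."""
    matches = []
    for s in candidates:
        longest = max(len(query), len(s)) or 1
        # Length pre-filter: the edit distance is at least the length
        # difference, so if that alone pushes similarity below the
        # threshold, skip the expensive DP computation.
        if abs(len(query) - len(s)) > (1.0 - threshold) * longest:
            continue
        if similarity(query, s) >= threshold:
            matches.append(s)
    return matches
```

Even with the pre-filter this is still a linear scan over the in-memory list, which is what the indexing suggestions in the answers below try to avoid.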

A: 

Since the amount of data is large, I would compute the value of a phonetic algorithm when inserting each record, store it in an indexed column, and then constrain my SELECT queries (WHERE clause) to a range on that column.

rhinof
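
A minimal sketch of that idea using SQLite from Python. The table, column, and phonetic_key() helper are hypothetical placeholders (any phonetic algorithm, such as Soundex, could supply the key), and the WHERE clause uses equality here, though a BETWEEN range works the same way:

```python
import sqlite3


def phonetic_key(s: str) -> str:
    # Placeholder: substitute a real phonetic algorithm (e.g. Soundex).
    return s[:1].upper()


conn = sqlite3.connect("strings.db")
conn.execute("CREATE TABLE IF NOT EXISTS items (text TEXT, phonetic TEXT)")
conn.execute("CREATE INDEX IF NOT EXISTS idx_phonetic ON items (phonetic)")


def insert(text: str) -> None:
    # Compute and store the phonetic key at insert time.
    conn.execute("INSERT INTO items (text, phonetic) VALUES (?, ?)",
                 (text, phonetic_key(text)))


def candidates(query: str) -> list[str]:
    # Constrain the SELECT to rows sharing the query's phonetic key;
    # this much smaller set can then be re-ranked by edit distance.
    rows = conn.execute("SELECT text FROM items WHERE phonetic = ?",
                        (phonetic_key(query),))
    return [text for (text,) in rows]
```
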
A: 

Compute the SOUNDEX hash (which is built into many SQL database engines) and index by it.

SOUNDEX is a hash based on the sound of a word, so misspellings of the same word are likely to end up with the same SOUNDEX hash.

Then find the SOUNDEX hash of the search string, and match on it.

Oddthinking
Soundex cannot see through many misspellings or other variants. It works well on names but not on arbitrary strings.
reinierpost
Interesting. I didn't know it was focused on names. I knew NYSIIS was. (http://en.wikipedia.org/wiki/New_York_State_Identification_and_Intelligence_System)
Oddthinking
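
For reference, a compact pure-Python version of the classic American Soundex code, in case your database engine does not expose a SOUNDEX() function (this is a sketch of the standard algorithm, not tied to any particular engine's variant):

```python
def soundex(word: str) -> str:
    """Classic American Soundex: first letter plus three digits."""
    codes = {}
    for letters, digit in (("BFPV", "1"), ("CGJKQSXZ", "2"), ("DT", "3"),
                           ("L", "4"), ("MN", "5"), ("R", "6")):
        for letter in letters:
            codes[letter] = digit
    word = "".join(c for c in word.upper() if c.isalpha())
    if not word:
        return ""
    result = word[0]
    prev = codes.get(word[0], "")
    for c in word[1:]:
        code = codes.get(c, "")
        if code and code != prev:
            result += code
        if c not in "HW":      # H and W do not break a run of equal codes
            prev = code
    return (result + "000")[:4]


# e.g. soundex("Robert") == soundex("Rupert") == "R163"
```
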
+3  A: 

This paper seems to describe exactly what you want.

Lucene (http://lucene.apache.org/) also implements Levenshtein edit distance.

zaratustra
The first link appears to have gone. :-/
Simon Nickerson
I emailed a contact to see if we can track down zarawesome and fix this link. Unfortunately, no direct email was provided, so..
Jeff Atwood
Sorry, yeah, I don't remember what the paper was about. I suggest you search for "Levenshtein edit distance" and see what comes up.
zaratustra
+1  A: 

You didn't mention your database system, but for PostgreSQL you could use the following contrib module: pg_trgm - trigram matching for PostgreSQL

The pg_trgm contrib module provides functions and index classes for determining the similarity of text based on trigram matching.

Endlessdeath
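
A rough sketch of how pg_trgm might be used from Python with psycopg2; the connection string, table, and column names are placeholders. The % operator returns rows above the current similarity threshold, and a GIN trigram index lets it avoid a full scan:

```python
import psycopg2

conn = psycopg2.connect("dbname=mydb")  # placeholder connection string
cur = conn.cursor()

# One-time setup: enable the extension and build a trigram index.
cur.execute("CREATE EXTENSION IF NOT EXISTS pg_trgm")
cur.execute("CREATE INDEX IF NOT EXISTS items_text_trgm_idx "
            "ON items USING gin (text gin_trgm_ops)")

user_string = "user-provided string"
cur.execute("SELECT set_limit(0.6)")  # 60% similarity or better

# %% escapes pg_trgm's % operator inside a parameterized psycopg2 query.
cur.execute("""
    SELECT text, similarity(text, %s) AS score
    FROM items
    WHERE text %% %s
    ORDER BY score DESC
    LIMIT 10
""", (user_string, user_string))
matches = cur.fetchall()
```
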
A: 

If your database supports it, you should use full-text search. Otherwise, you can use an indexer like Lucene or one of its various ports.

Osama ALASSIRY
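
As an illustration of the database-side option, a sketch using SQLite's FTS5 module from Python (available in most modern SQLite builds); note that full-text search matches on tokens rather than edit distance, so it complements rather than replaces the fuzzy-matching step:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# FTS5 virtual table: the engine builds and maintains the full-text index.
conn.execute("CREATE VIRTUAL TABLE items USING fts5(body)")
conn.executemany("INSERT INTO items (body) VALUES (?)",
                 [("the quick brown fox",), ("a lazy brown dog",)])

# MATCH queries the index instead of scanning every row.
rows = conn.execute("SELECT body FROM items WHERE items MATCH ?",
                    ("brown",)).fetchall()
```
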
A: 

Maybe implement some kind of binary search tree?

sorry if this is such a lame post -_-

Devoted
Are you saying the OP should pull the 1M strings out of the database, build a binary tree, and then search against it? That's O(log n) for an exact lookup versus O(1) with a hash index, and it cannot handle Levenshtein matching at all...
Alabaster Codify
You're right. -1
Ian
A: 

A very extensive explanation of relevant algorithms is in the book Algorithms on Strings, Trees, and Sequences: Computer Science and Computational Biology by Dan Gusfield.

reinierpost