I'm trying to write a free-text search algorithm for finding specific posts on a wall (similar to the kind of wall Facebook uses). A user is supposed to be able to type some words into a search field and get hits on posts that contain those words, with the best match on top and the other posts in decreasing order of match score.

I'm using the Levenshtein edit distance e = e(x, y) between a query word x and a post word y to calculate a score for each post: score(x, y) = 2^(2 - e) * (1 - min(e, |x|) / |x|), where |x| is the number of letters in the query word.
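In code, the scoring looks roughly like this (a minimal Python sketch; the function names are just illustrative, not part of any library):

    def levenshtein(x, y):
        # Classic dynamic-programming edit distance.
        prev = list(range(len(y) + 1))
        for i, cx in enumerate(x, start=1):
            curr = [i]
            for j, cy in enumerate(y, start=1):
                cost = 0 if cx == cy else 1
                curr.append(min(prev[j] + 1,          # deletion
                                curr[j - 1] + 1,      # insertion
                                prev[j - 1] + cost))  # substitution
            prev = curr
        return prev[-1]

    def word_score(query_word, post_word):
        # score(x, y) = 2^(2 - e) * (1 - min(e, |x|) / |x|)
        e = levenshtein(query_word, post_word)
        n = len(query_word)
        if n == 0:
            return 0.0
        return (2 ** (2 - e)) * (1 - min(e, n) / n)

    def post_score(query_words, post_words):
        # Every word in the post contributes, which is why long posts
        # can accumulate score without really being relevant.
        return sum(word_score(q, w) for q in query_words for w in post_words)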

Each word in a post contributes to the total score for that specific post. This approach seems to work well when the posts are of roughly the same size, but sometimes large posts manage to rack up a high score solely by having a lot of words in them while in practice not being relevant to the query.

Am I approaching this problem in the wrong way, or is there some way to normalize the score that I haven't thought of?

+1  A: 

Yes. There are many normalization methods you could use. This is a well-researched field!

Take a look at the vector space model. TF-IDF could be relevant to what you're doing. It's not strictly related to the method you're using, but it could give you some normalization leads.
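As a rough sketch (the names here are only illustrative), TF-IDF weighting with cosine normalization looks something like this; dividing by the vector norm is one way to keep long posts from winning purely on word count:

    import math
    from collections import Counter

    def tf_idf_vectors(posts):
        # posts: list of token lists; returns one {term: weight} dict per post.
        df = Counter()
        for tokens in posts:
            df.update(set(tokens))
        n = len(posts)
        vectors = []
        for tokens in posts:
            tf = Counter(tokens)
            vec = {t: tf[t] * math.log(n / df[t]) for t in tf}
            norm = math.sqrt(sum(w * w for w in vec.values())) or 1.0
            vectors.append({t: w / norm for t, w in vec.items()})
        return vectors

    def cosine_score(query_tokens, vec):
        # Query terms are weighted 1; terms missing from the post add nothing.
        return sum(vec.get(t, 0.0) for t in set(query_tokens))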

Also note that comparing the query against every post will be O(N) in the number of posts and could get very slow. Instead of string distance, you may have better results with stemming. You can then put the stemmed terms into a VSM inverted index.
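For example (again only a sketch; a real system would use a proper stemmer such as the Porter stemmer, the crude suffix stripping below just keeps the example self-contained):

    from collections import defaultdict

    def crude_stem(word):
        # Placeholder stemmer: strips a few common English suffixes.
        for suffix in ("ing", "ed", "es", "s"):
            if word.endswith(suffix) and len(word) > len(suffix) + 2:
                return word[: -len(suffix)]
        return word

    def build_index(posts):
        # posts: list of token lists; maps each stem to the ids of posts containing it.
        index = defaultdict(set)
        for post_id, tokens in enumerate(posts):
            for token in tokens:
                index[crude_stem(token.lower())].add(post_id)
        return index

    def candidate_posts(index, query_tokens):
        # Only posts sharing at least one stem with the query need scoring,
        # so you avoid comparing the query against every word of every post.
        hits = set()
        for token in query_tokens:
            hits |= index.get(crude_stem(token.lower()), set())
        return hits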

Many databases (including MySQL and Postgres) have full-text search. That's probably more practical than doing it yourself.

Joe
Thank you, tf-idf looks promising. I just need to apply it to my problem, since the search query I'm using can consist of several words whose occurrences should count for more if they appear in the same post. The number of posts on the wall is pretty modest (10,000 posts max), but since I need to compare each search word with all the words in all posts I get O(N^3)... Maybe it's simpler to just use the full-text search built into the MS SQL 2008 database instead. The reason I started looking into this was that I wanted a fuzzy word search, but maybe the database can handle that?
MdaG
I don't know about MSSQL, but the Postgres one is very good and very customizable. I have attempted to do something similar to what you're doing (fuzzy matching over documents, though not text). The current solution is to split the fuzzy matching algorithm down the centre and put a vector-space search in the middle. Seems to work for me! folktunefinder.com
Joe