views: 254
answers: 5

I am looking for a Java library to do some initial spell checking / data normalization on user generated text content, imagine the interests entered in a Facebook profile.

This text will be tokenized at some point (before or after spell correction, whatever works better) and some of it used as keys to search for (exact match). It would be nice to cut down misspellings and the like to produce more matches. It would be even better if the correction would perform well on tokens longer than just one word, e.g. "trinking coffee" would become "drinking coffee" and not "thinking coffee".

I found the following Java libraries for doing spelling correction:

  1. JAZZY does not seem to be under active development. Also, the dictionary-distance based approach seems inadequate because of the use of non-standard language in social network profiles and multi-word tokens.
  2. APACHE LUCENE seems to have a statistical spell checker that should be much better suited. The question here would be how to create a good dictionary? (We are not using Lucene otherwise, so there is no existing index.)

Any suggestions are welcome!

A: 

Try Peter Norvig's spell checker.
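
Not part of the original answer, but a minimal Java sketch of Norvig's approach may help: generate all candidates one edit away (deletes, transposes, replaces, inserts) and pick the one with the highest corpus frequency. (Norvig's original also considers distance-2 edits; this is trimmed for brevity, and the tiny corpus here is purely illustrative.)

```java
import java.util.*;

public class NorvigSpell {
    private final Map<String, Integer> freq = new HashMap<>();

    public NorvigSpell(List<String> corpusWords) {
        // Word frequencies from the training corpus act as a language model.
        for (String w : corpusWords) freq.merge(w.toLowerCase(), 1, Integer::sum);
    }

    // All strings one edit away: deletes, transposes, replaces, inserts.
    private Set<String> edits1(String word) {
        Set<String> edits = new HashSet<>();
        for (int i = 0; i <= word.length(); i++) {
            String l = word.substring(0, i), r = word.substring(i);
            if (!r.isEmpty()) edits.add(l + r.substring(1));                               // delete
            if (r.length() > 1) edits.add(l + r.charAt(1) + r.charAt(0) + r.substring(2)); // transpose
            for (char c = 'a'; c <= 'z'; c++) {
                if (!r.isEmpty()) edits.add(l + c + r.substring(1));                       // replace
                edits.add(l + c + r);                                                      // insert
            }
        }
        return edits;
    }

    // Return the known candidate with the highest corpus frequency.
    public String correct(String word) {
        if (freq.containsKey(word)) return word;
        String best = word;
        int bestCount = 0;
        for (String cand : edits1(word)) {
            int c = freq.getOrDefault(cand, 0);
            if (c > bestCount) { bestCount = c; best = cand; }
        }
        return best;
    }

    public static void main(String[] args) {
        List<String> corpus = Arrays.asList("drinking", "drinking", "coffee", "thinking");
        NorvigSpell sc = new NorvigSpell(corpus);
        System.out.println(sc.correct("trinking")); // "drinking" beats "thinking" on frequency
    }
}
```

Note how the corpus frequencies decide between "drinking" and "thinking" (both one edit from "trinking"), which is exactly why corpus choice matters, as discussed below.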

duffymo
I really like Norvig's little spell checker, that's awesome work! However, the question boils down to selecting the right text corpus (just like with the more advanced LUCENE). Taking frequencies from the freely available works of Shakespeare won't help correcting social network profiles.
dareios
So you're saying that "trinking" instead of "drinking" isn't addressed? I'll have to re-read Norvig's article and perhaps implement it for myself, because I thought it could help.
duffymo
I was referring to the problem of selecting the right corpus (to get the right frequencies, e.g. not the frequencies from English literature but ones suitable for variable-quality social network data). If I understand Norvig's code correctly, it only considers single-word edit distances of up to 2. That means it will work surprisingly well for single words and not at all for multi-word tokens.
dareios
A: 

You can hit the Gutenberg Project or the Internet Archive for lots and lots of corpus material.

Also, I think that the Wiktionary could help you. You can even make a direct download.

mlaverd
+2  A: 

What you want to implement is not a spelling corrector but fuzzy search. Peter Norvig's essay is a good starting point for building a fuzzy search that checks generated candidates against a dictionary.

Alternatively have a look at BK-Trees.
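
To make the BK-tree idea concrete, here is a minimal Java sketch (not from the original answer): each edge is labeled with the Levenshtein distance between parent and child, and the triangle inequality lets a search prune whole subtrees.

```java
import java.util.*;

public class BKTree {
    private static class Node {
        final String word;
        final Map<Integer, Node> children = new HashMap<>();
        Node(String word) { this.word = word; }
    }

    private Node root;

    public void add(String word) {
        if (root == null) { root = new Node(word); return; }
        Node node = root;
        while (true) {
            int d = levenshtein(word, node.word);
            if (d == 0) return; // already present
            Node child = node.children.get(d);
            if (child == null) { node.children.put(d, new Node(word)); return; }
            node = child;
        }
    }

    // All indexed words within maxDist edits of the query.
    public List<String> search(String query, int maxDist) {
        List<String> results = new ArrayList<>();
        if (root != null) search(root, query, maxDist, results);
        return results;
    }

    private void search(Node node, String query, int maxDist, List<String> out) {
        int d = levenshtein(query, node.word);
        if (d <= maxDist) out.add(node.word);
        // Triangle inequality: only children keyed in [d - maxDist, d + maxDist] can match.
        for (int k = Math.max(0, d - maxDist); k <= d + maxDist; k++) {
            Node child = node.children.get(k);
            if (child != null) search(child, query, maxDist, out);
        }
    }

    // Standard two-row dynamic-programming Levenshtein distance.
    static int levenshtein(String a, String b) {
        int[] prev = new int[b.length() + 1], cur = new int[b.length() + 1];
        for (int j = 0; j <= b.length(); j++) prev[j] = j;
        for (int i = 1; i <= a.length(); i++) {
            cur[0] = i;
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                cur[j] = Math.min(Math.min(cur[j - 1] + 1, prev[j] + 1), prev[j - 1] + cost);
            }
            int[] tmp = prev; prev = cur; cur = tmp;
        }
        return prev[b.length()];
    }

    public static void main(String[] args) {
        BKTree tree = new BKTree();
        for (String w : new String[] {"drinking", "thinking", "coffee", "tea"}) tree.add(w);
        System.out.println(tree.search("trinking", 1)); // words within edit distance 1
    }
}
```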

An n-gram index (as used by Lucene) produces better results for longer words. The approach of generating all candidates up to a given edit distance will probably work well enough for words found in normal text, but not for names, addresses and scientific texts. It will increase your index size, though.
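
As an illustration of the n-gram idea (my sketch, not the answerer's code, and not Lucene's actual implementation): index each word under its character trigrams, then rank indexed words by how many trigrams they share with the query. Longer words share more grams, which is why this works better for them.

```java
import java.util.*;

public class NGramIndex {
    private static final int N = 3; // trigrams
    private final Map<String, Set<String>> index = new HashMap<>();

    // Pad with a sentinel so prefixes and suffixes also form grams.
    private static List<String> ngrams(String word) {
        String padded = "$" + word + "$";
        List<String> grams = new ArrayList<>();
        for (int i = 0; i + N <= padded.length(); i++) grams.add(padded.substring(i, i + N));
        return grams;
    }

    public void add(String word) {
        for (String g : ngrams(word)) index.computeIfAbsent(g, k -> new HashSet<>()).add(word);
    }

    // Rank indexed words by the number of trigrams they share with the query.
    public List<String> candidates(String query) {
        Map<String, Integer> overlap = new HashMap<>();
        for (String g : ngrams(query))
            for (String w : index.getOrDefault(g, Collections.emptySet()))
                overlap.merge(w, 1, Integer::sum);
        List<String> ranked = new ArrayList<>(overlap.keySet());
        ranked.sort((a, b) -> overlap.get(b) - overlap.get(a));
        return ranked;
    }

    public static void main(String[] args) {
        NGramIndex idx = new NGramIndex();
        for (String w : new String[] {"drinking", "thinking", "coffee"}) idx.add(w);
        System.out.println(idx.candidates("trinking")); // "drinking" shares more trigrams than "thinking"
    }
}
```

In practice the overlap-ranked candidates would then be filtered by true edit distance, since trigram overlap is only a cheap pre-filter.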

If you have the texts indexed, you already have your text corpus (your dictionary). Only what is in your data can be found anyway, so you need not use an external dictionary.

A good resource is Introduction to Information Retrieval, chapter "Dictionaries and tolerant retrieval". It includes a short description of context-sensitive spelling correction.

Thomas Jung
Thank you for your insightful comment and the interesting book link. You are right, what I really want is fuzzy search. However, I will see how/if spell checking works for my particular application (maybe it's good enough right now) and revisit the ideas you mentioned later. Thanks a lot!
dareios
A: 

With regards to populating a Lucene index as the basis of a spell checker, this is a good way to solve the problem. Lucene has an out-of-the-box SpellChecker you can use.

There are plenty of word dictionaries available on the net that you can download and use as the basis for your Lucene index. I would suggest supplementing these with a number of domain-specific texts as well, e.g. if your users are medics, then supplement the dictionary with source texts from medical theses and publications.

Joel
Thanks, I think building a Lucene index will be my second try, after testing whether Jazzy works "good enough".
dareios
A: 

http://code.google.com/p/google-api-spelling-java is a good Java spell checking library, but I agree with Thomas Jung, that may not be the answer to your problem.

Michael Munsey
Thanks for the link anyway, interesting API!
dareios