I want to check in a Python program if a word is in the English dictionary.

I believe nltk wordnet interface might be the way to go but I have no clue how to use it for such a simple task.

def is_english_word(word):
    pass  # how do I implement is_english_word?

is_english_word(token.lower())

In the future, I might want to check if the singular form of a word is in the dictionary (e.g., properties -> property -> english word). How would I achieve that?

Thanks!

+1  A: 

Use a set to store the word list, because looking words up in a set is fast:

with open("english_words.txt") as word_file:
    english_words = set(word.strip().lower() for word in word_file)

def is_english_word(word):
    return word.lower() in english_words

print(is_english_word("ham"))  # should be True if you have a good english_words.txt

To answer the second part of the question, the plurals would already be in a good word list, but if you wanted to specifically exclude those from the list for some reason, you could indeed write a function to handle it. But English pluralization rules are tricky enough that I'd just include the plurals in the word list to begin with.
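If you do want to fall back to a singular form before the lookup, here is a rough sketch. The helper names are mine, and the suffix rules cover only the most common regular plurals (this is exactly the trickiness mentioned above, so treat it as a starting point, not a complete solution):

```python
def singularize(word):
    # Very rough heuristic: handles only regular English plural suffixes.
    if word.endswith("ies"):
        return word[:-3] + "y"  # properties -> property
    if word.endswith(("ses", "xes", "zes", "ches", "shes")):
        return word[:-2]        # boxes -> box
    if word.endswith("s") and not word.endswith("ss"):
        return word[:-1]        # words -> word
    return word                 # glass stays glass

def is_english_word_or_plural(word, english_words):
    # Check the word itself first, then its guessed singular form.
    word = word.lower()
    return word in english_words or singularize(word) in english_words
```

For anything beyond regular plurals (mice, geese, criteria) you would still want a real lemmatizer or a word list that includes inflected forms.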

As to where to find English word lists, I found several just by Googling "English word list". Here is one: http://www.sil.org/linguistics/wordlists/english/wordlist/wordsEn.txt You could search for British or American English word lists if you specifically want one of those dialects.

kindall
If you make `english_words` a `set` instead of a `list`, then `is_english_word` will run a lot faster.
dan04
I actually just redid it as a dict but you're right, a set is even better. Updated.
kindall
You can also ditch `.xreadlines()` and just iterate over `word_file`.
FogleBird
Yes, another good suggestion -- taken.
kindall
Thanks for your answer. The reason I wanted to use WordNet is that I could not find any standard/obvious list of English words that includes plurals. Where would I find such files (with plurals included)?
Barthelemy
Under ubuntu the packages `wamerican` and `wbritish` provide American and British English word lists as `/usr/share/dict/*-english`. The package info gives http://wordlist.sourceforge.net as a reference.
intuited
+8  A: 

For (much) more power and flexibility, use a dedicated spellchecking library like PyEnchant. There's a tutorial, or you could just dive straight in:

>>> import enchant
>>> d = enchant.Dict("en_US")
>>> d.check("Hello")
True
>>> d.check("Helo")
False
>>> d.suggest("Helo")
['He lo', 'He-lo', 'Hello', 'Helot', 'Help', 'Halo', 'Hell', 'Held', 'Helm', 'Hero', "He'll"]
>>>

PyEnchant comes with a few dictionaries (en_GB, en_US, de_DE, fr_FR), but can use any of the OpenOffice ones if you want more languages.

There appears to be a pluralisation library called inflect, but I've no idea whether it's any good.

katrielalex
Thank you, I did not know about PyEnchant and it is indeed much more useful for the kind of checks I want to make.
Barthelemy
A: 

For a semantic web approach, you could run a SPARQL query against WordNet in RDF format. Basically, use the urllib module to issue a GET request, ask for the results in JSON format, and parse them with Python's json module. If it's not an English word, you'll get no results.

As another idea, you could query Wiktionary's API.
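A sketch of the Wiktionary idea, using only the standard library (the helper names are mine, and I haven't exercised this against the live API; it relies on the MediaWiki convention that a missing page comes back under page id "-1"):

```python
import json
import urllib.parse
import urllib.request

API = "https://en.wiktionary.org/w/api.php"

def wiktionary_query_url(word):
    # Build a query URL for the MediaWiki API that backs Wiktionary.
    params = {"action": "query", "titles": word, "format": "json"}
    return API + "?" + urllib.parse.urlencode(params)

def in_wiktionary(word):
    # Missing pages are reported under the pseudo page id "-1".
    with urllib.request.urlopen(wiktionary_query_url(word)) as resp:
        pages = json.load(resp)["query"]["pages"]
    return "-1" not in pages
```

Note that Wiktionary contains words from many languages, proper nouns, and misspelling entries, so a page existing is a weaker signal than "this is an English dictionary word".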

burkestar