I'm generating some statistics for some English-language text and I would like to skip uninteresting words such as "a" and "the".

  • Where can I find some lists of these uninteresting words?
  • Is a list of these words the same as a list of the most frequently used words in English?

Update: these are apparently called "stop words", not "skip words".

+3  A: 

These are called stop words; check this sample.

Ahmed Said
Your link is dead; archived copy: http://web.archive.org/web/20080501010608/http://www.dcs.gla.ac.uk/idom/ir_resources/linguistic_utils/stop_words
bobobobo
+10  A: 

The magic word to put into Google is "stop words". This turns up a reasonable-looking list.

MySQL also has a built-in list of stop words, but it is far too comprehensive for my taste. For example, at our university library we had problems because "third" in "third world" was treated as a stop word.

Thomas
+3  A: 

Get statistics about word frequency in large text corpora, then ignore all words with a frequency above some threshold.
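A minimal sketch of that idea in Python (the corpus filename and the frequency cutoff are placeholders you would tune for your own data):

    from collections import Counter
    import re

    def candidate_stop_words(text: str, cutoff: int = 1000) -> set:
        """Count words and return those occurring more than `cutoff` times."""
        words = re.findall(r"[a-z']+", text.lower())
        counts = Counter(words)
        return {word for word, n in counts.items() if n > cutoff}

    # "corpus.txt" stands in for any large plain-text corpus.
    with open("corpus.txt") as f:
        stop_words = candidate_stop_words(f.read())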

Sean A.O. Harney
lol, this is just the work I'm trying to avoid!
Mark Harrison
There may be words that he would want to skip (because they're syntactic sugar in the English language) that are nonetheless not as common as words that he would want to keep (because they're typical of the domain). I can't think of any brilliant examples offhand, though. Maybe "thou" and "one"?
jprete
+2  A: 

I think I used the stop word list for German from here when I built a search application with Lucene.Net a while ago. The site contains a list for English as well, and the lists there are apparently the ones the Lucene project uses as defaults.

Robert Petermeier
+2  A: 

Depending on the subdomain of English you are working in, you may wish to compile your own stop word list. Some generic stop words can be meaningful in a particular domain; for example, the word "are" could actually be an abbreviation or acronym in some field. Conversely, depending on your application, you may want to ignore some domain-specific words that you would keep in general English. For example, if you are analyzing a corpus of hospital reports, you may wish to ignore words like "history" and "symptoms", since they appear in every report and may not be useful (from a plain-vanilla inverted index perspective).
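A hedged sketch of compiling such a list in Python (every word set below is a placeholder; the hospital-report examples come straight from the paragraph above):

    # Start from a generic English list, then adjust it for the domain.
    generic_stop_words = {"a", "the", "is", "are", "of", "and"}   # excerpt only

    # Generic stop words that are meaningful in this domain
    # (e.g. "are" used as an abbreviation/acronym somewhere).
    domain_exceptions = {"are"}

    # Words ubiquitous in the domain (e.g. hospital reports)
    # but not stop words in general English.
    domain_stop_words = {"history", "symptoms"}

    stop_words = (generic_stop_words - domain_exceptions) | domain_stop_words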

Otherwise, the lists returned by Google should be fine. Both the Porter stemmer and the Lucene search engine implementation use such lists.

hashable
+2  A: 

Typically these words appear in documents with the highest frequency. Assuming you have a global tally of words:

{ word → count }

If you order the words from the highest count to the lowest, the resulting graph (count on the y axis, word rank on the x axis) looks like an inverse log function. All of the stop words sit at the left, and the cutoff point for the "stop words" is where the first derivative is highest.
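A rough sketch of that cutoff in Python, treating the "highest first derivative" as the largest drop in count between adjacent ranks (the Counter input and the guard for tiny inputs are my own assumptions):

    from collections import Counter

    def detect_stop_words(counts: Counter) -> set:
        """Rank words by count; cut where the drop between adjacent ranks is largest."""
        ranked = counts.most_common()             # [(word, count), ...] descending
        if len(ranked) < 2:
            return set()
        drops = [ranked[i][1] - ranked[i + 1][1]  # discrete first derivative
                 for i in range(len(ranked) - 1)]
        cut = drops.index(max(drops)) + 1         # rank just past the steepest drop
        return {word for word, _ in ranked[:cut]}

On real text the steepest raw drop usually falls between the first two ranks, so in practice you would likely smooth the counts or difference the log counts before picking the cutoff.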

This approach is better than a fixed dictionary in several ways:

  • It is a universal approach, not bound to any one language.
  • It learns which words actually behave as stop words in your data.
  • It produces better results for collections that are very similar, and yields distinctive word listings for the items in each collection.
  • The stop words can be recalculated later (the results can be cached, with a statistical check for whether the stop words have changed since they were last calculated).
  • It can also eliminate time-based or informal words and names (such as slang, or a company name that appears as a header across a batch of documents).

The dictionary approach is better in other ways:

  • Lookup time is much faster.
  • The results are precached.
  • It's simple.
  • Someone else has already come up with the stop words.
monksy