This is an open question in NLP, so there is no simple answer.
My recommendation for quick-and-dirty "works-for-me" is topia.termextract.
Yahoo has a keyword extraction service (http://developer.yahoo.com/search/content/V1/termExtraction.html) which is low recall but high precision. In other words, it gives you a small number of high-quality terms, but misses many of the terms in your documents.
In Python, there is topia.termextract (http://pypi.python.org/pypi/topia.termextract/). It is relatively noisy and proposes many bogus keywords, but it is simple to use.
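Basic usage looks roughly like this sketch, based on the examples in the package's documentation (the sample text is made up, and the exact shape of the returned tuples may vary by version, so check the PyPI page):

```python
from topia.termextract import extract

# Build the default term extractor (uses topia's built-in POS tagger).
extractor = extract.TermExtractor()

# The extractor is callable on raw text and returns a list of
# (term, occurrences, strength) tuples.
text = (
    "Police shot the man dead and he was rushed to hospital. "
    "The police spokesman said the shooting was under investigation."
)
for term, occurrences, strength in extractor(text):
    print(term, occurrences, strength)
```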
Termine (http://www.nactem.ac.uk/software/termine/) is a webservice hosted in the UK that is also relatively noisy and proposes many bogus keywords. However, it appears to me to be slightly more accurate than topia.termextract. YMMV.
One way to denoise the output of the noisier extractors (e.g. topia.termextract and termine) is to build a vocabulary of terms that occur frequently, and then throw out proposed terms that are not in that vocabulary. In other words, make two passes over your corpus: in the first pass, count the frequency of each proposed keyword; in the second pass, discard the keywords that are too rare.
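A minimal sketch of that two-pass filter, assuming a hypothetical extract_keywords() function standing in for whichever extractor you use, and an arbitrary frequency cutoff you would tune on your own corpus:

```python
from collections import Counter

MIN_FREQ = 3  # arbitrary cutoff; tune it on your own data

def denoise(documents, extract_keywords):
    """Two-pass frequency filter over proposed keywords.

    `extract_keywords` is a placeholder for whatever extractor you use
    (topia.termextract, termine, ...); it should map a document string
    to a list of candidate terms.
    """
    # Pass 1: count how often each candidate term is proposed across the corpus.
    counts = Counter()
    for doc in documents:
        counts.update(extract_keywords(doc))

    # Keep only terms that are proposed frequently enough.
    vocabulary = {term for term, n in counts.items() if n >= MIN_FREQ}

    # Pass 2: discard candidates that did not make it into the vocabulary.
    return [
        [term for term in extract_keywords(doc) if term in vocabulary]
        for doc in documents
    ]
```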
If you want to write your own, perhaps the best introduction is written by Park, who is now at IBM:
- "Automatic glossary extraction: beyond terminology identification" available at http://portal.acm.org/citation.cfm?id=1072370
- "Glossary extraction and utilization in the information search and delivery system for IBM technical support"
Here are some more references, if you want to learn more:
- http://en.wikipedia.org/wiki/Terminology_extraction
- "CorePhrase: Keyphrase Extraction for Document Clustering"
- Liu et al. 2009, from NAACL HLT
- "Automatic Identification of Non-compositional Phrases"
- "Data Mining Meets Collocations Discovery"
- As well as a host of other references you can dig up on the subject.