This is called lemmatization, and what you call the "base of a word" is called a lemma. Tools that do this include morpha and its reimplementation in the Stanford POS tagger. Both, however, require POS-tagged input to resolve the inherent ambiguity in natural language: "leaves", for example, lemmatizes to "leaf" as a noun but to "leave" as a verb.
(POS tagging means determining word categories, e.g. noun or verb. I've been assuming you want a tool that handles English.)
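If you want to see what that looks like in practice, here's a minimal sketch using the Stanford CoreNLP pipeline, which wraps the Stanford tagger and a morpha-style lemmatizer. This assumes a reasonably recent CoreNLP (3.9+) on the classpath; it is one way to wire this up, not the only one:

```java
import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.pipeline.CoreDocument;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;

import java.util.Properties;

public class LemmatizeDemo {
    public static void main(String[] args) {
        // The "lemma" annotator needs POS tags, so "pos" must run before it.
        Properties props = new Properties();
        props.setProperty("annotators", "tokenize,ssplit,pos,lemma");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

        CoreDocument doc = new CoreDocument("The dogs were running and he leaves early.");
        pipeline.annotate(doc);

        for (CoreLabel tok : doc.tokens()) {
            // Prints e.g. "running/VBG -> run", "leaves/VBZ -> leave"
            System.out.println(tok.word() + "/" + tok.tag() + " -> " + tok.lemma());
        }
    }
}
```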
Edit: Since you're going to use this for search, here are a few tips:
- Simple stemming for English has a mixed reputation in the search engine world. Sometimes it works, often it doesn't.
- Automatic spelling correction may work better; this is what Google does. It's computationally expensive, though, if you want to do it right.
- Lemmatization may provide benefits, but probably only if you index and search for both the words and the lemmas; see the sketch after this list. (The same advice goes for stemming.)
- Here's a plugin for Lucene that does lemmatization.
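To make the "index both the words and the lemmas" tip concrete, here's a rough sketch using Lucene's core API (8+). The idea is to keep the original text and a pre-lemmatized copy in separate fields and to search across both. The field names and the `lemmatize` helper are made up for the example; plug in whatever lemmatizer you use:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryparser.classic.MultiFieldQueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;

public class DualFieldIndexDemo {
    // Hypothetical helper: run your lemmatizer of choice over the text.
    static String lemmatize(String text) {
        return text; // plug in morpha / Stanford CoreNLP here
    }

    public static void main(String[] args) throws Exception {
        Directory dir = new ByteBuffersDirectory();
        StandardAnalyzer analyzer = new StandardAnalyzer();

        try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(analyzer))) {
            String text = "The dogs were running";
            Document doc = new Document();
            doc.add(new TextField("text", text, Field.Store.YES));             // surface forms
            doc.add(new TextField("lemmas", lemmatize(text), Field.Store.NO)); // lemmatized copy
            writer.addDocument(doc);
        }

        // Search both fields, so a hit on either the word or its lemma matches.
        // For real use you'd also lemmatize the query terms for the "lemmas" field.
        Query q = new MultiFieldQueryParser(new String[] {"text", "lemmas"}, analyzer)
                .parse("running");
        try (DirectoryReader reader = DirectoryReader.open(dir)) {
            IndexSearcher searcher = new IndexSearcher(reader);
            System.out.println(searcher.search(q, 10).totalHits);
        }
    }
}
```

Keeping the two fields separate (rather than overwriting words with lemmas) is what lets you boost exact matches over lemma matches at query time.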
(The preceding remarks are based on my own research; I wrote my master's thesis on lemmatization in search engines for very noisy data.)