For performance reasons, you'll likely want a periodic process that builds an index ahead of time. There are very sophisticated ways to do this, but it's also possible to make something quite useful in a very simple way.
At heart, an "index" is the very same sort of thing you'd find at the end of a textbook, translated into the computer world. You'll want to scan through your tables of descriptions and build a key/value "dictionary", "hash", or whatever your language's equivalent structure is called. The keys will be the words you find in your descriptions. The values will be arrays (or lists, or whatever your language calls them) of the URLs in which each word can be found.
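Here's a minimal sketch in Python (I'm assuming your descriptions come in as (url, description) pairs; adapt to however your tables are actually shaped):

```
from collections import defaultdict

def build_index(pages):
    # pages: iterable of (url, description) pairs
    index = defaultdict(set)              # word -> set of urls containing it
    for url, description in pages:
        for word in description.lower().split():
            index[word.strip('.,!?;:')].add(url)
    return index

# index = build_index([("http://example.com/1", "A red woolen hat"), ...])
```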
When you process a query, you break it apart into words and look each one up in your dictionary. Each URL then gets a point for every query word it contains, and you rank your results by how many points each URL has. Alternatively, you can return only the results that contain all the words, by taking the set intersection of the URL lists you find when looking up each word.
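A sketch of both approaches, building on the index above:

```
from collections import Counter

def search(index, query, require_all=False):
    words = [w.strip('.,!?;:') for w in query.lower().split()]
    postings = [index.get(w, set()) for w in words]
    if require_all:
        # set intersection: only urls that contain every query word
        return set.intersection(*postings) if postings else set()
    # otherwise, one point per query word a url contains, best first
    scores = Counter()
    for urls in postings:
        scores.update(urls)
    return [url for url, points in scores.most_common()]
```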
Depending on what you're trying to achieve, you can get more sophisticated about how you construct your index, such as using phonetic representations of words as keys instead of the raw words themselves. When you do a search, break the search terms into their phonetic representations too, and in this way you eliminate problems caused by common misspellings.
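For instance, here's a simplified Soundex encoding (the classic phonetic algorithm; this version glosses over some of the official edge cases around 'h' and 'w') that you could apply to every word before using it as an index key:

```
def soundex(word):
    # simplified Soundex: first letter plus up to three digit codes
    codes = {**dict.fromkeys("bfpv", "1"),
             **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"),
             "l": "4",
             **dict.fromkeys("mn", "5"),
             "r": "6"}
    word = word.lower()
    if not word:
        return ""
    encoded = word[0].upper()
    prev = codes.get(word[0], "")
    for ch in word[1:]:
        code = codes.get(ch, "")
        if code and code != prev:     # drop vowels, collapse repeated codes
            encoded += code
        prev = code
    return (encoded + "000")[:4]

# soundex("relevant") == soundex("relevent")  -> both give "R415"
```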
Alternatively, you can address common misspellings directly by adding duplicate keys for each word, so the misspelled key points at the same list of URLs as the correct one.
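Something like this, with a hypothetical alias table of misspellings you've decided to support:

```
# hypothetical table of misspelling -> correct spelling
ALIASES = {"recieve": "receive", "seperate": "separate"}

def add_aliases(index, aliases=ALIASES):
    for wrong, right in aliases.items():
        if right in index:
            index[wrong] = index[right]   # both keys share the same url set
```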
Alternatively, you can index letter triplets (trigrams) rather than whole words, to catch alternative forms of the same word across different tenses and conjugations.
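The idea would be to use each triplet as an index key, and score query words by how many triplets they share with indexed words. For example:

```
def trigrams(word):
    # break a word into overlapping three-letter pieces
    word = word.lower()
    return {word[i:i + 3] for i in range(len(word) - 2)}

# trigrams("searching") and trigrams("searched") share
# "sea", "ear", "arc" and "rch", so either form matches the other.
```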
etc. etc.
You probably don't want to construct this index on every query (otherwise, what's the point?), so you'll want to be able to save it to disk and load it (or parts of it) into memory when a query comes in. Whether you use a database or something else for this, I leave up to you.
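One easy option in Python, if you don't want a full database, is the standard library's shelve module, which gives you a disk-backed dictionary so that only the keys a query actually touches get read into memory (the "index.db" filename is just a placeholder):

```
import shelve

def save_index(index, path="index.db"):
    with shelve.open(path, flag="n") as db:   # "n": always start a fresh file
        for word, urls in index.items():
            db[word] = urls

def lookup(word, path="index.db"):
    with shelve.open(path, flag="r") as db:   # read-only, loads one key at a time
        return db.get(word, set())
```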