I'm trying to implement something like Google Suggest on a web site I am building and am curious how to go about doing it on a very large data set. Sure, if you've got 1,000 items you cache them and just loop through them. But how do you go about it when you have a million items? Further, suppose that the items are not one word. Specifically, I have been really impressed by Pandora.com. For example, if you search for "wet" it brings back "Wet Sand", but it also brings back Toad The Wet Sprocket. And their autocomplete is FAST.

My first idea was to group the items by the first two letters, so you would have something like:

Dictionary<string,List<string>>

where the key is the first two letters. That's OK, but what if I want to do something similar to Pandora and allow the user to see results that match the middle of the string? With my idea, "wet" would never match "Toad the Wet Sprocket" because it would be in the "TO" bucket instead of the "WE" bucket. Perhaps you could split the string up so that "Toad the Wet Sprocket" goes in the "TO", "WE", and "SP" buckets (stripping out the word "the"), but when you're talking about a million entries, each of which may have a few words, it seems like you'd quickly start using up a lot of memory. OK, that was a long question. Thoughts?
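
To make that concrete, here is roughly what I had in mind, just as a sketch (the word splitting and stop-word handling are simplified):

using System;
using System.Collections.Generic;
using System.Linq;

class TwoLetterIndex
{
    // Maps the first two letters of each word to every phrase containing
    // such a word, e.g. "Toad the Wet Sprocket" lands in TO, WE and SP.
    private readonly Dictionary<string, List<string>> _buckets =
        new Dictionary<string, List<string>>(StringComparer.OrdinalIgnoreCase);

    public void Add(string phrase)
    {
        var words = phrase.Split(' ').Where(w =>
            w.Length >= 2 &&
            !w.Equals("the", StringComparison.OrdinalIgnoreCase));

        foreach (var word in words)
        {
            var key = word.Substring(0, 2);
            if (!_buckets.TryGetValue(key, out var list))
                _buckets[key] = list = new List<string>();
            list.Add(phrase);
        }
    }

    public IEnumerable<string> Search(string term)
    {
        if (term.Length < 2)
            return Enumerable.Empty<string>();

        // Only the matching bucket is scanned; Distinct() guards against a
        // phrase indexed twice under the same key.
        return _buckets.TryGetValue(term.Substring(0, 2), out var list)
            ? list.Where(p => p.IndexOf(term, StringComparison.OrdinalIgnoreCase) >= 0)
                  .Distinct()
            : Enumerable.Empty<string>();
    }
}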

+3  A: 

As I pointed out in "How to implement incremental search on a list", you should use a structure like a trie or a Patricia trie for searching patterns in large texts.

And for discovering patterns in the middle of some text there is one simple solution. I am not sure if it is the most efficient solution, but I usually do it as follows.

When I insert some new text into the trie, I just insert it, then remove the first character and insert again, remove the second character and insert again... and so on until the whole text is consumed. In other words, you insert every suffix of the text, turning the trie into a suffix trie. Then you can discover every substring of every inserted text with just one search from the root.

And it is really incredibly fast. To find all texts that contain a given sequence of n characters you have to inspect at most n nodes and perform a search on each node's list of children. Depending on the implementation of the child collection (array, list, binary tree, skip list), you might be able to identify the required child node in as few as five search steps, assuming case-insensitive Latin letters only (a binary search over at most 26 children takes about log2 26 ≈ 5 comparisons). Interpolation search might be helpful for large alphabets and for nodes with many children, such as those usually found near the root.
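
Here is a rough sketch of the idea in C# (just illustrative; the child collection is a plain dictionary here rather than one of the sorted structures mentioned above):

using System.Collections.Generic;

class SuffixTrie
{
    private class Node
    {
        public Dictionary<char, Node> Children = new Dictionary<char, Node>();
        public HashSet<string> Texts = new HashSet<string>(); // texts containing the path so far
    }

    private readonly Node _root = new Node();

    public void Add(string text)
    {
        // Insert every suffix, so substring matches become prefix searches.
        for (int start = 0; start < text.Length; start++)
        {
            var node = _root;
            for (int i = start; i < text.Length; i++)
            {
                char c = char.ToLowerInvariant(text[i]);
                if (!node.Children.TryGetValue(c, out var child))
                    node.Children[c] = child = new Node();
                node = child;
                node.Texts.Add(text); // every node on the path knows this text
            }
        }
    }

    public IEnumerable<string> Search(string pattern)
    {
        var node = _root;
        foreach (char c in pattern.ToLowerInvariant())
        {
            if (!node.Children.TryGetValue(c, out node))
                return new string[0];
        }
        return node.Texts;
    }
}

Storing the set of containing texts at every node is what makes this memory-hungry; compressed variants (Patricia tries, suffix trees) or storing texts only at terminal nodes and walking the subtree at query time trade query simplicity for space.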

Daniel Brückner
A trie works great for finding matches at the beginning of a string. However, with my current dataset the process of removing the first char and then inserting didn't end up working very well; it just started using way too much memory: more than a gig before it was half done with the dataset.
aquinas
May be a case of premature optimization: when I just ran a naive "contains" search, the runtime was less than 100 milliseconds. Lucene also looks really cool, so I might try that for fun. Another idea would be to use a combination of trie and naive search.
aquinas
Start with the trie, and if you have fewer than 20 starts-with matches, fall back to naive search. Why can I only insert 300 characters??!
aquinas
+2  A: 

I would use something along the lines of a trie, and have the value of each leaf node be a list of the possibilities that contain the word represented by that leaf. You could sort them in order of likelihood, or dynamically sort/filter them based on other words the user has entered into the search box, etc. It will execute very quickly and fit in a reasonable amount of RAM.
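
Roughly like this, where the trie is built over individual words and each terminal node carries the phrases containing that word (the names are just illustrative):

using System.Collections.Generic;

class WordTrie
{
    private class Node
    {
        public Dictionary<char, Node> Children = new Dictionary<char, Node>();
        public List<string> Phrases = new List<string>(); // phrases for the word ending here
    }

    private readonly Node _root = new Node();

    public void Add(string word, string phrase)
    {
        var node = _root;
        foreach (char c in word)
        {
            if (!node.Children.TryGetValue(c, out var child))
                node.Children[c] = child = new Node();
            node = child;
        }
        // Keep this list pre-sorted by likelihood (e.g. popularity) offline,
        // then a lookup just returns the top few entries.
        node.Phrases.Add(phrase);
    }
}

So "Toad The Wet Sprocket" would be added once per word: Add("toad", ...), Add("wet", ...), Add("sprocket", ...).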

rmeador
A: 

You keep the items on the server side (perhaps in a DB, if the dataset is really large and complex) and send AJAX calls from the client's browser that return the results as JSON/XML. You can do this in response to the user typing, or on a timer.
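
For instance, assuming ASP.NET MVC on the server, the endpoint could be as small as this (SuggestionIndex is a stand-in for whatever search structure you end up with):

using System.Linq;
using System.Web.Mvc;

public class SuggestController : Controller
{
    // GET /suggest/query?term=wet  ->  ["Wet Sand", "Toad The Wet Sprocket", ...]
    public ActionResult Query(string term)
    {
        var matches = SuggestionIndex.Search(term).Take(10).ToArray();
        return Json(matches, JsonRequestBehavior.AllowGet);
    }
}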

Assaf Lavie
A: 

Not algorithmically related to what you are asking, but make sure you have a delay (lag) of 200 ms or more after the keypress(es), so that the user has stopped typing before you issue the asynchronous request. That way you will reduce redundant HTTP requests to the server.
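
In the browser you would do this with setTimeout/clearTimeout in JavaScript; the same idea as a small C# sketch (not thread-safe, just illustrative):

using System;
using System.Threading;

// Restarts a timer on every keystroke; the action only fires once the
// input has been idle for the full delay.
class Debouncer
{
    private readonly TimeSpan _delay;
    private Timer _timer;

    public Debouncer(TimeSpan delay)
    {
        _delay = delay;
    }

    public void Trigger(Action action)
    {
        _timer?.Dispose();
        _timer = new Timer(_ => action(), null, _delay, Timeout.InfiniteTimeSpan);
    }
}

Usage: create one Debouncer(TimeSpan.FromMilliseconds(200)) per search box and call Trigger(RunSearch) on every keypress; RunSearch only fires after 200 ms of silence.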

cherouvim
A: 

Don't try to implement this yourself (unless you're just curious). Use something like Lucene or Endeca - it will save you time and hair.

Jim Arnold
Lucene seems really cool, thanks for the suggestion! But, yeah, of COURSE I'm curious! :)
aquinas