views: 82
answers: 1

I have a few GBs' worth of strings, and for every prefix I want to find the 10 most common suffixes. Is there an efficient algorithm for that?

An obvious solution would be:

  • Store a sorted list of <string, count> pairs.
  • Identify by binary search the extent covering the prefix we're searching.
  • Find the 10 highest counts in this extent.
  • Possibly precompute the results for all short prefixes, so a query never has to look at a large portion of the data (see the sketch after this list).
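
A minimal sketch of that approach in Python (the sample data and the "\uffff" sentinel used as an upper bound are illustrative assumptions, not part of the question):

    import bisect
    import heapq

    # Sorted list of (string, count) pairs; sample data for illustration.
    pairs = sorted([
        ("apple", 12), ("applet", 3), ("apply", 7),
        ("banana", 5), ("band", 9), ("bandit", 2),
    ])
    keys = [s for s, _ in pairs]  # parallel key list for bisect

    def top_suffixes(prefix, k=10):
        # Binary-search the extent [lo, hi) of strings starting with the prefix.
        lo = bisect.bisect_left(keys, prefix)
        hi = bisect.bisect_left(keys, prefix + "\uffff")  # sorts after any continuation
        # Pick the k highest counts within that extent.
        return heapq.nlargest(k, pairs[lo:hi], key=lambda p: p[1])

    print(top_suffixes("app", 3))  # [('apple', 12), ('apply', 7), ('applet', 3)]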

I'm not sure if that would actually be efficient at all. Is there a better way I've overlooked?

Answers must come in real time, but the preprocessing can take as long as necessary.

+5  A: 

Place the words in a tree, e.g. a trie or radix tree, storing a "number of occurrences" counter for each full word, so you know which nodes are word endings and how common they are.

Find the prefix/postfix combos by iterating over the subtree below the prefix's node (see the sketch below).
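
A rough sketch of this idea in Python (the node layout and the helper names insert/top_completions are my own; this version scans the whole subtree on every query, whereas a production version would precompute the top-10 list per node during preprocessing, along the lines the question suggests):

    import heapq

    class TrieNode:
        def __init__(self):
            self.children = {}   # char -> TrieNode
            self.count = 0       # occurrences of the word ending at this node

    def insert(root, word, count=1):
        node = root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.count += count

    def top_completions(root, prefix, k=10):
        # Walk down to the prefix's node, then iterate its subtree.
        node = root
        for ch in prefix:
            if ch not in node.children:
                return []
            node = node.children[ch]
        found, stack = [], [(node, prefix)]
        while stack:
            n, word = stack.pop()
            if n.count:
                found.append((word, n.count))
            stack.extend((child, word + c) for c, child in n.children.items())
        return heapq.nlargest(k, found, key=lambda f: f[1])

    root = TrieNode()
    for w, c in [("apple", 12), ("applet", 3), ("apply", 7), ("band", 9)]:
        insert(root, w, c)
    print(top_completions(root, "app", 3))  # [('apple', 12), ('apply', 7), ('applet', 3)]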

Both of these operations are O(n*k), where k is the length of the longest word; this is the same complexity as a hash table.

The HAT-trie is a cache-conscious version that promises high performance.

Will
+1 but I would suggest adding the characters from right to left to the trie.
Lieven
@Lieven: a trie may either be used as a prefix tree or a postfix tree.
Matthieu M.
@Matthieu: thanks, it seems I misunderstood tries.
Lieven