views:

1112

answers:

7

For a Data Structures project, I must find the shortest path between two words (like "cat" and "dog"), changing only one letter at a time. We are given a Scrabble word list to use in finding our path. For example:

cat -> bat -> bet -> bot -> bog -> dog

I've solved the problem using a breadth-first search, but am seeking something better (I represented the dictionary with a trie).

Please give me some ideas for a more efficient method (in terms of speed and memory). Something ridiculous and/or challenging is preferred.

Thanks!

Edit: I asked one of my friends (he's a junior) and he said that there is no efficient solution to this problem. He said I would learn why when I took the algorithms course. Any comments on that?

Edit: We must move from word to word. We cannot go cat -> dat -> dag -> dog. We also have to print out the traversal.

A: 

You could find the longest common subsequence, and thereby find the letters that must be changed.

Calyth
+8  A: 

NEW ANSWER

Given the recent update, you could try A* with the Hamming distance as a heuristic. It's an admissible heuristic since it's not going to overestimate the distance.
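
A rough sketch of what that could look like (assuming the dictionary is already loaded into a Set<String> of lowercase, equal-length words; the class and helper names are just illustrative):

    import java.util.*;

    class AStarLadder {
        // Hamming distance: number of positions where the two words differ.
        static int hamming(String a, String b) {
            int d = 0;
            for (int i = 0; i < a.length(); i++)
                if (a.charAt(i) != b.charAt(i)) d++;
            return d;
        }

        // All dictionary words one letter away from w.
        static List<String> neighbors(String w, Set<String> dict) {
            List<String> out = new ArrayList<>();
            char[] c = w.toCharArray();
            for (int i = 0; i < c.length; i++) {
                char old = c[i];
                for (char x = 'a'; x <= 'z'; x++) {
                    c[i] = x;
                    if (x != old && dict.contains(new String(c))) out.add(new String(c));
                }
                c[i] = old;
            }
            return out;
        }

        // A* search using the Hamming distance to the goal as the heuristic.
        static List<String> ladder(String start, String goal, Set<String> dict) {
            Map<String, Integer> g = new HashMap<>();      // best known cost from start
            Map<String, String> parent = new HashMap<>();
            PriorityQueue<String> open = new PriorityQueue<>(
                Comparator.comparingInt((String w) -> g.get(w) + hamming(w, goal)));
            g.put(start, 0);
            open.add(start);
            while (!open.isEmpty()) {
                String w = open.poll();
                if (w.equals(goal)) {                      // reconstruct and return the path
                    LinkedList<String> path = new LinkedList<>();
                    for (String x = w; x != null; x = parent.get(x)) path.addFirst(x);
                    return path;
                }
                for (String n : neighbors(w, dict)) {
                    int cost = g.get(w) + 1;
                    if (cost < g.getOrDefault(n, Integer.MAX_VALUE)) {
                        g.put(n, cost);
                        parent.put(n, w);
                        open.remove(n);                    // re-queue with the new priority
                        open.add(n);
                    }
                }
            }
            return null;                                   // no ladder exists
        }
    }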

OLD ANSWER

You can modify the dynamic-program used to compute the Levenshtein distance to obtain the sequence of operations.

EDIT: If there are a constant number of strings, the problem is solvable in polynomial time; otherwise, it's NP-hard (it's all there on Wikipedia), assuming your friend is talking about the problem being NP-hard.

EDIT: If your strings are of equal length, you can use Hamming distance.

Jacob
Given the example, that should be Hamming distance.
Zed
You can't modify the Levenshtein function to do this, because you have a limited dictionary of valid words - and so the shortest valid path could be very much longer than the number of characters in the string.
Nick Johnson
^ My thoughts exactly.
dacman
Right, the word->word concept was only clear in the recent edit.
Jacob
Yes, sorry about that. What if I were to use a Branch and Bound algorithm, using the Hamming distance to bound?
dacman
I was thinking of A* with the Hamming distance as a heuristic. It's definitely admissible since it's not going to overestimate the distance.
Jacob
This depends on how you're measuring runtime. A* could get there faster - but in pathological cases, it could also take longer than the BFS (or dual BFS - see my answer) solution.
Nick Johnson
*shameless plug* I actually asked a question a month or two ago about this problem except in Python, and I ended up using A* with Hamming distance as the heuristic (although I didn't know it was called Hamming Distance at the time). I got some really good optimizations (some python specific, some not). The question is here http://stackoverflow.com/questions/788084/how-can-i-optimize-this-python-code and in the question updates I linked to a blog-post I made (about halfway down... there were a lot of updates to the question) where I described the optimizations along with the code.
Davy8
Oh good, so it works :)
Jacob
I did this and it worked wonderfully well. Amazed my professor too!
dacman
Good to know! Algorithms is always fun :)
Jacob
+3  A: 

This is a typical dynamic programming problem. Check for the Edit Distance problem.

Freddy
No it is not. Read the question carefully. There is a fixed given dictionary, so the edit distance has very little relevance.
ShreevatsaR
A: 

My gut feeling is that your friend is correct, in that there isn't a more efficient solution, but that is assuming you are reloading the dictionary every time. If you were to keep a running database of common transitions, then surely there would be a more efficient method for finding a solution, but you would need to generate the transitions beforehand, and discovering which transitions would be useful (since you can't generate them all!) is probably an art of its own.

Trey
+1  A: 

There are methods of varying efficiency for finding links - you can construct a complete graph for each word length, or you can construct a BK-Tree, for example, but your friend is right - BFS is the most efficient algorithm.
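
For example, one cheap way to build those one-letter links is to bucket words by each single-wildcard pattern - a sketch, assuming lowercase words already grouped by length:

    import java.util.*;

    class LadderGraph {
        // Two words end up adjacent iff they differ in exactly one letter.
        static Map<String, List<String>> buildLinks(Collection<String> words) {
            Map<String, List<String>> buckets = new HashMap<>();   // "c_t" -> [cat, cot, cut]
            for (String w : words) {
                for (int i = 0; i < w.length(); i++) {
                    String pattern = w.substring(0, i) + "_" + w.substring(i + 1);
                    buckets.computeIfAbsent(pattern, k -> new ArrayList<>()).add(w);
                }
            }
            Map<String, List<String>> graph = new HashMap<>();
            for (List<String> bucket : buckets.values())
                for (String a : bucket)
                    for (String b : bucket)
                        if (!a.equals(b))
                            graph.computeIfAbsent(a, k -> new ArrayList<>()).add(b);
            return graph;
        }
    }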

There is, however, a way to significantly improve your runtime: Instead of doing a single BFS from the source node, do two breadth first searches, starting at either end of the graph, and terminating when you find a common node in their frontier sets. The amount of work you have to do is roughly half what is required if you search from only one end.
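
A sketch of that dual-BFS idea, e.g. as another method on the LadderGraph class above (this version returns only the ladder length; keep a parent map per direction if you also need to print the path):

    // Two breadth-first searches, one from each end, meeting in the middle.
    static int bidirectionalBfs(String start, String goal, Map<String, List<String>> graph) {
        if (start.equals(goal)) return 0;
        Set<String> front = new HashSet<>(Collections.singleton(start));
        Set<String> back  = new HashSet<>(Collections.singleton(goal));
        Set<String> seen  = new HashSet<>(Arrays.asList(start, goal));
        int steps = 0;
        while (!front.isEmpty() && !back.isEmpty()) {
            if (front.size() > back.size()) {               // always expand the smaller frontier
                Set<String> tmp = front; front = back; back = tmp;
            }
            Set<String> next = new HashSet<>();
            for (String w : front) {
                for (String n : graph.getOrDefault(w, Collections.emptyList())) {
                    if (back.contains(n)) return steps + 1; // the two searches have met
                    if (seen.add(n)) next.add(n);
                }
            }
            front = next;
            steps++;
        }
        return -1;                                          // no ladder exists
    }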

Nick Johnson
+5  A: 

With a dictionary, BFS is optimal, but the running time needed is proportional to its size (V+E). With n letters, the dictionary might have ~a^n entries, where a is the alphabet size. If the dictionary contains all words except the one that should be at the end of the chain, you'll traverse all possible words but won't find anything. This is graph traversal, but the size might be exponentially large.

You may wonder if it is possible to do it faster - to browse the structure "intelligently" and do it in polynomial time. The answer is, I think, no.

The problem:

You're given a fast (linear) way to check whether a word is in the dictionary, and two words u, v; you have to check whether there's a sequence u -> a1 -> a2 -> ... -> an -> v.

is NP-hard.

Proof: Take some 3SAT instance, like

(p or q or not r) and (p or not q or r)

You'll start with 0 000 00 and are to check if it is possible to go to 2 222 22.

The first character will be "are we finished", the next three bits will control p, q, r, and the last two will control the clauses.

Allowed words are:

  • Anything that starts with 0 and contains only 0's and 1's
  • Anything that starts with a single 2 and is legal: the rest consists of 0's and 1's, every clause bit is set correctly according to the variable bits, and every clause bit is 1 (so this shows that the formula is satisfiable).
  • Anything that starts with at least two 2's and is then composed of 0's and 1's (regular expression: 222*(0+1)*), like 22221101 but not 2212001.

To produce 2 222 22 from 0 000 00, you have to do it in this way:

(1) Flip the appropriate bits - e.g. reach 0 101 11 in four steps. This requires finding a 3SAT solution.

(2) Change the first character to 2: 2 101 11. Here the rules verify that this is indeed a 3SAT solution.

(3) Change 2 101 11 -> 2 201 11 -> 2 221 11 -> 2 222 11 -> 2 222 21 -> 2 222 22.

These rules ensure that you can't cheat (check this). Reaching 2 222 22 is possible only if the formula is satisfiable, and checking that is NP-hard. I feel it might be even harder (#P or FNP, probably), but NP-hardness is enough for this purpose, I think.
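
To make the allowed-word rules concrete, here's a tiny membership check (a sketch, hard-coded for the example formula above, with the 6-character layout finished-flag, p, q, r, clause 1, clause 2; the method name is just illustrative):

    static boolean isAllowedWord(String w) {
        if (w.length() != 6) return false;
        // Rule 3: at least two leading 2's, then only 0's and 1's.
        if (w.startsWith("22")) {
            int i = 0;
            while (i < 6 && w.charAt(i) == '2') i++;
            return w.substring(i).matches("[01]*");
        }
        // Rules 1 and 2: everything after the first character must be 0's and 1's.
        if (!w.substring(1).matches("[01]*")) return false;
        if (w.charAt(0) == '0') return true;                // Rule 1
        if (w.charAt(0) != '2') return false;
        boolean p = w.charAt(1) == '1', q = w.charAt(2) == '1', r = w.charAt(3) == '1';
        boolean c1 = p || q || !r;                          // (p or q or not r)
        boolean c2 = p || !q || r;                          // (p or not q or r)
        // Rule 2: both clause bits set to 1 and consistent with the variable bits.
        return c1 && c2 && w.charAt(4) == '1' && w.charAt(5) == '1';
    }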

Edit: You might be interested in the disjoint-set data structure. This will take your dictionary and group words that can be reached from each other. You can also store a path from every vertex to the root or some other vertex. This will give you a path, but not necessarily the shortest one.
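
A rough union-find sketch of that idea (assuming you already have a precomputed map of one-letter links between dictionary words; the class name is just illustrative):

    import java.util.*;

    class WordComponents {
        // parent.get(w) == w means w is the root of its set.
        private final Map<String, String> parent = new HashMap<>();

        WordComponents(Map<String, List<String>> links) {
            for (String w : links.keySet()) parent.put(w, w);
            for (Map.Entry<String, List<String>> e : links.entrySet())
                for (String n : e.getValue())
                    union(e.getKey(), n);
        }

        String find(String w) {
            String p = parent.get(w);
            if (p == null || p.equals(w)) return w;
            String root = find(p);
            parent.put(w, root);                            // path compression
            return root;
        }

        void union(String a, String b) {
            parent.put(find(a), find(b));
        }

        // True iff some ladder (not necessarily the shortest) exists between a and b.
        boolean connected(String a, String b) {
            return find(a).equals(find(b));
        }
    }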

sdcvvc
Great summary. If the original author is looking for something really creative, the edit distance can be used in conjunction with the word-reach graph as a fitness function for a genetic algorithm. The output would be the path through the graph from the start word to the end word, so the best answer would be the shortest. (While cool, this would find an answer quickly, but will not yield a definitive answer. Very TS.) I'd stick with the real world: eliminate cycles, enumerate the paths, and find the 'best' one using the above suggestions. This is tagged 'Java', so try JGraphT.
Joe Liversedge
Cool, not often does one see NP-hardness proofs in Stackoverflow answers. :-) I too suspect this problem is harder than NP (PSPACE-complete?) if the dictionary is given simply as a membership oracle... but if the dictionary is actually given in the input, then the problem can trivially be done in polynomial time, as the dictionary size is part of the input (that's the flaw in your NP-hardness proof).
ShreevatsaR
A: 

What you are looking for is called the Edit Distance. There are many different types.

From (http://en.wikipedia.org/wiki/Edit%5Fdistance): "In information theory and computer science, the edit distance between two strings of characters is the number of operations required to transform one of them into the other."

This article about Jazzy (the Java spell check API) has a nice overview of these sorts of comparisons (it's a similar problem - providing suggested corrections): http://www.ibm.com/developerworks/java/library/j-jazzy/
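
For reference, the classic dynamic-programming edit distance looks roughly like this (a sketch; note that, as other answers point out, edit distance alone ignores the dictionary constraint in the question):

    static int editDistance(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;      // delete all of a's prefix
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;      // insert all of b's prefix
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                int sub = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1,    // deletion
                                            d[i][j - 1] + 1),   // insertion
                                   d[i - 1][j - 1] + sub);      // substitution / match
            }
        }
        return d[a.length()][b.length()];
    }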

Aidos