Suppose you have a dictionary that contains valid words.

Given an input string with all spaces removed, determine whether the string is composed of valid words or not.

You can assume the dictionary is a hashtable that provides O(1) lookup.

Some examples:

helloworld -> hello world (valid)
isitniceinhere -> is it nice in here (valid)
zxyy -> invalid

If a string has multiple possible parsings, just returning true is sufficient.

The string can be very long. Hence, think of an algorithm that is both space- and time-efficient.

A: 

I'd go for a recursive algorithm with implicit backtracking. Function signature: f: input -> result, with input being the string and result either true or false, depending on whether the entire string can be tokenized correctly.

Works like this:

  1. If input is the empty string, return true.
  2. Look at the length-one prefix of input (i.e., the first character). If it is in the dictionary, run f on the suffix of input. If that returns true, return true as well.
  3. If the length-one prefix from the previous step is not in the dictionary, or the invocation of f in the previous step returned false, make the prefix longer by one and repeat at step 2. If the prefix cannot be made any longer (already at the end of the string), return false.
  4. Rinse and repeat.
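
In Python, the recursion might look roughly like this (an illustrative sketch; the name can_tokenize and the assumption that the dictionary behaves like a set with O(1) membership tests are mine, not part of the original answer):

    def can_tokenize(s, dictionary):
        """Return True if s can be split entirely into words from the dictionary."""
        if s == "":                           # step 1: empty input tokenizes trivially
            return True
        for end in range(1, len(s) + 1):      # steps 2-4: grow the prefix one character at a time
            prefix = s[:end]
            if prefix in dictionary and can_tokenize(s[end:], dictionary):
                return True                   # this prefix is a word and the suffix tokenizes
        return False                          # no prefix worked; backtrack

    # can_tokenize("helloworld", {"hello", "world"})  -> True
    # can_tokenize("zxyy", {"hello", "world"})        -> False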

For dictionaries with a low to moderate number of ambiguous prefixes, this should fetch a pretty good running time in practice (O(n) in the average case, I'd say), though in theory, pathological cases with O(2^n) complexity can probably be constructed. However, I doubt we can do any better, since we need backtracking anyway, so the "instinctive" O(n) approach using a conventional pre-computed lexer is out of the question. ...I think.

EDIT: the estimate for the average-case complexity is likely incorrect, see my comment.

Space complexity would be only stack space, so O(n) even in the worst-case.

ig2r
Can you clarify the claim "average case O(n)"?
SiLent SoNG
Hm, come to think about it, that average O(n) may have been a misjudgment on my part. For instance, if the algorithm sees the prefix 'a' in the dictionary (assuming a dictionary of English words for a minute), it would first try to tokenize the remaining input... likely adding the *entire* suffix character-by-character before deciding that this cannot be tokenized and reverting to expanding the 'a'. Looks more like something polynomial, then. Considering Cocke-Younger-Kasami has something like O(n^3), too... good question, then.
ig2r
CYK parsing algorithm. You remind me of my NLP course. :P
SiLent SoNG
In the worst case, the algorithm will attempt each possible text span to check whether it is a valid word or not. There are O(n^2) possible text spans, hence the worst case is O(n^2).
SiLent SoNG
The worst case is O(2^N): we have T(0) = O(1) and T(N) = sum(T(i), i = 0..N-1), which gives T(N) = 2*T(N-1) and hence T(N) = Θ(2^N).
Nabb
@Nabb: A careful implementation will not go exponential. The algorithm keeps asking, at the second step: does the remaining substring satisfy the property? If we have previously attempted this substring, don't compute it again. There are n^2 substrings in total, and hence the worst case is O(n^2).
SiLent SoNG
@SiLent SoNG: "If previously we have attempted this substring, don't compute it again." - this solution is not doing this. You can add this by keeping a cache of previously attempted substrings. This is basically memoization and is actually the same as the Dynamic Programming solution suggested by @Falk Hüffner. Only a DP implementation would do away with the recursion.
MAK
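
For reference, caching the result for each suffix (identified by its start index) can be bolted onto the recursion sketched in the answer above; the following is an illustrative sketch, not code from the thread:

    from functools import lru_cache

    def can_tokenize_memo(s, dictionary):
        """Same recursion as above, but each suffix s[start:] is solved at most once."""
        @lru_cache(maxsize=None)
        def solve(start):
            if start == len(s):
                return True
            for end in range(start + 1, len(s) + 1):
                if s[start:end] in dictionary and solve(end):
                    return True
            return False
        return solve(0)

With at most n distinct suffixes and at most n prefixes tried per suffix, the dictionary lookups are bounded by O(n^2), at the cost of O(n) extra space for the cache (plus the recursion stack).
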
+2  A: 

This can be done in quadratic time by dynamic programming, see here.
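
The linked write-up isn't reproduced here, but the standard quadratic "word break" DP looks roughly like the following sketch (illustrative only; ok[i] records whether the prefix s[:i] can be segmented into dictionary words):

    def word_break(s, dictionary):
        """Bottom-up DP: ok[i] is True iff s[:i] splits into dictionary words."""
        n = len(s)
        ok = [False] * (n + 1)
        ok[0] = True                      # the empty prefix is trivially segmentable
        for i in range(1, n + 1):
            for j in range(i):            # try every split point j of the prefix s[:i]
                if ok[j] and s[j:i] in dictionary:
                    ok[i] = True
                    break
        return ok[n]

Each of the O(n^2) substrings s[j:i] is looked up at most once, giving O(n^2) time and O(n) extra space.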

Falk Hüffner
This is optimal for the dictionary data structure as given, since we could have substrings [0..i] for i=1..N/2-1 match, as well as substrings [i..N-1] for i=N/2..N-2 match.
Nabb
Ah, interesting. I assumed that there might be a way to attain better worst-case bounds than my straightforward recursion approach by avoiding recomputations somehow (given the existence of CYK and all), but couldn't quite figure out how to do it.
ig2r
+1  A: 

I think the set of all strings that occur as the concatenation of valid words (words taken from a finite dictionary) forms a regular language over the alphabet of characters. You can then build a finite automaton that accepts exactly the strings you want; computation time is O(n).

For instance, let the dictionary consist of the words {bat, bag}. Then we construct the following automaton: states are denoted by 0, 1, 2. Edges: (0,1,b), (1,2,a), (2,0,t), (2,0,g); where the triple (x,y,z) means an edge leading from x to y on input z. The only accepting state is 0. In each step, on reading the next input symbol, you have to calculate the set of states that are reachable on that input. Given that the number of states in the automaton is constant, this is of complexity O(n). As for space complexity, I think you can make do with O(number of words) with the hint for construction above.

For another example, with the words {bag, bat, bun, but} the automaton would look like this: [automaton diagram]

Supposing that the automaton has already been built (the time to do this has something to do with the length and number of words :-) we now argue that the time to decide whether a string is accepted by the automaton is O(n) where n is the length of the input string. More formally, our algorithm is as follows:

  1. Let S be a set of states, initially containing the starting state.
  2. Read the next input character, let us denote it by a.
  3. For each element s in S, determine the state that we move into from s on reading a; that is, the state r such that with the notation above (s,r,a) is an edge. Let us denote the set of these states by R. That is, R = {r | s in S, (s,r,a) is an edge}.
  4. (If R is empty, the string is not accepted and the algorithm halts.)
  5. If there are no more input symbols, check whether any of the accepting states is in R. (In our case, there is only one accepting state, the starting state.) If so, the string is accepted, if not, the string is not accepted.
  6. Otherwise, take S := R and go to 2.

Now, there are as many executions of this cycle as there are input symbols. The only thing we have to examine is that steps 3 and 5 take constant time. Given that the size of S and R is not greater than the number of states in the automaton, which is constant, and that we can store edges in a way such that lookup time is constant, this follows. (Note that we of course lose multiple 'parsings', but that was not a requirement either.) I think this is actually called the membership problem for regular languages, but I couldn't find a proper online reference.
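
As a concrete illustration of this state-set simulation (a sketch under the assumption that the automaton is stored as a trie whose root doubles as the start and only accepting state; all names are mine, not from the answer):

    def build_trie(words):
        """Trie over the dictionary words. Node 0 is the root; children[node] maps a
        character to a child node, and word_end[node] marks nodes where a word ends."""
        children, word_end = [dict()], [False]
        for word in words:
            node = 0
            for ch in word:
                if ch not in children[node]:
                    children[node][ch] = len(children)
                    children.append(dict())
                    word_end.append(False)
                node = children[node][ch]
            word_end[node] = True
        return children, word_end

    def accepts(text, children, word_end):
        """Simulate the set of reachable states; whenever a word ends, the start
        state (node 0) becomes reachable again so a new word can begin."""
        active = {0}
        for ch in text:
            nxt = set()
            for node in active:
                child = children[node].get(ch)
                if child is not None:
                    nxt.add(child)
                    if word_end[child]:
                        nxt.add(0)        # a complete word just ended here
            if not nxt:                   # step 4: no state reachable, reject
                return False
            active = nxt
        return 0 in active                # accept iff we sit exactly on a word boundary

    # children, word_end = build_trie({"bag", "bat", "bun", "but"})
    # accepts("batbun", children, word_end)  -> True
    # accepts("batbu",  children, word_end)  -> False

In this sketch the per-character work is bounded by the number of trie nodes (a constant with respect to n), matching the O(n) time claim, and the space used is proportional to the total length of the dictionary words.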

sandris
Say we have n valid words. How many possible automata can there be by chaining words up (either different or the same)? I guess it's O(n!). The space complexity is unacceptable.
SiLent SoNG
There's no need to 'chain up words', I'll try to edit my answer :-)
sandris
@sandris: Chain of chars, nice. Can you justify the O(n) time cost, i.e., finding a complete path in the graph in O(n)?
SiLent SoNG
@SiLent SoNG: I added some explanation to my answer.
sandris