views:

1006

answers:

4

This is a really general question. What is dynamic programming (how's it different from recursion, memoization, etc)? I've read the wikipedia article on it but I still don't really understand it.

Help me 'get' dynamic programming. Thanks.

+12  A: 

Dynamic programming is when you use past knowledge to make solving a future problem easier.

A good example is solving the fibonacci sequence for n=1,000,002.

This will be a very long process, but what if I give you the results for n=1,000,000 and n=1,000,001? Suddenly the problem just got a lot more manageable.

Dynamic programming is used a lot in string problems, such as the string edit distance problem. You solve a subproblem (or several) and then use that information to solve the more difficult original problem.

With dynamic programming, you generally store your results in some sort of table. When you need the answer to a problem, you reference the table and see if you already know what it is. If not, you use the data in your table to give yourself a stepping stone towards the answer.
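As a minimal sketch of that idea (not from the original answer; the table size and the "0 means not computed" convention are arbitrary choices), a memoized Fibonacci in C could look like this:

    #include <stdio.h>

    #define MAX_N 100
    long long table[MAX_N + 1];     /* table[i] == 0 means "not computed yet" */

    /* Fibonacci with a lookup table: each value is computed only once. */
    long long fib(int n)
    {
        if (n <= 2) return 1;
        if (table[n] != 0) return table[n];    /* reuse past knowledge */
        table[n] = fib(n - 1) + fib(n - 2);    /* compute once, then store */
        return table[n];
    }

    int main(void)
    {
        printf("fib(90) = %lld\n", fib(90));
        return 0;
    }

Without the table this takes exponential time; with it, each value from 1 to n is computed once and then simply looked up.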

The Cormen Algorithms book has a great chapter about dynamic programming. AND it's free on Google Books! Check it out here.

samoz
Didn't you just describe memoization though?
byte
It's similar to memoization. However, you will also calculate and 'memoize' any intermediate data that you need. You will also ignore any data that won't contribute to an answer.
samoz
I would say memoization is a form of dynamic programming, when the memoized function/method is a recursive one.
Daniel Huckstep
Dynamic programming is an algorithm design technique, memoization is a programming optimization technique often used when implementing dynamic programming algorithms in code. They are not equivalent.
Nick Lewis
Good answer, would only add a mention about optimal sub-structure (e.g., every subset of any path along the shortest path from A to B is itself the shortest path between the 2 endpoints assuming a distance metric that observes the triangle inequality).
Arnshea
I wouldn't say "easier", but faster. A common misunderstanding is that DP solves problems that naive algorithms can't, and that isn't the case. It's not a matter of functionality but of performance.
omgzor
Using memoization, dynamic programming problems can be solved in a top-down manner, i.e. calling the function to calculate the final value, and that function in turn calls itself recursively to solve the subproblems. Without it, dynamic programming problems can only be solved in a bottom-up way.
Pranav
+6  A: 

It's an optimization of your algorithm that cuts running time.

While a Greedy Algorithm is usually called naive, because it may run multiple times over the same set of data, Dynamic Programming avoids this pitfall through a deeper understanding of the partial results that must be stored to help build the final solution.

A simple example is traversing a tree or a graph only through the nodes that would contribute with the solution, or putting into a table the solutions that you've found so far so you can avoid traversing the same nodes over and over.

Here's an example of a problem that's suited for dynamic programming, from UVA's online judge: Edit Steps Ladder.

I'm going to give a quick briefing of the important part of this problem's analysis, taken from the book Programming Challenges; I suggest you check it out.

Take a good look at that problem: if we define a cost function telling us how far apart two strings are, we have to consider the three natural types of changes:

Substitution - change a single character from pattern "s" to a different character in text "t", such as changing "shot" to "spot".

Insertion - insert a single character into pattern "s" to help it match text "t", such as changing "ago" to "agog".

Deletion - delete a single character from pattern "s" to help it match text "t", such as changing "hour" to "our".

When we set each of these operations to cost one step, we define the edit distance between two strings. So how do we compute it?

We can define a recursive algorithm using the observation that the last character in the string must be either matched, substituted, inserted or deleted. Chopping off the characters involved in the last edit operation leaves a pair of smaller strings. Let i and j be the last character of the relevant prefix of s and t, respectively. There are three pairs of shorter strings after the last operation, corresponding to the strings after a match/substitution, insertion or deletion. If we knew the cost of editing the three pairs of smaller strings, we could decide which option leads to the best solution and choose that option accordingly. We can learn this cost through that awesome thing called recursion:

    #define MATCH  0   /* enumerated type symbol for match */
    #define INSERT 1   /* enumerated type symbol for insert */
    #define DELETE 2   /* enumerated type symbol for delete */

    /* unit cost functions used below */
    int match(char c, char d)
    {
        return (c == d) ? 0 : 1;   /* a match is free, a substitution costs 1 */
    }

    int indel(char c)
    {
        return 1;                  /* an insertion or deletion costs 1 */
    }

    int string_compare(char *s, char *t, int i, int j)
    {
        int k;               /* counter */
        int opt[3];          /* cost of the three options */
        int lowest_cost;     /* lowest cost */

        if (i == 0) return (j * indel(' '));
        if (j == 0) return (i * indel(' '));

        opt[MATCH]  = string_compare(s, t, i-1, j-1) + match(s[i], t[j]);
        opt[INSERT] = string_compare(s, t, i, j-1) + indel(t[j]);
        opt[DELETE] = string_compare(s, t, i-1, j) + indel(s[i]);

        lowest_cost = opt[MATCH];
        for (k = INSERT; k <= DELETE; k++)
            if (opt[k] < lowest_cost) lowest_cost = opt[k];

        return lowest_cost;
    }

This algorithm is correct, but is also impossibly slow.

Running on our computer, it takes several seconds to compare two 11-character strings, and the computation disappears into never-never land on anything longer.

Why is the algorithm so slow? It takes exponential time because it recomputes values again and again and again. At every position in the string, the recursion branches three ways, meaning it grows at a rate of at least 3^n – indeed, even faster since most of the calls reduce only one of the two indices, not both of them.

So how can we make the algorithm practical? The important observation is that most of these recursive calls are computing things that have already been computed before. How do we know? Well, there can only be |s| · |t| possible unique recursive calls, since there are only that many distinct (i, j) pairs to serve as the parameters of recursive calls.

By storing the values for each of these (i, j) pairs in a table, we can avoid recomputing them and just look them up as needed.

The table is a two-dimensional matrix m where each of the |s|·|t| cells contains the cost of the optimal solution of this subproblem, as well as a parent pointer explaining how we got to this location:

    typedef struct {
        int cost;    /* cost of reaching this cell */
        int parent;  /* parent cell */
    } cell;

    cell m[MAXLEN+1][MAXLEN+1];  /* dynamic programming table */

The dynamic programming version has three differences from the recursive version.

First, it gets its intermediate values using table lookup instead of recursive calls.

Second, it updates the parent field of each cell, which will enable us to reconstruct the edit sequence later.

Third, it is instrumented using a more general goal_cell() function instead of just returning m[|s|][|t|].cost. This will enable us to apply this routine to a wider class of problems.
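The book's listing for that version isn't reproduced here, but a rough sketch of the table-filling loop could look like the following. It reuses the match() and indel() cost helpers and the cell table m from the listings above, needs <string.h> for strlen, and assumes (as in the recursive version) that s and t are stored with a leading blank so their characters occupy indices 1..|s| and 1..|t|:

    /* Sketch of a bottom-up fill of the dynamic programming table
       (not the book's exact listing). */
    void fill_table(char *s, char *t)
    {
        int i, j, k;    /* counters */
        int opt[3];     /* cost of the three options */

        m[0][0].cost = 0;                         /* empty prefix vs. empty prefix */
        m[0][0].parent = -1;
        for (i = 1; i < (int) strlen(s); i++) {   /* column 0: i deletions */
            m[i][0].cost = i;
            m[i][0].parent = DELETE;
        }
        for (j = 1; j < (int) strlen(t); j++) {   /* row 0: j insertions */
            m[0][j].cost = j;
            m[0][j].parent = INSERT;
        }

        for (i = 1; i < (int) strlen(s); i++)
            for (j = 1; j < (int) strlen(t); j++) {
                opt[MATCH]  = m[i-1][j-1].cost + match(s[i], t[j]);
                opt[INSERT] = m[i][j-1].cost + indel(t[j]);
                opt[DELETE] = m[i-1][j].cost + indel(s[i]);

                m[i][j].cost = opt[MATCH];        /* take the cheapest option */
                m[i][j].parent = MATCH;
                for (k = INSERT; k <= DELETE; k++)
                    if (opt[k] < m[i][j].cost) {
                        m[i][j].cost = opt[k];
                        m[i][j].parent = k;
                    }
            }
        /* the edit distance is m[strlen(s)-1][strlen(t)-1].cost */
    }

Filling each of the |s|·|t| cells exactly once is what brings the running time down from exponential to O(|s|·|t|).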

Here, a careful analysis of which partial results must be stored to build the optimal solution is what makes the approach a "dynamic" one.

Here's an alternate, full solution to the same problem. It's also a "dynamic" one, even though its execution is different. I suggest you check how efficient the solution is by submitting it to UVA's online judge. I find it amazing how such a heavy problem was tackled so efficiently.

omgzor
+1  A: 

The key bits of dynamic programming are "overlapping sub-problems" and "optimal substructure". These properties of a problem mean that an optimal solution is composed of the optimal solutions to its sub-problems. For instance, shortest path problems exhibit optimal substructure. The shortest path from A to C is the shortest path from A to some node B followed by the shortest path from that node B to C.

In greater detail, to solve a shortest-path problem you will:

  • find the distances from the starting node to every node touching it (say from A to B and C)
  • find the distances from those nodes to the nodes touching them (from B to D and E, and from C to E and F)
  • we now know the shortest path from A to E: it is the shortest sum of A-x and x-E for some node x that we have visited (either B or C)
  • repeat this process until we reach the final destination node

Because we are working bottom-up, we already have solutions to the sub-problems when it comes time to use them, by memoizing them.
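As a rough illustration of that bottom-up process (not part of the original answer), here is a Bellman-Ford-style sketch in C over a small made-up graph; each relaxation round extends the already-computed optimal sub-solutions by one more edge:

    #include <stdio.h>
    #include <limits.h>

    #define N 4                  /* nodes: A=0, B=1, C=2, E=3 (hypothetical graph) */
    #define INF (INT_MAX / 2)    /* "no edge"; halved so additions can't overflow */

    /* Bottom-up shortest paths from node 0: after k rounds, dist[v] holds the
       best known cost using at most k edges, i.e. the solutions to the
       smaller sub-problems. */
    int main(void)
    {
        int w[N][N] = {          /* made-up edge weights */
            {0,   2,   5,   INF},
            {INF, 0,   INF, 4  },
            {INF, INF, 0,   1  },
            {INF, INF, INF, 0  },
        };
        int dist[N] = {0, INF, INF, INF};

        for (int round = 1; round < N; round++)          /* N-1 relaxation rounds */
            for (int u = 0; u < N; u++)
                for (int v = 0; v < N; v++)
                    if (dist[u] + w[u][v] < dist[v])
                        dist[v] = dist[u] + w[u][v];     /* reuse a sub-solution */

        printf("shortest distance to node 3: %d\n", dist[3]);   /* prints 6 */
        return 0;
    }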

Remember, dynamic programming problems must have both overlapping sub-problems, and optimal substructure. Generating the Fibonacci sequence is not a dynamic programming problem; it utilizes memoization because it has overlapping sub-problems, but it does not have optimal substructure (because there is no optimization problem involved).

Nick Lewis
+4  A: 

Memoization is when you store previous results of a function call (a pure function always returns the same thing, given the same inputs). It doesn't make a difference for algorithmic complexity before the results are stored.

Recursion is when a function calls itself, usually with a smaller dataset. Since most recursive functions can be converted to similar iterative functions, this doesn't make a difference for algorithmic complexity either.

Dynamic programming is the process of solving easier-to-solve sub-problems and building up the answer from them. Most DP algorithms have running times between those of a greedy algorithm (if one exists) and an exponential algorithm (enumerate all possibilities and find the best one).

  • DP algorithms could be implemented with recursion, but they don't have to be.
  • DP algorithms can't be sped up by memoization, since each sub-problem is only ever solved (or the "solve" function called) once.
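To illustrate the first point (this example isn't from the original answer), a purely iterative, bottom-up Fibonacci needs neither recursion nor a lookup of previously stored calls; each sub-problem is solved exactly once, in order:

    #include <stdio.h>

    /* Bottom-up: solve the smallest sub-problems first and build upward,
       so no sub-problem is ever solved twice and no recursion is needed. */
    long long fib_bottom_up(int n)
    {
        long long prev = 0, curr = 1;    /* fib(0), fib(1) */
        for (int i = 2; i <= n; i++) {
            long long next = prev + curr;
            prev = curr;
            curr = next;
        }
        return (n == 0) ? 0 : curr;
    }

    int main(void)
    {
        printf("fib(90) = %lld\n", fib_bottom_up(90));
        return 0;
    }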
philomathohollic
Very clearly put. I wish algorithm instructors could explain this well.
Kelly French