I'm reviewing some old notes from my algorithms course, and the dynamic programming problems seem a bit tricky to me. I have a problem where we have an unlimited supply of coins with denominations x1, x2, ..., xn, and we want to make change for some value X. We are trying to design a dynamic program that decides whether change for X can be made (not minimizing the number of coins, or returning which coins to use; just true or false).

I've done some thinking about this problem, and I can see a recursive method of doing this where it's something like...

MakeChange(X, x[1..n])    // x holds the coin denominations
    for (int i = 1; i <= n; i++)
    {
        if ( (X - x[i] == 0) || MakeChange(X - x[i], x) )
            return true;
    }
    return false;

Converting this to a dynamic program is not coming so easily to me. How might I approach this?

A: 

If you write it in a recursive way, that is fine; just use memoized search. You have to store each amount you have already calculated, so it will not be calculated again:

int memory[X + 1]; // initialize all entries to -1, meaning "hasn't been calculated"
MakeChange(X, x[1..n]) {   // x holds the coin denominations
    if (X == 0) return true;
    if (X < 0) return false;
    if (memory[X] != -1) return memory[X];  // already calculated
    for (int i = 1; i <= n; i++)
    {
        if (MakeChange(X - x[i], x)) {
            memory[X] = true;
            return true;
        }
    }
    memory[X] = false;
    return false;
}
sza
+1  A: 

Just add a memoization step to the recursive solution, and the dynamic algorithm falls right out of it. The following example is in Python:

cache = {}
def makeChange(amount, coins):
    if (amount,coins) in cache:
        return cache[amount, coins]
    if amount == 0:
        ret = True
    elif not coins:
        ret = False
    elif amount < 0:
        ret = False 
    else:
        ret = makeChange(amount-coins[0], coins) or makeChange(amount, coins[1:])
    cache[amount, coins] = ret
    return ret

Of course, you could use a decorator to auto-memoize, leading to more natural code:

def memoize(f):
    cache = {}
    def ret(*args):
        if args not in cache:
            cache[args] = f(*args)
        return cache[args]
    return ret
@memoize
def makeChange(amount, coins):
    if amount == 0:
        return True
    elif not coins:
        return False
    elif amount < 0:
        return False
    return makeChange(amount-coins[0], coins) or makeChange(amount, coins[1:])

Note: even the non-dynamic-programming version you posted had all kinds of edge-case bugs, which is why the makeChange above is slightly longer than yours.

moshez
It still has a bug: If you pass coins as a list instead of a tuple, you'll get a "TypeError: unhashable type".
dan04
That's not a bug -- it's a feature (if you pass "amount" as a string, you'll also get an exception). Getting type errors when you pass in incorrect types is a feature, not a bug.
moshez
+3  A: 

Your code is a good start. The usual way to convert a recursive solution to a dynamic-programming one is to do it "bottom-up" instead of "top-down". That is, if your recursive solution calculates something for a particular X using values for smaller x, then instead calculate the same thing starting at smaller x, and put it in a table.

In your case, change your MakeChange recursive function into a canMakeChange table.

canMakeChange[0] = True
for X = 1 to (your max value):
   canMakeChange[X] = False
   for i=1 to n:
     if X>=x[i] and canMakeChange[X-x[i]]==True: 
       canMakeChange[X]=True
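For concreteness, here is the table above as a short runnable sketch (in Python, assuming integer coin values; the early break just skips redundant work once a value is known reachable):

```python
def can_make_change(X, coins):
    # can[v] is True iff value v can be made exactly from the coins.
    can = [False] * (X + 1)
    can[0] = True
    for v in range(1, X + 1):
        for c in coins:
            if v >= c and can[v - c]:
                can[v] = True
                break  # one witness coin is enough
    return can[X]
```

This runs in O(X·n) time and O(X) space, the usual complexity quoted for the coin-change decision table.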
ShreevatsaR
The bottom-up approach is definitely a hallmark of dynamic programming, and exactly what I was trying to flip over to. Thanks!
Tony
The algorithm provided only returns true if change can be made using exactly one coin... as can be seen by noticing that only elements [0] and [X] of canMakeChange can ever be assigned the value True.
Heath Hunnicutt
@Heath not true. The algorithm above initially marks only 0 as true. Then whenever X - x[i] lands on a value already marked true, X is marked true as well. These build up from the bottom. That's what dynamic programming is all about.
Tony
Ok, I see what you mean. But this is not dynamic programming, either. The related problem "what coins do you return as change" cannot be solved this way. I had thought you wanted to solve that problem, too. It is a simpler problem for any coinage which contains x[n]=1: just return true.
Heath Hunnicutt
What makes this not dynamic programming? When you do the classic matrix-chain multiplication problem, many times you start with just returning what the minimum number of multiplications is. That is still dynamic programming in the truest form, no?
Tony
Specific quality which, imo, makes this approach not dynamic programming: all intermediate results are calculated, rather than only a subset formed by pursuing the 'path' of optimal sub-results; because there is in fact no concept of an 'optimal' sub-result in this implementation. For example, if change can be made with either four quarters, ten dimes, twenty nickels, or a hundred pennies, all those possibilities will have been calculated before this algorithm returns.
Heath Hunnicutt
+1  A: 

In the general case, where coin values can be arbitrary, the problem you are presenting is a form of the Knapsack Problem, which is known to be NP-complete (Pearson, D. 2004), and is therefore not solvable in polynomial time by techniques such as dynamic programming.

Take the pathological example of x[2] = 51, x[1] = 50, x[0] = 1, X = 100. Then the algorithm must 'consider' the possibility of making change with coin x[2] as well as, alternatively, making change beginning with x[1]. The first step used with national coinages, otherwise known as the Greedy Algorithm (to wit, "use the largest coin less than the working total"), will not work with pathological coinages. Instead, such algorithms experience a combinatorial explosion, which is what qualifies the problem as NP-complete.
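To make the contrast concrete, here is a small sketch (my own illustration, not from the original answer) comparing a greedy chooser with an exhaustive bottom-up table on this coinage, with the 1-cent piece dropped so that failure is actually possible:

```python
def greedy_can_change(X, coins):
    # Repeatedly take the largest coin that fits; no backtracking.
    for c in sorted(coins, reverse=True):
        while X >= c:
            X -= c
    return X == 0

def dp_can_change(X, coins):
    # Bottom-up table: can[v] is True iff value v is exactly reachable.
    can = [False] * (X + 1)
    can[0] = True
    for v in range(1, X + 1):
        can[v] = any(v >= c and can[v - c] for c in coins)
    return can[X]

# Greedy grabs the 51 first and strands itself at 49;
# the table finds 50 + 50.
print(greedy_can_change(100, [51, 50]))  # False
print(dp_can_change(100, [51, 50]))      # True
```

With the 1-cent coin restored, greedy "succeeds" (51 plus 49 pennies) but with a far-from-minimal coin count, which is the sense in which pathological coinages defeat it.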

For certain special coin value arrangements, such as practically all those in actual use, including the fictitious system x[i+1] == 2 * x[i], there are very fast algorithms, even O(1) in the fictitious case given, to determine the best output. These algorithms exploit properties of the coin values.

I am not aware of a dynamic programming solution: one which takes advantage of optimal sub-solutions as required by the programming motif. In general a problem can only be solved by dynamic programming if it can be decomposed into sub-problems which, when optimally solved, can be re-composed into a solution which is provably optimal. That is, if the programmer cannot mathematically demonstrate ("prove") that re-composing optimal sub-solutions of the problem results in an optimal solution, then dynamic programming cannot be applied.

An example commonly given of dynamic programming is an application to multiplying several matrices. Depending on the sizes of the matrices, the choice to evaluate A·B·C as either of the two equivalent forms ((A·B)·C) or (A·(B·C)) leads to the computation of different quantities of multiplications and additions. That is, one method is more optimal (faster) than the other. Dynamic programming is a motif which tabulates the computational costs of different methods, and performs the actual calculations according to a schedule (or program) computed dynamically at run-time.

A key feature is that computations are performed according to the computed schedule and not by an enumeration of all possible combinations -- whether the enumeration is performed recursively or iteratively. In the example of multiplying matrices, at each step, only the least-cost multiplication is chosen. As a result, the possible costs of intermediate-cost sub-optimal schedules are never calculated. In other words, the schedule is not calculated by searching all possible schedules for the optimal, but rather by incrementally building an optimal schedule from nothing.
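As a concrete sketch of that tabulation (a standard textbook formulation, not part of the original answer), here is the minimum number of scalar multiplications for a chain whose i-th matrix has dimensions dims[i] × dims[i+1]:

```python
def matrix_chain_cost(dims):
    # dims has n+1 entries for n matrices; matrix i is dims[i] x dims[i+1].
    n = len(dims) - 1
    # cost[i][j]: fewest scalar multiplications to compute matrices i..j
    cost = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):          # chain length
        for i in range(n - length + 1):
            j = i + length - 1
            # Try every split point k; keep only the cheapest.
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j]
                + dims[i] * dims[k + 1] * dims[j + 1]
                for k in range(i, j)
            )
    return cost[0][n - 1]

# A (10x30) · B (30x5) · C (5x60):
# (A·B)·C costs 1500 + 3000 = 4500; A·(B·C) costs 9000 + 18000 = 27000.
print(matrix_chain_cost([10, 30, 5, 60]))  # 4500
```

Each table entry is built only from optimal smaller entries, which is the "optimal substructure" being discussed here.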

The nomenclature 'dynamic programming' may be compared with 'linear programming' in which 'program' is also used in the sense meaning 'to schedule.'

To learn more about dynamic programming, consult the greatest book on algorithms yet known to me, "Introduction to Algorithms" by Cormen, Leiserson, Rivest, and Stein. "Rivest" is the 'R' of "RSA" and dynamic programming is but one chapter of scores.


Heath Hunnicutt
A lot of what you said is very useful and true. However, the act of deciding whether change can be made definitely holds optimal substructure. If I can break my denomination X down into factors, then the solutions of those factors put together will yield my solution. Yes, the knapsack problem is NP-complete, but this is a little more specific, no?
Tony
Real-life change-making is only more specific than the knapsack problem because national currencies (U.S. coins, for example) have special properties. Those properties might let you use dynamic programming, but all the examples seen here (including your original) are really the greedy algorithm where recognizable. Consider my example with 51, 50, and 1 "cent" pieces... It is a pathological coinage which can be worsened with the addition of a 49-cent piece, a 48-cent piece, and so on. These nonsense coinages, as a set, give rise to the general problem.
Heath Hunnicutt
Real-life change-making put aside, there is optimal substructure in the sense that these are integers which may have factors. If those factors break down into factors that you have in your set of coins, then you will have a solution; otherwise you do not. If I have a solution for finding a denomination of coins that gives me 13, and I have a solution that gives me 7, then I have a solution that gives me 20.
Tony
Factors would be for multiplication, right? This is subtraction, when we make change, I think. If I owe you a dollar, and I pay you $0.25, do I now owe you $0.75 (subtraction), or do I now owe you the dimensionless number "4" (division)? I am thinking we want subtraction, not factoring.
Heath Hunnicutt
You didn't dig in to my example, so how does your program know how to make change for 100 if these are valid choices: 51 + 1 + 1 + ... (49 times); 50 + 50? And yet if the input is 51, we do want to choose the coin with value 51. We must examine all the possibilities, or at least many of them.
Heath Hunnicutt
Sorry, I was careless in my wording. Both apply. .25 is a factor of 1 dollar. You give me four of that denomination and you have the dollar. Either way, there is most definitely optimal substructure, you can agree yes?
Tony
By the way, this is an interesting discussion. Thanks in advance for having it with me
Tony
Making change with US currency is a special case where there is an optimal substructure, I agree. But it's not the same thing as what is meant in dynamic programming. For example, with US currency, as opposed to choosing which matrices to multiply first, you don't have to consider whether it would be optimal at each step to use any coins other than the largest denomination that is less than the amount owed. This is the greedy algorithm, which is not normally viewed as a special case of dynamic programming. In the greedy algorithm, you don't have to consider alternatives at each step.
Heath Hunnicutt
Dynamic programming in all senses is using the properties of overlapping subproblems and optimal substructure to your advantage to solve a problem with less work. In my original code in the original post, that recursion would solve the same problem SEVERAL times over to achieve a solution. Meanwhile, in a bottom-up approach, we solve it once
Tony
For example, you don't need to calculate that your best choice for changing a dollar is the quarter. (Unless you include the half-dollar, or better yet the gold dollar coin! -- Let's say US == 25, 10, 5, 1 cent coins only.) In dynamic programming, you would need to calculate some metric for each possible choice, and then choose the optimal. I'm not sure what that metric would be in this case. In greedy, you simply always choose the biggest coin. In my fictional case, where the only coins are 51, 50, and 1, the greedy choice does not work for multiples of 50. The US coins were designed so that the greedy choice works.
Heath Hunnicutt
If "or" short-circuits, your original implementation was the greedy algorithm, I thought.
Heath Hunnicutt
Meanwhile, it adheres to optimal substructure by using a prefix solution + a coin denomination to return whether the new value is possible or not. I am using each X - each possible denomination to check for a solution, rather than computing from scratch. If I have a denomination of 0.037 and am looking for a solution to 20 dollars, I will see if my solution for 20 - 0.037 yielded true
Tony
As long as your original code was initialized with x[1]=25, x[2]=10, x[3]=5, x[4]=1, that is the Greedy Algorithm, which is a special case of exhaustive search. But in the case of well-designed money, the Greedy Algorithm is actually an optimal search which always completes without failed recursion. Greedy is usually given in coursework immediately prior to dynamic programming.
Heath Hunnicutt
The "or" in my original implementation yields your base cases. You start with 0 being true. A denomination minus itself will yield 0, so these will also yield true. Besides that case, the first clause will never hold.
Tony
Give me an example, for X > 25, where your program will "backtrack" after the first recursion, increment i, and try again with the dime. There is no example, for US coins, because US coins are designed for the algorithm you provide.
Heath Hunnicutt
Look to ShreevatsaR's solution. The solution for, say X = 96 will simply check the solution for 96 - (every coin we have here), and if these yield true, then the solution is true, since we can use the solution for whatever the value of (96 - said denomination) and just tack on our coin.
Tony
Yes, ShreevatsaR's solution succeeds for all possible types of coins... I wouldn't disagree with that. Is it 'dynamic programming'? - another question. For US coins, your code is much faster than his. For arbitrary coins, there is probably a more optimal solution than given here. That may be why people were publishing papers on the subject as recently as 2004.
Heath Hunnicutt
Intriguing thoughts. I'd call his solution dynamic programming, personally. By all accords of my personal knowledge, it is. However, I've still got tons to learn. Thanks for this discussion. It's really got me thinking about DP
Tony
A: 

This paper is very relevant: http://ecommons.library.cornell.edu/handle/1813/6219

Basically, as others have said, making optimal change totaling an arbitrary X from an arbitrary denomination set is NP-hard, meaning dynamic programming won't yield a timely algorithm. This paper proposes a polynomial-time algorithm (that is, polynomial in the size of the input, which is an improvement upon previous algorithms) for determining whether the greedy algorithm always produces optimal results for a given set of denominations.

baddox