In the general case, where coin values can be arbitrary, the problem you are presenting is a variant of the Knapsack Problem, and its decision version is known to be NP-complete (Pearson, D. 2004). No polynomial-time algorithm for it is known, by dynamic programming or otherwise.
Take the pathological example of x[2] = 51, x[1] = 50, x[0] = 1, X = 100. An algorithm must then 'consider' both making change with coin x[2] and making change beginning with x[1]. The first step used with national coinages, otherwise known as the Greedy Algorithm -- to wit, "take the largest coin no greater than the working total" -- picks the 51 and is then forced into forty-nine 1s, when two 50s would do. Naive exact algorithms instead face a combinatorial explosion of such alternatives; that explosion is what the NP-completeness result formalizes.
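To see the failure concretely, here is a sketch of that greedy rule in Python (names are illustrative; this is the broken approach, not a fix):

```python
def greedy_change(amount, coins):
    """Repeatedly take the largest coin that still fits.
    Fast, but not optimal for arbitrary coin systems."""
    used = []
    for c in sorted(coins, reverse=True):
        while c <= amount:
            used.append(c)
            amount -= c
    return used if amount == 0 else None

print(len(greedy_change(100, [51, 50, 1])))  # 50 coins: one 51 and forty-nine 1s
# The optimal answer is two coins: 50 + 50.
```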
For certain special arrangements of coin values, such as practically all those in actual use, and including the fictitious system X[i+1] == 2 * X[i], there are very fast algorithms to determine the best output, essentially O(1) (a few word operations) in the fictitious case given. These algorithms exploit properties of the coin values.
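In the doubling system, for instance, the optimal coin count can be read directly off the binary representation of X. A minimal sketch, assuming the denominations are 1, 2, 4, ..., 2**k: spend the largest coin on the high-order part, then use one coin per set bit of what remains (two coins of the same value can always be merged into one of the next, so no bit ever needs two coins).

```python
def doubling_change(amount, k):
    """Fewest coins from {1, 2, 4, ..., 2**k}: largest coins first,
    then one coin per set bit of the remainder."""
    q, r = divmod(amount, 2 ** k)
    return q + bin(r).count("1")

print(doubling_change(100, 6))  # 3 coins: 64 + 32 + 4
```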
I am not aware of a polynomial-time dynamic programming solution: one which takes advantage of optimal sub-solutions as the motif requires. In general, a problem can be solved by dynamic programming only if it can be decomposed into sub-problems which, when optimally solved, can be re-composed into a solution that is provably optimal. That is, if the programmer cannot mathematically demonstrate ("prove") that re-composing optimal sub-solutions of the problem yields an optimal solution, then dynamic programming cannot be applied.
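For completeness: the textbook tabulation over the amounts 0..X does satisfy this requirement, since the optimal way to make amount v re-composes the optimal way to make v - c for some coin c. Its running time is O(n·X), which is pseudo-polynomial (polynomial in the magnitude of X but exponential in its bit length), so it does not contradict the hardness claim above. A minimal sketch:

```python
def min_coins(amount, coins):
    """Fewest coins summing to `amount`, or None if impossible.
    Pseudo-polynomial: O(len(coins) * amount) time, O(amount) space."""
    INF = float("inf")
    best = [0] + [INF] * amount              # best[v] = fewest coins making v
    for v in range(1, amount + 1):
        for c in coins:
            if c <= v and best[v - c] + 1 < best[v]:
                best[v] = best[v - c] + 1    # re-compose an optimal sub-solution
    return best[amount] if best[amount] < INF else None

print(min_coins(100, [51, 50, 1]))  # 2 -- the pathological case above, solved
```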
An example commonly given of dynamic programming is the multiplication of several matrices. Depending on the dimensions of the matrices, evaluating A·B·C as either of the two equivalent forms ((A·B)·C) or (A·(B·C)) requires different numbers of multiplications and additions; that is, one order is faster than the other. Dynamic programming is a motif which tabulates the computational costs of the alternatives, then performs the actual calculations according to a schedule (or program) computed dynamically at run-time.
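For instance (dimensions chosen purely for illustration), if A is 10×30, B is 30×5, and C is 5×60, then ((A·B)·C) costs 10·30·5 + 10·5·60 = 4,500 scalar multiplications, while (A·(B·C)) costs 30·5·60 + 10·30·60 = 27,000: the same product, six times the work.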
A key feature is that computations are performed according to the computed schedule, not by enumerating all possible combinations -- whether recursively or iteratively. In the matrix example, each step keeps only the cheapest way of combining sub-chains that have themselves already been solved optimally, so complete sub-optimal schedules are never expanded. In other words, the schedule is not found by searching all possible schedules for the optimum; it is built up incrementally from optimal sub-schedules.
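A minimal sketch of that tabulation, the classic matrix-chain-order recurrence (names are illustrative):

```python
def matrix_chain_cost(dims):
    """Minimum scalar multiplications to compute A1*...*An,
    where matrix Ai has shape dims[i-1] x dims[i]."""
    n = len(dims) - 1
    # cost[i][j] = cheapest way to multiply the sub-chain Ai..Aj
    cost = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):                # sub-chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            cost[i][j] = min(                     # best split point k
                cost[i][k] + cost[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                for k in range(i, j)
            )
    return cost[1][n]

print(matrix_chain_cost([10, 30, 5, 60]))  # 4500, matching the example above
```

Note that each cell compares only the costs of combining sub-chains whose own costs are already optimal; no complete parenthesization is ever enumerated.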
The nomenclature 'dynamic programming' may be compared with 'linear programming': in both, 'program' is used in the older sense of 'to schedule.'
To learn more about dynamic programming, consult the greatest book on algorithms yet known to me, "Introduction to Algorithms" by Cormen, Leiserson, Rivest, and Stein. ("Rivest" is the 'R' of "RSA".) Dynamic programming is but one of its many chapters.