I don't know of any general method that will solve problems of the kind you stated. Perhaps the memoization technique used for the Pibonacci numbers (see the second section below) can be adapted.
In any case, we can sometimes give really fast algorithms by exploiting the structure of the problem (see the sqrt(2) & sqrt(3) solution below).
Reducing such problems to knapsack might not be a good idea, as I expect there will be other approaches that are much faster.
So to answer your questions:
Problem involving sqrt(2) and sqrt(3)
I will answer your second question first.
f(x) = 1 for 0 <= x < sqrt(2) (I presume x >= 0 throughout)
f(x) = f(x-sqrt(2)) + sqrt(3)
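For concreteness, here is the recurrence evaluated directly in Python (my own sketch, assuming x >= 0); this is the slow baseline, taking O(x) recursive steps:

```python
import math

def f_naive(x):
    """Evaluate f by unwinding the recurrence directly; O(x) steps."""
    if x < math.sqrt(2):  # base case: f(x) = 1 for 0 <= x < sqrt(2)
        return 1.0
    return f_naive(x - math.sqrt(2)) + math.sqrt(3)
```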
This can be solved very fast (in O(log log n) time!) using only integer arithmetic (which is assumed O(1) per operation), except for one last step, which requires multiplying by sqrt(3) and adding 1.
Given n, we need to find the smallest m such that
n - m*sqrt(2) < sqrt(2)
i.e.
n - m*sqrt(2) < sqrt(2) => n < (m+1)*sqrt(2) => n/sqrt(2) < m+1
and
n - (m-1)*sqrt(2) > sqrt(2) => n > m*sqrt(2) => n/sqrt(2) > m.
Thus m is the integer part of n/sqrt(2),
and since each of the m subtractions adds a sqrt(3), we have that f(n) = m*sqrt(3) + 1.
Thus we only need to calculate [n/sqrt(2)], the integer part of n/sqrt(2).
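In floating point, this closed form is a one-liner (my own sketch; adequate for small n, but the point of the continued-fraction machinery that follows is to avoid floating point entirely):

```python
import math

def f_closed(n):
    # m = number of sqrt(2) subtractions before dropping below sqrt(2)
    m = math.floor(n / math.sqrt(2))  # floating point: fine only for small n
    return m * math.sqrt(3) + 1
```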
This can be calculated quickly using the continued fraction convergents of sqrt(2), which are rational approximations to sqrt(2) and are, in a precise sense, the 'best' approximations for a given denominator size.
The continued fraction a(i)/b(i) of sqrt(2) can be formed using the recurrence:
a0 = 1
b0 = 1
a(i+1) = a(i) +2*b(i)
b(i+1) = a(i) + b(i)
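This recurrence is easy to run with exact rationals; a minimal sketch (the function name is mine):

```python
from fractions import Fraction

def sqrt2_convergents(k):
    """Yield the first k convergents a(i)/b(i) of sqrt(2) from the recurrence."""
    a, b = 1, 1
    for _ in range(k):
        yield Fraction(a, b)
        a, b = a + 2 * b, a + b

# Produces 1/1, 3/2, 7/5, 17/12, 41/29, ... converging rapidly to sqrt(2).
```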
It can be shown (using Liouville's approximation theorem and theorems on continued fractions) that in order to compute [n/sqrt(2)] exactly, it is enough to consider some odd i for which b(i) > 10*n^2, and that [n/sqrt(2)] = [n*b(i)/a(i)] for that i.
Now a(i), b(i) satisfy the matrix equation

[1 2] [a(i)]   [a(i+1)]
[1 1] [b(i)] = [b(i+1)]

Thus we need to compute powers of the matrix

[1 2]
[1 1]

until the entries get bigger than 10*n^2.
It can be shown that the required power of the matrix is O(log n), and so it can be calculated with O(log log n) matrix multiplications by repeated squaring, using only integer arithmetic (assuming each operation is O(1)).
So the value of your function f at n can be calculated in O(log log n) time using only integer arithmetic (except for the last step, where you need to multiply an integer by sqrt(3) and add 1).
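Putting the pieces together, a sketch in Python (all names are mine; Python's big integers make the O(1)-arithmetic assumption somewhat dishonest for huge n, but the structure is as described):

```python
import math

def mat_mul(X, Y):
    """Multiply two 2x2 integer matrices."""
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

def mat_pow(M, k):
    """M^k by repeated squaring: O(log k) multiplications."""
    R = [[1, 0], [0, 1]]
    while k:
        if k & 1:
            R = mat_mul(R, M)
        M = mat_mul(M, M)
        k >>= 1
    return R

def floor_n_over_sqrt2(n):
    """[n / sqrt(2)] for a positive integer n, using only integer arithmetic."""
    M = [[1, 2], [1, 1]]
    k = 1
    while True:
        P = mat_pow(M, k)
        a = P[0][0] + P[0][1]   # a(k), applying M^k to (a0, b0) = (1, 1)
        b = P[1][0] + P[1][1]   # b(k)
        if b > 10 * n * n:
            break
        k *= 2                  # O(log log n) doublings suffice
    return (n * b) // a         # n/sqrt(2) ~ n * b(k)/a(k)

def f_fast(n):
    """f(n) = m*sqrt(3) + 1 with m = [n/sqrt(2)]; only the last step is inexact."""
    return floor_n_over_sqrt2(n) * math.sqrt(3) + 1
```

(This sketch does not enforce the odd-i condition; the b > 10*n^2 margin is comfortably larger than needed either way.)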
Pibonacci Number
From your comment, this is the problem
g(x) = 1 if 0 <= x < 4
g(x) = g(x-1) + g(x-pi) for x >= 4
This can be solved using memoization:
Let h(m,n) = g(m - n*pi)
Then we have that
h(m,n) = h(m-1, n) + h(m, n+1)
And so we have that
g(m) = g(m-1) + h(m, 1)
You can now use memoization by maintaining two tables, one for g(m) and one for h(m,n). Note that even though you need to calculate h(m, n+1), increasing n only decreases m - n*pi, which will fall between 0 and 4 within a reasonable number of steps (O(m), I suppose), so the recursion does terminate.
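A minimal sketch of this memoization in Python, collapsing the two tables into one memoized function (since g(m) = h(m, 0)); it assumes we only ever start from x >= 0:

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def h(m, n):
    """h(m, n) = g(m - n*pi); lru_cache stands in for the h table."""
    x = m - n * math.pi
    if 0 <= x < 4:
        return 1          # base case of g
    # x >= 4 here, so both children keep their argument >= 0
    return h(m - 1, n) + h(m, n + 1)

def g(m):
    """g at integer arguments m >= 0: g(m) = h(m, 0)."""
    return h(m, 0)
```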
This is not as nice (or fast) as the sqrt(2) and sqrt(3) solution, but I believe it does give a way to do the calculation.
0-1 Knapsack with irrational coefficients
Perhaps taking better and better rational approximations to the irrationals and solving the 0-1 knapsack problem for each approximation will ultimately converge to the right solution.
My guess is that the fixed point of this iteration will give you the solution.
Of course, as the approximations get better, the W in the O(nW) bound of the dynamic programming algorithm might quickly become exponential, and you might be better off just considering all possibilities.
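To make the idea concrete, here is a hypothetical sketch (all names mine): approximate each weight by a fraction p/D for a fixed denominator D, scale to integers, and run the usual O(nW) DP; one would then increase D and watch for the chosen set to stabilise.

```python
def knapsack_01(weights, values, capacity):
    """Standard 0-1 knapsack DP over integer weights; O(n * capacity)."""
    dp = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        for c in range(capacity, w - 1, -1):   # iterate downward: 0-1, not unbounded
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

def knapsack_rational(weights, values, capacity, denom):
    """Approximate irrational weights by p/denom and scale everything by denom.

    As denom grows the answer should stabilise, but the DP table
    (and hence the running time) grows linearly in denom.
    """
    int_w = [round(w * denom) for w in weights]
    return knapsack_01(int_w, values, int(capacity * denom))
```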