Seems to me that you're close...
You're taking the `min()` of two terms, each of which is `min[i - p] + 1`, where `p` is either 1 or some other square < `i`. To fix this, just take the `min()` of `min[i - p] + 1` over all `p` (where `p` is a square < `i`).
That would be a correct way. There may be a faster way.
Also, it might aid readability if you give `min[]` and `min()` different names. :-)
P.S. The above approach requires that you memoize `min[]`, either explicitly or as part of your DP framework. Otherwise, because of the recursion, the complexity of the algorithm would be something like O(sqrt(n)!) :-p though the average case might be a lot better.
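For concreteness, here's a minimal memoized sketch of that recurrence in Java. The names (`minSquares`, `memo`) are my own, and I'm assuming a base value of 0 for `n == 0` while letting `p = j*j` run over every square up to and including `n`, so a perfect square comes out as 1:

    import java.util.HashMap;
    import java.util.Map;

    class MinSquares {
        // Cache of already-computed answers -- this is the memoization the P.S. is about.
        private static final Map<Integer, Integer> memo = new HashMap<>();

        // Fewest perfect squares summing to n (assumes n >= 0).
        static int minSquares(int n) {
            if (n == 0) return 0;                      // base case
            Integer cached = memo.get(n);
            if (cached != null) return cached;

            int best = Integer.MAX_VALUE;
            for (int j = 1; j * j <= n; ++j) {         // p = j*j ranges over all squares <= n
                best = Math.min(best, minSquares(n - j * j) + 1);
            }
            memo.put(n, best);
            return best;
        }

        public static void main(String[] args) {
            System.out.println(minSquares(12));        // 3 (4 + 4 + 4)
        }
    }

Without the `memo` map this is exactly the exponential blow-up described above; with it, each value of `n` is computed once.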
P.P.S. See @Nikita's answer for a nice implementation, to which I would add the following optimizations (sketched in code after the list). (I'm not nitpicking his implementation -- he presented it as a simple one.)
- Check whether `n` is a perfect square before entering the outer loop: if so, `min[n] = 1` and we're done.
- Check whether `i` is a perfect square before entering the inner loop: if so, `min[i] = 1`, and skip the inner loop.
- Break out of the inner loop if `min[i]` has been set to 2, because it won't get better (if it could be done with one square, we would never have entered the inner loop, thanks to the previous optimization).
- I wonder if the termination condition on the inner loop can be changed to reduce the number of iterations, e.g. `j*j*2 <= i` or even `j*j*4 <= i`. I think so, but I haven't got my head completely around it.
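Putting the first three of those optimizations on top of a plain bottom-up loop, a sketch might look like the following. The `isSquare` helper and the variable names are mine, not from Nikita's answer, and it assumes `n >= 1`:

    class MinSquaresDP {
        // True if x is a perfect square.
        static boolean isSquare(int x) {
            int r = (int) Math.sqrt(x);
            return r * r == x;
        }

        // Fewest perfect squares summing to n (assumes n >= 1).
        static int minSquares(int n) {
            if (isSquare(n)) return 1;          // optimization 1: answer is 1, skip the DP entirely

            int[] min = new int[n + 1];         // min[0] stays 0
            for (int i = 1; i <= n; ++i) {
                if (isSquare(i)) {              // optimization 2: skip the inner loop
                    min[i] = 1;
                    continue;
                }
                min[i] = Integer.MAX_VALUE;
                for (int j = 1; j * j <= i; ++j) {
                    min[i] = Math.min(min[i], min[i - j * j] + 1);
                    if (min[i] == 2) break;     // optimization 3: 2 is the best a non-square can do
                }
            }
            return min[n];
        }
    }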
For large `i`, it would be faster to compute a limit for `j` before the inner loop, and compare `j` directly to it for the loop termination condition, rather than squaring `j` on every inner-loop iteration. E.g.

    double sqrti = Math.sqrt(i);   // Math.sqrt returns a double, not a float
    for (int j = 1; j <= sqrti; ++j) {
On the other hand, you need j^2 for the recursion step anyway, so as long as you store it, you might as well use it.
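For example, the inner loop could compute the square once per iteration (`jsq` is my own name) and reuse it for both the termination test and the DP step, instead of precomputing a square-root limit:

    // Inner loop only, replacing the j*j <= i test above:
    for (int j = 1; ; ++j) {
        int jsq = j * j;                             // computed once per iteration...
        if (jsq > i) break;                          // ...used for the termination test
        min[i] = Math.min(min[i], min[i - jsq] + 1); // ...and reused for the DP step
    }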