I was looking at the canonical bad fibonacci algorithm the other day:

public static int fib(int n)
{
    // Base case
    if (n < 2)
        return 1;
    else
        return fib(n - 1) + fib(n - 2);
}

I made an interesting observation: when you call fib(n), then for k between 1 and n, fib(k) is called exactly fib(n-k+1) times (or fib(n-k) times, depending on your definition of fib(0)). Also, fib(0) is called fib(n-1) times. This allowed me to work out that fib(100) makes exactly 708449696358523830149 calls to the fib function.
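For small n, you can check this empirically by instrumenting the function with a call counter. Here is a quick sketch (the calls array and class name are just for illustration):

import java.util.Arrays;

public class FibCallCounts
{
    // calls[k] = number of times fib(k) is invoked
    static long[] calls;

    public static int fib(int n)
    {
        calls[n]++;
        if (n < 2)
            return 1;
        else
            return fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args)
    {
        int n = 20;
        calls = new long[n + 1];
        fib(n);
        for (int k = 0; k <= n; k++)
            System.out.println("fib(" + k + ") was called " + calls[k] + " times");
    }
}

The printed counts follow the fib(n-k) / fib(n-k+1) pattern described above (which of the two depends on your indexing convention).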

Are there other interesting observations about this function that you know of?

Additional note: I know all the cool stuff about memoization etc. I am NOT asking how to optimize it.

+2  A: 

This is a wonderful illustration of why memoization is such a useful optimization for functions like Fibonacci.

As you can see, it reduces the number of function calls from 2*fib(n)-1, which grows exponentially, down to roughly n - which is orders of magnitude better.
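Here is a minimal memoized sketch of what I mean, assuming a simple HashMap cache (the cache field and class name are just placeholders):

import java.util.HashMap;
import java.util.Map;

public class MemoFib
{
    private static final Map<Integer, Long> cache = new HashMap<>();

    public static long fib(int n)
    {
        if (n < 2)
            return 1;
        Long cached = cache.get(n);
        if (cached != null)
            return cached;              // reuse the stored partial solution
        long result = fib(n - 1) + fib(n - 2);
        cache.put(n, result);           // remember it for later calls
        return result;
    }

    public static void main(String[] args)
    {
        // fib(90) still fits in a long; fib(100) would overflow it
        System.out.println(fib(90));
    }
}

Each fib(k) body recurses at most once before its result is cached, so the number of non-trivial calls grows only linearly in n.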

Mathematics-wise, the Fibonacci numbers have many other interesting properties.

Yuval A
+2  A: 

Adding to what Yuval has said...

As well as memoization, it's worth mentioning the closely related "dynamic programming". Very closely related, in fact - personally, I'm not entirely clear on whether memoization is a special case of dynamic programming or not. In any case, the standard iterative solution could be called a dynamic programming solution.

In the iterative solution, by the time you calculate fib(n), you have already calculated the partial solutions fib(n-2) and fib(n-1), so you simply reuse those precalculated results. The fact that it is done in a loop isn't what matters for dynamic programming. Memoization can be seen as a special case (I'm just not sure whether it always counts as one under strict definitions). The key point is that each partial solution is remembered, so it only needs to be calculated once.
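Here is a rough sketch of that iterative version, using the same fib(0) = fib(1) = 1 convention as the question's code (the method name is just for illustration):

public static long fibIterative(int n)
{
    long prev = 1, curr = 1;        // fib(0) and fib(1)
    for (int i = 2; i <= n; i++)
    {
        long next = prev + curr;    // reuse the two precalculated partial solutions
        prev = curr;
        curr = next;
    }
    return curr;
}

Each partial solution is computed exactly once, and only the last two are kept around, so this runs in O(n) time and O(1) space.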

Steve314