What's the complexity of a recursive program to find the factorial of a number n? My hunch is that it's O(n).

A: 

Yes, the standard recursive implementation of the factorial function is O(n). It's often used as an example to illustrate recursion, along with the Fibonacci function.

That said, you can also do some crazy stuff with recursion and make it perform a lot better or a lot worse.

polygenelubricants
+5  A: 

If you take multiplication as O(1), then yes, O(N) is correct. However, note that multiplying two numbers of arbitrary length x is not O(1) on finite hardware: as x tends to infinity, the time needed for multiplication grows (e.g. with Karatsuba multiplication it's O(x ** 1.585)). (You can theoretically do better for sufficiently huge numbers with Schönhage-Strassen, but I confess I have no real-world experience with that one;-). And x, the "length" or "number of digits" of N (in whatever base -- it doesn't matter for big-O anyway;-), grows as O(log N), of course.

If you mean to limit your question to factorials of numbers short enough to be multiplied in O(1), then there's no way N can "tend to infinity" and therefore big-O notation is inappropriate.
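To make the point about operand growth concrete, here is a small Python sketch (my own illustration, not from the answer above): the number of digits of n! grows roughly as n log n by Stirling's approximation, so the operands of the later multiplications in computing n! are nowhere near constant-sized.

```python
import math

def digits_of_factorial(n):
    """Number of decimal digits of n!, computed exactly via Python's big ints."""
    return len(str(math.factorial(n)))

# The digit count of n! grows superlinearly in n, so treating each
# multiplication as O(1) stops being realistic as n tends to infinity.
for n in (10, 100, 1000):
    print(n, digits_of_factorial(n))  # → 7, 158, and 2568 digits respectively
```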

Alex Martelli
You can only multiply numbers that fit in your memory, so I don't understand how a multiplication can cost more than O(1). Can you give me a detailed explanation?
tur1ng
@tur1ng, you have big-O behavior for both requirement of time *and* extra space [[beyond the obvious space required for input and output which it would be absurd to count]] (though usually one means time unless explicitly mentioning space). Multiplication is O(1) in extra space and `O((log N)**1.585)` in time (with Karatsuba). The fact that the "physically reachable universe" (and therefore any actually conceivable machine) is finite is irrelevant to CS: analysis normally (implicitly) assumes a "Turing machine" which by definition has an infinitely long "tape" (infinite storage).
Alex Martelli
BTW @tur1ng, with that moniker you should definitely be more familiar with Turing machines and their infinitely long tape ("fit in your memory" indeed - pah!-) -- start at http://en.wikipedia.org/wiki/Turing_machine .
Alex Martelli
+1  A: 

Assuming you're talking about the most naive factorial algorithm ever:

factorial (n):
  if (n = 0) then return 1
  otherwise return n * factorial(n-1)

Yes, the algorithm is linear, running in O(n) time. This is because the function is called once for each value from n down to 0: each call does a constant amount of work (one comparison, one multiplication, one decrement) and then recurses, so there are n + 1 calls in total. This assumes, of course, that both decrementing and multiplication are constant-time operations.
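For concreteness, the pseudocode above translates directly into, say, Python:

```python
def factorial(n):
    """Naive recursive factorial: n + 1 calls, one multiplication per call."""
    if n == 0:
        return 1
    return n * factorial(n - 1)

print(factorial(5))  # → 120
```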

Of course, if you implement factorial some other way (for example, using repeated addition instead of multiplication), you can end up with a much worse time complexity. I wouldn't advise using such an algorithm, though.
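One way the addition-based variant could look (a hypothetical sketch of my own, not the answerer's code): replace each multiplication k * f with k repeated additions of f. The k-th step then costs O(k) additions, so the whole computation does about 1 + 2 + ... + n = O(n**2) additions, even before accounting for the growing size of the numbers being added.

```python
def mul_by_addition(a, b):
    """Multiply a * b using only repeated addition: O(b) additions."""
    total = 0
    for _ in range(b):
        total += a
    return total

def slow_factorial(n):
    """Factorial with each multiplication replaced by repeated addition.
    Step k performs O(k) additions, giving O(n**2) additions overall."""
    if n == 0:
        return 1
    return mul_by_addition(slow_factorial(n - 1), n)

print(slow_factorial(5))  # → 120, but far more slowly than with multiplication
```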

Welbog