I recently started learning Python and was rather surprised to find a default recursion limit of 1000. If you raise it high enough, to around 30,000, it crashes with a segmentation fault just like C does, although C seems to go quite a lot deeper before that happens.
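For reference, the limit is easy to inspect and change from inside Python; this is roughly what I was doing (the exact numbers are just what I happened to try, nothing special about them):

import sys

def f(i, l):
    if i < l:
        return f(i + 1, l)
    return i

print(sys.getrecursionlimit())   # 1000 by default on my build

# Under the limit this works fine; go over it and Python raises a
# RecursionError (in Python 3) rather than touching the C stack.
print(f(0, 900))

# Raise the soft limit and Python recurses deeper.  Push it far
# enough (~30000 for me) and it's the real C stack that gives out.
sys.setrecursionlimit(5000)
print(f(0, 4000))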
(The Python folks are quick to point out that you can always convert recursive functions to iterative ones and that they're always faster. That's 100% true. It's not really what my question is about though.)
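(For completeness, this is the sort of rewrite they mean: the Python f from the snippets below, flattened into a loop. f_iter is just my own name for the sketch.)

def f_iter(i, l):
    # Same result as the recursive f, but one loop iteration per
    # step instead of one stack frame per step.
    while i < l:
        i += 1
    return i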
I tried the same experiment in Perl, and somewhere around 10 million recursions it had consumed all 4 GB of my RAM, so I hit ^C to stop trying. Clearly Perl doesn't use the C stack, but it does use a ridiculous amount of memory when it recurses -- not terribly shocking considering how much work it has to do to call a function.
I tried the same thing in Pike and was completely surprised to get 100,000,000 recursions in about 2 seconds. I have no idea how it did that, but I suspect it flattened the recursion into an iterative process -- it doesn't seem to consume any extra memory while it runs. [Note: Pike does flatten trivial cases, but segfaults on more complicated ones, or so I'm told.]
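I have no idea what Pike actually does internally, but one well-known way to run tail calls without growing any stack is a trampoline: the function returns a description of the next call instead of making it, and a small driver loop keeps invoking those until a plain value comes back. A rough Python sketch of the idea (my own illustration with made-up names, definitely not Pike's mechanism, and nowhere near its speed):

def f_step(i, l):
    # Return either a final value or a thunk describing the next
    # call, instead of calling ourselves directly.
    if i < l:
        return lambda: f_step(i + 1, l)
    return i

def trampoline(result):
    # Keep bouncing until we get something that isn't callable.
    while callable(result):
        result = result()
    return result

print(trampoline(f_step(0, 1_000_000)))  # constant stack depth, just a loop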
I used these otherwise useless functions:
/* C */
int f(int i, int l) { if (i < l) return f(i + 1, l); return i; }

# Perl
sub f { return f($_[0]+1, $_[1]) if $_[0] < $_[1]; return $_[0] };

# Python
def f(i, l):
    if i < l:
        return f(i + 1, l)
    return i
I'm very curious how other languages (e.g., PHP, Ruby, Java, Lua, OCaml, Haskell) handle recursion and why they handle it that way. Please also note whether it makes a difference if the function is "tail-recursive" (see comment).
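To be concrete about what I mean by tail-recursive: in all three functions above, the recursive call is the very last thing the function does, so the caller's frame has nothing left to do and could in principle be reused. Compare that with a version where work is still pending after the call, which no amount of tail-call elimination can flatten (Python again, names made up, just to illustrate the shape):

def f_tail(i, l):
    # Tail call: nothing happens after the inner call returns, so a
    # language that eliminates tail calls can reuse this frame.
    if i < l:
        return f_tail(i + 1, l)
    return i

def f_not_tail(i, l):
    # Not a tail call: the "1 +" still has to run after the inner
    # call returns, so every frame has to stay around.
    if i < l:
        return 1 + f_not_tail(i + 1, l)
    return i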