views: 2183 · answers: 13

I'm writing a function to find triangle numbers and the natural way to write it is recursively:

function triangle (x)
   if x == 0 then return 0 end
   return x+triangle(x-1)
end

But attempting to calculate the first 100,000 triangle numbers fails with a stack overflow after a while. This is an ideal function to memoize, but I want a solution that will memoize any function I pass to it.

+3  A: 
function memoize (f)
   local cache = {}
   return function (x)
             if cache[x] then
                return cache[x]
             else
                local y = f(x)
                cache[x] = y
                return y
             end
          end
end

triangle = memoize(triangle);

Note that to avoid a stack overflow, triangle would still need to be seeded.
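
The same seeding idea can be sketched in Python (my own translation, not part of the original answer): memoize the function, then warm the cache in increasing order so no call ever recurses more than one level deep.

```python
def memoize(f):
    cache = {}
    def wrapper(x):
        if x not in cache:
            cache[x] = f(x)
        return cache[x]
    return wrapper

@memoize
def triangle(x):
    if x == 0:
        return 0
    return x + triangle(x - 1)

# Seeding: compute small values first, so each call finds its
# predecessor already cached and recurses at most once.
for n in range(100001):
    triangle(n)

print(triangle(100000))  # 5000050000
```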

Jon Ericson
An interesting (but useless) construction with a generic memoize function: calling memoize on memoize
Adam Rosenfield
@Adam Rosenfield: Hmmm... funky!
Jon Ericson
Is that actually useless? If you memoize the same thing twice using this function, you get a brand new cache. If you memoize it using the memoization of this memoize function, you get back the same memoization of the original, with its cache already pre-primed. I think. My brain hurts.
Steve Jessop
Where by "twice", I mean chronologically - two different bits of code that each call M(f) get separate caches. If they call (M(M))(f) using the same instance of M(M), then they'd share an f-cache between them, without needing to know or care that it's the same function they've both memoized.
Steve Jessop
A: 

Here is a generic C# 3.0 implementation, in case it helps:

public static class Memoization
{
    public static Func<T, TResult> Memoize<T, TResult>(this Func<T, TResult> function)
    {
        var cache = new Dictionary<T, TResult>();
        var nullCache = default(TResult);
        var isNullCacheSet = false;
        return  parameter =>
                {
                    TResult value;

                    if (parameter == null && isNullCacheSet)
                    {
                        return nullCache;
                    }

                    if (parameter == null)
                    {
                        nullCache = function(parameter);
                        isNullCacheSet = true;
                        return nullCache;
                    }

                    if (cache.TryGetValue(parameter, out value))
                    {
                        return value;
                    }

                    value = function(parameter);
                    cache.Add(parameter, value);
                    return value;
                };
    }
}

(Quoted from a French blog article)

Romain Verdier
+2  A: 

In Scala (untested):

def memoize[A, B](f: (A)=>B) = {
  var cache = Map[A, B]()

  { x: A =>
    if (cache contains x) cache(x) else {
      val back = f(x)
      cache += (x -> back)

      back
    }
  }
}

Note that this only works for functions of arity 1, but with currying you could make it work. The more subtle problem is that memoize(f) != memoize(f) for any function f. One very sneaky way to fix this would be something like the following:

val correctMem = memoize(memoize _)

I don't think that this will compile, but it does illustrate the idea.
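
For illustration, here is the same trick sketched in Python rather than Scala (an assumption on my part that the semantics carry over): functions are hashable, so memoize can cache its own results, and two calls to memoize(f) then return the identical wrapper with a shared cache.

```python
def memoize(f):
    cache = {}
    def wrapper(x):
        if x not in cache:
            cache[x] = f(x)
        return cache[x]
    return wrapper

# Memoize memoize itself: the outer cache maps f -> its memoized wrapper.
memoize = memoize(memoize)

def square(x):
    return x * x

m1 = memoize(square)
m2 = memoize(square)
print(m1 is m2)  # True: both callers now share one cache for square
```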

Daniel Spiewak
Can I just say that you'd have saved me approximately 30 seconds confusion if you'd said "memoize(f) != memoize(f) for any function f" instead of "some function f"? I started thinking about fixed-point existence proofs, then realised you mean the exact same thing I did in my comments further up :-)
Steve Jessop
lol Good point, my statement isn't quite sufficient. I'll fix it.
Daniel Spiewak
To me at least, Scala looks like some Frankenstein monster of Python, C#, and C++.
RCIX
+4  A: 

You're also asking the wrong question for your original problem ;)

This is a better way for that case:

triangle(n) = n * (n + 1) / 2

Furthermore, supposing the formula didn't have such a neat solution, memoisation would still be a poor approach here. You'd be better off just writing a simple loop in this case. See this answer for a fuller discussion.
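
A quick sanity check (Python, illustrative only) that the closed form n * (n + 1) / 2 agrees with the recursive definition from the question:

```python
def triangle_closed(n):
    # Closed form: the sum 1 + 2 + ... + n
    return n * (n + 1) // 2

def triangle_rec(n):
    # Recursive definition from the question
    return 0 if n == 0 else n + triangle_rec(n - 1)

# The two agree on a range of small inputs
assert all(triangle_closed(n) == triangle_rec(n) for n in range(200))
print(triangle_closed(100000))  # 5000050000
```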

Luke Halliwell
Playing around with the function it seemed obvious there would be a simpler algorithm. Thanks!
Jon Ericson
You have got to be kidding me.
Steve Jessop
@onebyone.livejournal.com: I'm sure when I solve the problem, the notes will reveal this mathematical solution. ;-)
Jon Ericson
+3  A: 

Update: Commenters have pointed out that memoization is a good way to optimize recursion. Admittedly, I hadn't considered this before, since I generally work in a language (C#) where generalized memoization isn't so trivial to build. Take the post below with that grain of salt in mind.

I think Luke likely has the most appropriate solution to this problem, but memoization is not generally the solution to any issue of stack overflow.

Stack overflow usually is caused by recursion going deeper than the platform can handle. Languages sometimes support "tail recursion", which re-uses the context of the current call, rather than creating a new context for the recursive call. But a lot of mainstream languages/platforms don't support this. C# has no inherent support for tail-recursion, for example. The 64-bit version of the .NET JITter can apply it as an optimization at the IL level, which is all but useless if you need to support 32-bit platforms.

If your language doesn't support tail recursion, your best option for avoiding stack overflows is either to convert to an explicit loop (much less elegant, but sometimes necessary), or find a non-iterative algorithm such as Luke provided for this problem.
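
For this particular problem the explicit-loop conversion is short; a Python sketch (illustrative, not from the answer):

```python
def triangle_iter(n):
    # Same result as the recursive version, but constant stack depth
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

print(triangle_iter(100000))  # 5000050000
```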

Chris Ammerman
I thought that was the reason for the questioner saying he was calculating the first 10,000 triangular numbers. It demonstrates (in a contrived way) that memoization can reduce/prevent recursion 'automatically' if terms of f are calculated in increasing order, because the small values are cached.
Steve Jessop
... of course the cache has to be big enough. A smarter memoization function might restrict the cache size, and that would still prevent recursion in this toy example. The point being that all this leads to Functional Language Optimization 101.
Steve Jessop
Actually, this function ought to be memoized even if tail recursion is in effect. To convince yourself, imagine calling it twice with two very large numbers. The second call will be much faster if the results of the first are cached.
Jon Ericson
+2  A: 

There's a scary-looking C++ preprocessor and library to do memoization as a recursion-optimization automatically in C++. That is, it will identify recursive functions and replace them with versions that do result caching, to get the same benefit that a good functional language would offer:

http://www.apl.jhu.edu/~paulmac/c++-memoization.html

Steve Jessop
A: 

Extending the idea, it's also possible to memoize functions with two input parameters:

function memoize2 (f)
   local cache = {}
   return function (x, y)
             if cache[x..','..y] then
                return cache[x..','..y]
             else
                local z = f(x,y)
                cache[x..','..y] = z
                return z
             end
          end
end

Notice that parameter order matters in the caching algorithm, so if parameter order doesn't matter in the function being memoized, the odds of a cache hit can be increased by sorting the parameters before checking the cache.
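
That sorting idea can be sketched in Python (a hypothetical helper of my own, not from the answer): normalize the cache key so a commutative function shares one entry for both argument orders.

```python
def memoize2_commutative(f):
    cache = {}
    def wrapper(x, y):
        # Normalize the key: f(a, b) and f(b, a) hit the same entry
        key = (x, y) if x <= y else (y, x)
        if key not in cache:
            cache[key] = f(x, y)
        return cache[key]
    return wrapper

calls = []
def mul(a, b):
    calls.append((a, b))
    return a * b

mmul = memoize2_commutative(mul)
print(mmul(3, 7), mmul(7, 3))  # 21 21
print(len(calls))              # 1: the second call was a cache hit
```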

But it's important to note that some functions can't be profitably memoized. I wrote memoize2 to see if the recursive Euclidean algorithm for finding the greatest common divisor could be sped up.

function gcd (a, b) 
   if b == 0 then return a end
   return gcd(b, a%b)
end

As it turns out, gcd doesn't respond well to memoization. The calculation it does is far less expensive than the caching algorithm. Even for large numbers, it terminates fairly quickly. After a while, the cache grows very large. This algorithm is probably as fast as it can be.

Jon Ericson
Couldn't you use a vararg in the closure returned by the memoize function? In Lua, you can do things like t = {...} to pack a variable argument list into a table, or pass it along directly with f(...). Then just pack the vararg list into a string to use as the cache index.
Lee Baldwin
NOTE: this will break if arguments contain ',' comma when converted to string. eg, f("1", "2,3") will evaluate same as f("1,2", "3"), even if that is the incorrect result.
Aaron
+3  A: 

I bet something like this should work with variable argument lists in Lua:

local function varg_tostring(...)
    local s = select(1, ...)
    for n = 2, select('#', ...) do
        s = s..","..select(n,...)
    end
    return s
end

local function memoize(f)
    local cache = {}
    return function (...)
        local al = varg_tostring(...)
        if cache[al] then
            return cache[al]
        else
            local y = f(...)
            cache[al] = y
            return y
        end
    end
end

You could probably also do something clever with metatables and __tostring so that the argument list could just be converted with tostring(). Oh, the possibilities.
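
For comparison, in a language with hashable tuples the argument list itself can serve as the key, which sidesteps both the string conversion and the separator-collision problem noted in the comments. A Python sketch (illustrative):

```python
def memoize(f):
    cache = {}
    def wrapper(*args):
        # The args tuple is the cache key: no string building, no escaping
        if args not in cache:
            cache[args] = f(*args)
        return cache[args]
    return wrapper

@memoize
def join(a, b):
    return a + "|" + b

print(join("1", "2,3"))  # 1|2,3
print(join("1,2", "3"))  # 1,2|3: keys ('1', '2,3') and ('1,2', '3') stay distinct
```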

Lee Baldwin
Good work! I haven't looked at variable argument list in Lua yet, so this is a great example.
Jon Ericson
Is there a way to convert args into a value more efficiently than converting to a string?
Aaron
NOTE: you need to escape ',' characters in the string 's' -- otherwise memoize of f("1", "2,3") will return the same value as f("1,2", "3"), even if the two functions return different results. Which would be bad.
Aaron
It could be done as an N-dimensional array, which would solve the comma issue, but the cache access might be less efficient. Mathematical and recursive functions are the best candidates for memoization, so I don't think these are huge issues.
Jon Ericson
you should add an option to make the cache a weak table (weak keys and values), so the cache can get cleaned once in a while, and avoid memory bloating
Robert Gould
A: 

In the vein of posting memoization in different languages, I'd like to respond to @onebyone.livejournal.com with a non-language-changing C++ example.

First, a memoizer for single arg functions:

template <class Result, class Arg, class ResultStore = std::map<Arg, Result> >
class memoizer1{
public:
    template <class F>
    const Result& operator()(F f, const Arg& a){
        typename ResultStore::const_iterator it = memo_.find(a);
        if(it == memo_.end()) {
            it = memo_.insert(make_pair(a, f(a))).first;
        }
        return it->second;
    }
private:
    ResultStore memo_;
};

Create an instance of the memoizer and feed it your function and argument. Just make sure not to share the same memo between two different functions (but you can share it between different implementations of the same function).

Next, a driver function and an implementation. Only the driver function need be public:

int fib(int);  // driver
int fib_(int); // implementation

Implemented:

int fib_(int n){
    ++total_ops;
    if(n == 0 || n == 1) 
        return 1;
    else
        return fib(n-1) + fib(n-2);
}

And the driver, to memoize:

int fib(int n) {
    static memoizer1<int,int> memo;
    return memo(fib_, n);
}

Permalink showing output on codepad.org. Number of calls is measured to verify correctness. (insert unit test here...)

This only memoizes single-input functions. Generalizing for multiple or varying arguments is left as an exercise for the reader.

Aaron
+2  A: 

Mathematica has a particularly slick way to do memoization, relying on the fact that hashes and function calls use the same syntax:

triangle[0] = 0;
triangle[x_] := triangle[x] = x + triangle[x-1]

That's it. It works because the rules for pattern-matching function calls are such that it always uses a more specific definition before a more general definition.

Of course, as has been pointed out, this example has a closed-form solution: triangle[x_] := x*(x+1)/2. Fibonacci numbers are the classic example of how adding memoization gives a drastic speedup:

fib[0] = 1;
fib[1] = 1;
fib[n_] := fib[n] = fib[n-1] + fib[n-2]

Although that too has a closed-form equivalent, albeit messier: http://mathworld.wolfram.com/FibonacciNumber.html

I disagree with the person who suggested this was inappropriate for memoization because you could "just use a loop". The point of memoization is that any repeat function calls are O(1) time. That's a lot better than O(n). In fact, you could even concoct a scenario where the memoized implementation has better performance than the closed-form implementation!
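
The speedup is easy to demonstrate by counting calls; a Python sketch (my own, not from the answer), using the same fib definition:

```python
def memoize(f):
    cache = {}
    def wrapper(n):
        if n not in cache:
            cache[n] = f(n)
        return cache[n]
    return wrapper

calls = []

@memoize
def fib(n):
    calls.append(n)
    return 1 if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))     # 1346269
print(len(calls))  # 31: each value 0..30 computed exactly once
```

Without the memoize decorator, the same fib(30) would make millions of calls; with it, each value is computed once and every repeat lookup is O(1).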

dreeves
+1  A: 

In Perl, generic memoization is easy to get. The Memoize module is part of the Perl core and is highly reliable, flexible, and easy to use.

The example from its manpage:

# This is the documentation for Memoize 1.01
use Memoize;
memoize('slow_function');
slow_function(arguments);    # Is faster than it was before

You can add, remove, and customize memoization of functions at run time! You can provide callbacks for custom memento computation.

Memoize.pm even has facilities for making the memento cache persistent, so it does not need to be re-filled on each invocation of your program!

Here's the documentation: http://perldoc.perl.org/5.8.8/Memoize.html

Hercynium
+1  A: 

See this blog post for a generic Scala solution, up to 4 arguments.

thSoft
+1  A: 

Here's something that works without converting the arguments to strings. The only caveat is that it can't handle a nil argument. But the accepted solution can't distinguish the value nil from the string "nil", so that's probably OK.

local function m(f)
  local t = { }
  local function mf(x, ...) -- memoized f
    assert(x ~= nil, 'nil passed to memoized function')
    if select('#', ...) > 0 then
      t[x] = t[x] or m(function(...) return f(x, ...) end)
      return t[x](...)
    else
      t[x] = t[x] or f(x)
      assert(t[x] ~= nil, 'memoized function returns nil')
      return t[x]
    end
  end
  return mf
end
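
The same currying trick, sketched in Python (my translation, not from the answer): the cache is keyed on the first argument, and for multi-argument calls the cached value is itself a memoized closure over the remaining arguments.

```python
def m(f):
    t = {}
    def mf(x, *rest):  # memoized f
        if rest:
            # Cache a memoized closure over the remaining arguments
            if x not in t:
                t[x] = m(lambda *r: f(x, *r))
            return t[x](*rest)
        else:
            if x not in t:
                t[x] = f(x)
            return t[x]
    return mf

madd = m(lambda a, b: a + b)
print(madd(2, 3))  # 5
print(madd(2, 3))  # 5, now served from the nested cache
```

As in the Lua version, a given memoized function should be called with a consistent number of arguments, since the cache slot for x holds either a plain value or a nested memoizer, not both.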
Norman Ramsey