As the comments to your question suggest, you should walk through the code in a debugger to get a good understanding of what's going on if you can't follow the explanation in the book. But I'll give you a brief overview of what's happening:
What's being demonstrated is 'memoisation', which is a common optimisation technique used in functional programming. A function is said to be pure if its result depends only on the arguments passed into it (and it has no side effects). So, if a function is pure, you can cache the result keyed on the arguments - this technique is called memoisation. You would do this when a function is expensive to calculate and is called multiple times with the same arguments.
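To give a rough example of the idea (made up for illustration, not from the book - the names memoize and slowSquare are invented), a generic memoiser for a pure single-argument function could look like this:

var memoize = function (fn) {
    var cache = {};                 // results keyed by the argument
    return function (arg) {
        if (!(arg in cache)) {      // only call the real function on a cache miss
            cache[arg] = fn(arg);
        }
        return cache[arg];          // otherwise return the stored result
    };
};

var slowSquare = function (n) { return n * n; };  // pretend this is expensive
var square = memoize(slowSquare);
square(4);  // calculated and cached
square(4);  // returned straight from the cache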
The classic example used to demonstrate this (as here) is generating Fibonacci numbers. I'm not going to go through how those are worked out, but basically as you go to higher and higher numbers you repeat yourself more and more, because each number is calculated from the preceding two. By memoising each intermediate result you only have to calculate it once, which makes the algorithm much faster (much, much faster as you go higher up the sequence).
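For example, a naive recursive version (again just an illustration, not the book's code) recalculates the same values over and over:

function naiveFib(n) {
    if (n < 2) {
        return n;
    }
    // both recursive calls recompute the same smaller values,
    // so the number of calls grows exponentially as n increases
    return naiveFib(n - 1) + naiveFib(n - 2);
}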
As far as this code is concerned, the memoizer takes two parameters. The first, 'memo', is the cache; in this case it's passed in with the first two values already filled in - [0, 1] - which are the first two Fibonacci numbers.
The second parameter is the function to which the memoisation will be applied. In this case a recursive Fibonacci function:
function (shell, n) {
    return shell(n - 1) + shell(n - 2);
}
i.e. the result is the sum of the previous two numbers in the sequence.
The memoizer first checks to see if it already has a cached result. If it does, it returns that immediately. If not, it calculates the result and stores it in the cache. Without doing this it would be repeating the same calculations again and again, and would rapidly become impossibly slow once you get to the higher numbers in the sequence.
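Putting that together, a memoizer with the behaviour described above might be sketched like this (a sketch of the idea, not necessarily the exact code from the book):

var memoizer = function (memo, formula) {
    var recur = function (n) {
        var result = memo[n];               // check the cache first
        if (typeof result !== 'number') {
            result = formula(recur, n);     // not cached yet: calculate it...
            memo[n] = result;               // ...and store it for next time
        }
        return result;
    };
    return recur;
};

var fibonacci = memoizer([0, 1], function (shell, n) {
    return shell(n - 1) + shell(n - 2);
});

fibonacci(10);  // 55 - each intermediate value is only calculated once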