views: 249
answers: 3

I like Clojure. One thing that bothers me about the language is that I don't know how lazy sequences are implemented, or how they work.

I know that lazy sequences only evaluate the items in the sequence that are asked for, but how do they do this?

What makes lazy sequences so efficient that they don't consume much stack?

How come you can wrap recursive calls in a lazy sequence and no longer get a stack overflow for large computations?

What resources do lazy sequences consume to do what they do?

In what scenarios are lazy sequences inefficient?

In what scenarios are lazy sequences most efficient?

+5  A: 

Originally, lazy sequences in Clojure were evaluated item-by-item as they were needed. Chunked sequences were added in Clojure 1.1 to improve performance. Instead of item-by-item evaluation, "chunks" of 32 elements are evaluated at a time. This reduces the overhead that lazy evaluation incurs. It also lets Clojure take advantage of the underlying data structures. For example, PersistentVector is implemented as a tree of 32-element arrays. This means that to access an element, you must traverse the tree until the appropriate array is found. With chunked sequences, entire arrays are grabbed at a time, so all 32 elements can be retrieved before the tree needs to be re-traversed.
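
A quick REPL sketch (not part of the original answer) makes the chunking visible: even though only the first element is asked for, a whole 32-element chunk gets realized at once.

(first (map #(do (println "realizing" %) %) (range 100)))
;; prints "realizing 0" through "realizing 31", then returns 0,
;; because (range 100) is a chunked seq and map preserves chunking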

There has been discussion about providing a way to force item-by-item evaluation in situations where full laziness is required. However, I don't think it has been added to the language yet.

How come you can wrap recursive calls in a lazy sequence and no longer get a stack overflow for large computations?

Do you have an example of what you mean? If you have a recursive binding to a lazy-seq, it can definitely cause a stack overflow.

dbyrne
Check out my answer for a recursive lazy sequence.
Isaac Hodes
I think I was a bit unclear. I understand recursive lazy sequences, but was unsure why mudge was worried they would cause a stack overflow. Laziness by its very nature keeps a stack overflow from occurring.
dbyrne
Ah, got it -- makes sense. Apologies.
Isaac Hodes
No worries...nice answer.
dbyrne
Helpful. Thank you.
mudge
+10  A: 

Let's do this.

I know that lazy sequences only evaluate the items in the sequence that are asked for, but how do they do this?

Lazy sequences (henceforth LS, because I am a LP, or Lazy Person) are composed of parts. The head, the part (or parts, since 32 elements at a time are actually evaluated, as of Clojure 1.1, and I think 1.2) of the sequence that has been evaluated, is followed by something called a thunk, which is basically a chunk of information (think of it as the rest of your function that creates the sequence, unevaluated) waiting to be called. When it is called, the thunk evaluates however much is asked of it, and a new thunk is created, with context as necessary (how much has been called already, so it can resume from where it was before).

So you call (take 10 (whole-numbers)) -- assume whole-numbers is a function producing a lazy sequence of whole numbers. That means you're forcing evaluation of thunks 10 times (though internally this may be a little different depending on optimizations).
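
A hypothetical definition of whole-numbers (it isn't defined in the original post), just to make the example concrete -- a self-calling lazy-seq producer:

(defn whole-numbers
  ([] (whole-numbers 0))                              ; start from zero
  ([n] (lazy-seq (cons n (whole-numbers (inc n))))))  ; each forcing yields n and a new thunk

(take 10 (whole-numbers)) ; => (0 1 2 3 4 5 6 7 8 9)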

What makes lazy sequences so efficient that they don't consume much stack?

This becomes clearer once you read the previous answer (I hope): unless you call for something in particular, nothing is evaluated. When you call for something, each element of the sequence can be evaluated individually, then discarded.

If the sequence is not lazy, oftentimes it is holding onto its head, which consumes heap space. If it is lazy, it is computed, then discarded, as it is not required for subsequent computations.
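
A hedged sketch of what "holding onto the head" looks like in practice (not from the original answer): a top-level def pins the head of a cached lazy seq, so everything realized while traversing stays reachable on the heap.

(def nums (map inc (range 100000000))) ; the var holds the head of a caching lazy seq
(last nums)                            ; realizes everything while the head is pinned -- likely exhausts the heap
(last (map inc (range 100000000)))     ; => 100000000; nothing holds the head, so realized chunks are GC'd as we go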

How come you can wrap recursive calls in a lazy sequence and no longer get a stack overflow for large computations?

See the previous answer and consider: the lazy-seq macro, quoting the documentation, "will invoke the body only the first time seq is called, and will cache the result and return it on all subsequent seq calls."
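
A small sketch (mine, not the answer's) of that caching behaviour -- the body's side effect runs only on the first forcing:

(def s (lazy-seq (println "realizing...") (list 1 2 3)))
(first s) ; prints "realizing..." and returns 1
(first s) ; returns 1 without printing -- the realized seq is cached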

Check out the filter function for a cool LS that uses recursion:

(defn filter
  "Returns a lazy sequence of the items in coll for which
  (pred item) returns true. pred must be free of side-effects."
  [pred coll]
  (let [step (fn [p c]
                 (when-let [s (seq c)]
                   (if (p (first s))
                     (cons (first s) (filter p (rest s)))
                     (recur p (rest s)))))]
    (lazy-seq (step pred coll))))
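
A quick usage sketch: because filter is lazy, it works fine over an infinite range as long as we only take a few matches.

(take 5 (filter even? (range))) ; => (0 2 4 6 8)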

What resources do lazy sequences consume to do what they do?

I'm not quite sure what you're asking here. LSs require memory and CPU cycles. They just don't keep banging the stack, and filling it up with results of the computations required to get the sequence elements.

In what scenarios are lazy sequences inefficient?

When you're using small seqs that are fast to compute and won't be used much, making it an LS is inefficient because it requires another couple chars to create.

In all seriousness, unless you're trying to make something extremely performant, LSs are the way to go.

In what scenarios are lazy sequences most efficient?

When you're dealing with seqs that are huge and you're only using bits and pieces of them, that is when you get the most benefit from using them.
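
For instance (an illustrative sketch, not from the original answer), asking an infinite seq of squares for the first one past 1000 only ever realizes the first couple of 32-element chunks:

(first (drop-while #(< % 1000) (map #(* % %) (range))))
;; => 1024 -- the rest of the infinite seq is never computed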

Really, it's pretty much always better to use LSs over non-LSs, in terms of convenience, ease of understanding (once you get the hang of them) and reasoning about your code, and speed.

Isaac Hodes
Holding onto the head doesn't consume stack. It consumes heap.
Michał Marczyk
I don't think I said it used stack. I am well aware of it consuming heap space (having just run into that error an hour ago)!
Isaac Hodes
Quoting from your answer above: "If the sequence is not lazy, oftentimes it is holding onto its head, which consumes stack space." (See the "What makes lazy sequences so efficient that they don't consume much stack" bullet.)
Michał Marczyk
As far as I know, I'm correct about that. See http://pastie.org/1044483 for what I just did to test this.
Isaac Hodes
Also, for the OP, compare that, which gives a StackOverflow, to this (http://pastie.org/1044488), which (yeah, it's not anything interesting) doesn't, for the same input.
Isaac Hodes
"Holding onto the head" refers to holding onto a *lazy* seq's head; your first paste yields an SO of course (because of a growing control context), but you wouldn't normally say that it "holds onto its head". Hence my comment. Also, for the second paste: you shouldn't mix tail recursion and `lazy-seq`. It's not that important here -- the `lazy-seq` simply adds no value, but doesn't really harm anything either -- but generally it can lead to problems (or rather, to `lazy-seq` not realising its potential). See my answer for details (I posted it primarily to discuss this issue).
Michał Marczyk
Ah fair enough; thanks for the clarification. My terminology isn't quite up to speed yet. As for the superfluous `lazy-seq`, I shouldn't have posted that as an example, as, as you say, it doesn't do anything. Once again, thanks for the comments!
Isaac Hodes
One more comment re: the first issue: any strict sequence producer will "hold onto the head" of the produced sequence, because it needs to produce it all at once. That's probably the reason why this expression is not usually used in the context of strict sequences (because it adds no information) and the reason why I misunderstood your intention.
Michał Marczyk
(Hm, posted the comment before finishing it... continuing from above:) With the confusion cleared up, I do of course agree with the gist of what you wanted to say.
Michał Marczyk
Absolutely agreed; sorry for the confusion. Here's a better (http://pastie.org/1044535) example for showing `lazy-seq` ing that actually adds to the function. I've been doing too much tail-recursion lately… Thanks again, Michal.
Isaac Hodes
Very helpful. Thank you.
mudge
+7  A: 

I know that lazy sequences only evaluate the items in the sequence that are asked for, but how do they do this?

I think the previously posted answers already do a good job explaining this part. I'll only add that the "forcing" of a lazy sequence is an implicit -- paren-free! :-) -- function call; perhaps this way of thinking about it will make some things clearer. Also note that forcing a lazy sequence involves a hidden mutation -- the thunk being forced needs to produce a value, store it in a cache (mutation!) and throw away its executable code, which will not be required again (mutation again!).

What makes lazy sequences so efficient that they don't consume much stack?

What resources do lazy sequences consume to do what they do?

They don't consume stack, because they consume heap instead. A lazy sequence is a data structure, living on the heap, which contains a small bit of executable code which can be called to produce more of the data structure if/when that is required.

How come you can wrap recursive calls in a lazy sequence and no longer get a stack overflow for large computations?

Firstly, as mentioned by dbyrne, you can very well get an SO when working with lazy sequences if the thunks themselves need to execute code with a very deeply nested call structure.
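
One way to provoke this (a sketch I've added, not code from this answer): repeatedly wrapping a lazy seq in concat builds thunks nested inside thunks, and forcing the first element has to unwind them all at once.

(def deeply-nested
  (reduce (fn [acc x] (concat acc [x])) [] (range 100000))) ; builds ~100000 nested lazy seqs without forcing them
(first deeply-nested) ; likely throws StackOverflowError -- forcing must recurse through every nested thunk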

However, in a certain sense you can use lazy seqs in place of tail recursion, and to the degree that this works for you you can say that they help in avoiding SOs. In fact, rather importantly, functions producing lazy sequences should not be tail recursive; the conservation of stack space with lazy seq producers arises from the aforementioned stack -> heap transfer and any attempts to write them in a tail recursive fashion will only break things.

The key insight is that a lazy sequence is an object which, when first created, doesn't hold any items (as a strict sequence always does); when a function returns a lazy sequence, only this "lazy sequence object" is returned to the caller, before any forcing takes place. Thus the stack frame used up by the call which returned the lazy sequence is popped before any forcing takes place. Let's have a look at an example producer function:

(defn foo-producer [] ; not tail recursive...
  (lazy-seq
    (cons :foo        ; because it returns the value of the cons call...
           (foo-producer)))) ; which wraps a non-tail self-call

This works because lazy-seq returns immediately, thus (cons :foo (foo-producer)) also returns immediately and the stack frame used up by the outer call to foo-producer is immediately popped. The inner call to foo-producer is hidden in the rest part of the sequence, which is a thunk; if/when that thunk is forced, it will briefly use up its own frame on the stack, but then return immediately as described above etc.
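
For instance (a quick check, not in the original answer), forcing only a few elements works fine even though the sequence above is infinite:

(take 3 (foo-producer)) ; => (:foo :foo :foo)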

Chunking (mentioned by dbyrne) changes this picture very slightly, because a larger number of elements gets produced at each step, but the principle remains the same: each step uses up some stack while the corresponding elements of the lazy seq are being produced, then that stack is reclaimed before more forcing takes place.

In what scenarios are lazy sequences inefficient?

In what scenarios are lazy sequences most efficient?

There's no point to being lazy if you need to hold the entire thing at once anyway. A lazy sequence makes a heap allocation at every step when not chunked or at every chunk -- once every 32 steps -- when chunked; avoiding that can net you a performance gain in some situations.
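
A hedged comparison sketch: when the whole result is needed anyway, an eager collection avoids the per-step (or per-chunk) allocation entirely.

(mapv inc (range 1000))        ; eager: builds a single vector up front
(doall (map inc (range 1000))) ; lazy seq, fully realized anyway: allocates seq nodes as it goes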

However, lazy sequences enable a pipelined mode of data processing:

(->> (lazy-seq-producer)               ; possibly (->> (range)
     (a-lazy-seq-transformer-function) ;               (filter even?)
     (another-transformer-function))   ;               (map inc))

Doing this in a strict way would allocate plenty of heap anyway, because you'd have to keep the intermediate results around to pass them to the next processing stage. Moreover, you'd need to keep the whole thing around, which is actually impossible in the case of (range) -- an infinite sequence! -- and when it is possible, it is usually inefficient.
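
Realizing a small slice of the pipeline sketched above (an added illustration, not part of the original answer) shows that only what is taken ever gets computed:

(take 5 (->> (range)
             (filter even?)
             (map inc)))
;; => (1 3 5 7 9)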

Michał Marczyk
Very helpful, thank you.
mudge