views: 349

answers: 5
Hi, I don't understand how FP compilers make code that deals with immutable data structures fast, keep it from blowing up the stack, etc.

For example, an insert operation on a tree has to copy the whole tree before adding the new node and return the copy, whereas its imperative counterpart only needs to add a pointer to the new node. If the insert operation is run millions of times, that would use a huge amount of memory, and the copying would get slower and slower as the tree grows. How do FP compilers actually optimize this?

A: 

Take a look at the Zipper data structure.
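A minimal sketch of a binary-tree zipper in Haskell (the type and function names are illustrative, not from any particular library). The idea is to keep a focused subtree plus a trail of "crumbs" recording how to rebuild the ancestors, so a local edit touches only the path back to the root:

    data Tree a = Leaf | Node (Tree a) a (Tree a)

    -- Each crumb remembers the direction taken, the value at that node,
    -- and the sibling subtree we did not descend into.
    data Crumb a = LeftCrumb a (Tree a)   -- went left, kept right sibling
                 | RightCrumb a (Tree a)  -- went right, kept left sibling

    type Zipper a = (Tree a, [Crumb a])

    goLeft, goRight, goUp :: Zipper a -> Maybe (Zipper a)
    goLeft  (Node l v r, bs) = Just (l, LeftCrumb v r : bs)
    goLeft  (Leaf, _)        = Nothing
    goRight (Node l v r, bs) = Just (r, RightCrumb v l : bs)
    goRight (Leaf, _)        = Nothing
    goUp (t, LeftCrumb v r : bs)  = Just (Node t v r, bs)
    goUp (t, RightCrumb v l : bs) = Just (Node l v t, bs)
    goUp (_, [])                  = Nothing

    -- Replace the focused subtree; everything off the path is shared.
    modify :: (Tree a -> Tree a) -> Zipper a -> Zipper a
    modify f (t, bs) = (f t, bs)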

Todd Stout
+10  A: 

You don't have to copy the whole tree to make a change; you can share most of the structure. See e.g. the diagrams in this blog, or this talk by Rich Hickey on Clojure (see discussion of hash tries about halfway through).
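For a concrete picture, here is a sketch in Haskell of a persistent insert into a plain (unbalanced) binary search tree. Each call allocates only the nodes on the search path; every subtree off that path is shared between the old and new trees:

    data Tree a = Leaf | Node (Tree a) a (Tree a)

    -- Returns a new tree; the argument tree is untouched and still valid.
    insert :: Ord a => a -> Tree a -> Tree a
    insert x Leaf = Node Leaf x Leaf
    insert x t@(Node l v r)
      | x < v     = Node (insert x l) v r   -- right subtree shared as-is
      | x > v     = Node l v (insert x r)   -- left subtree shared as-is
      | otherwise = t                       -- already present: share everything

So an insert into a balanced tree of a million nodes copies about 20 nodes, not a million, and both versions remain usable afterwards, which is what "persistent" means here.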

Brian
+6  A: 

The compiler won't really optimize this; it is something you have to design for specifically when coding. Techniques for doing this are explained in the excellent Purely Functional Data Structures (book, thesis).
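One classic technique in that style, sketched here in Haskell (an illustration of the approach, not code lifted from the book): a purely functional FIFO queue made of two lists, with amortized O(1) push and pop:

    -- Front list holds elements to pop; back list holds pushes, reversed.
    data Queue a = Queue [a] [a]

    emptyQ :: Queue a
    emptyQ = Queue [] []

    -- Invariant: the front list is empty only when the whole queue is.
    check :: Queue a -> Queue a
    check (Queue [] back) = Queue (reverse back) []
    check q               = q

    push :: a -> Queue a -> Queue a
    push x (Queue front back) = check (Queue front (x : back))

    pop :: Queue a -> Maybe (a, Queue a)
    pop (Queue [] _)           = Nothing
    pop (Queue (x:front) back) = Just (x, check (Queue front back))

The occasional reverse is paid for across many cheap operations, which is exactly the kind of amortized analysis the book develops.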

Adam Goode
A: 

Oh, this is brilliant; thanks for the great info.

romerun
A: 

If the garbage collector is doing its job, the older copies of the data structure will be reclaimed when they're no longer being used.
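As a small illustration (assuming the Tree and insert sketched in the structural-sharing answer above): if you only keep the newest version, each step drops the last reference to the previous tree, so the nodes unique to it become garbage right away:

    import Data.List (foldl')

    -- foldl' keeps only the latest tree alive; the collector reclaims the
    -- handful of path nodes private to each superseded version, so memory
    -- stays proportional to the live tree, not to the number of inserts.
    buildTree :: Ord a => [a] -> Tree a
    buildTree = foldl' (flip insert) Leaf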

Barry Brown