
Does anyone know what is the worst possible asymptotic slowdown that can happen when programming purely functionally as opposed to imperatively (i.e. allowing side-effects)?

Clarification from comment by itowlson: is there any problem for which the best known non-destructive algorithm is asymptotically worse than the best known destructive algorithm, and if so by how much?

A: 

With the exception of microbenchmarks, I've never seen anything come in with less than 50-100% overhead compared to imperative languages. OCaml seems to have the best practical performance.

Lothar
WHAT? 100% overhead? That's twice as slow. I would challenge those numbers any day.
Martinho Fernandes
Perhaps you *should* challenge them then. Show us some data that suggests otherwise rather than simply saying it's wrong. ;)
jalf
The question did not ask about constant factors. It asked about asymptotic effects of using purely functional algorithms and data structures as opposed to mutable data structures.
Brian Campbell
@Martinho: On the application level of a serious real-world program? I want to see this. With 100% overhead (remember that a lot of real-world stuff just isn't a functional problem). And yes, I know he didn't ask for constant factors, but answering the question in other ways would be pretty stupid, as you can't give a generic asymptotic performance overview for algorithms.
Lothar
Ok, I might have been a little too rash in my comment, but those numbers just screamed "badly written code" to me.
Martinho Fernandes
@Martinho Fernandes: Try writing an efficient sort in Haskell, for example. Haskell's built-in sorts are hundreds of times slower than those of other languages, and its elegant quicksort is thousands of times slower. Without in-place mutation you end up copying everything multiple times, which destroys the scalability of parallelism on a multicore.
Jon Harrop
+15  A: 

This article claims that the known purely functional implementations of the union-find algorithm all have worse asymptotic complexity than the one they publish, which has a purely functional interface but uses mutable data internally.
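To make the gap concrete, here is a minimal sketch (in Python, chosen purely for illustration) of a purely functional union-find: union by rank but no path compression, so `find` takes O(log n) hops rather than the near-constant amortized time of the classic imperative structure. Plain dict copies stand in for a persistent map; a real implementation would use a balanced-tree map so each update costs O(log n) instead of O(n).

```python
# Sketch of a purely functional union-find. Every operation returns a NEW
# structure instead of mutating; old versions remain valid (persistence).
# Union by rank keeps trees shallow, so find is O(log n) hops, but without
# path compression we lose the near-O(1) amortized bound of the imperative
# version. Dict copies are used only for clarity of the interface.

def uf_new():
    return ({}, {})  # (parent, rank), treated as immutable by convention

def uf_find(uf, x):
    parent, _ = uf
    while parent.get(x, x) != x:
        x = parent[x]
    return x

def uf_union(uf, x, y):
    parent, rank = uf
    rx, ry = uf_find(uf, x), uf_find(uf, y)
    if rx == ry:
        return uf
    if rank.get(rx, 0) < rank.get(ry, 0):
        rx, ry = ry, rx  # attach the shallower tree under the deeper one
    new_parent = dict(parent)
    new_parent[ry] = rx
    new_rank = dict(rank)
    if rank.get(rx, 0) == rank.get(ry, 0):
        new_rank[rx] = rank.get(rx, 0) + 1
    return (new_parent, new_rank)
```

Note that after a union the old value is still usable unchanged, which is exactly the "history" a persistent interface buys you, at the cost of the asymptotics discussed above.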

The fact that other answers claim that there can never be any difference, and that, for instance, the only "drawback" of purely functional code is that it can be parallelized, gives you an idea of the informedness/objectivity of the functional programming community on these matters.

Pascal Cuoq
How is the ability to be parallelized a drawback? I'm not really sure what you're trying to say here.
jalf
@jalf I was paraphrasing Jenni's "If the code is written well the performance of the same algorithm should be about the same. The only difference is that purely functional programs can be trivially split up on distributed systems". Does that not mean that the only "drawback" of purely functional code is that it can be parallelized?
Pascal Cuoq
Interestingly, if the question is about the asymptotic performance of best known imperative and purely functional algorithms for the same problem, this answer is the most on-topic of the currently provided answers, provides references, and also has the worst score.
Pascal Cuoq
Pascal: agreed, and I for one was glad to see some real (and unexpected!) info from actual studies. I suspect the downvote may have been to do with the snipe at the FP community -- which may or may not be justified, I don't know, but to me seems to distract from the substantive and relevant portion of your answer.
itowlson
+1: "and also has the worst score". Welcome to StackOverflow!
Jon Harrop
@Pascal I think you were cynical a bit soon. Your answer is now the second highest scoring, and the answers you complain about have been downvoted. I don't think they're indicative of the functional programming community; I think they're indicative of the people on SO who have quick trigger fingers and know a little about functional programming. My answer was inspired by the poor answers here; I came to the question not knowing the answer, thinking I'd learn something, and was appalled by what I saw, so I grabbed my copy of Okasaki and started following references to find out the real answer.
Brian Campbell
+2  A: 

I'd suggest reading about the performance of Haskell, and then taking a look at the Alioth Language Shootout benchmarks for functional languages vs. procedural/OO ones.

Kornel Kisielewicz
+86  A: 

According to Pippenger [1996], when comparing a Lisp system that is purely functional (and has strict evaluation semantics, not lazy) to one that can mutate data, an algorithm written for the impure Lisp that runs in O(n) can be translated to an algorithm in the pure Lisp that runs in O(n log n) time (based on work by Ben-Amram and Galil [1992] about simulating random access memory using only pointers). Pippenger also establishes that there are algorithms for which that is the best you can do; there are problems which are O(n) in the impure system which are Ω(n log n) in the pure system.

There are a few caveats to be made about this paper. The most significant is that it does not address lazy functional languages, such as Haskell. Bird, Jones and De Moor [1997] demonstrate that the problem constructed by Pippenger can be solved in a lazy functional language in O(n) time, but they do not establish (and as far as I know, no one has) whether or not a lazy functional language can solve all problems in the same asymptotic running time as a language with mutation.

The problem Pippenger constructs to require Ω(n log n) is specifically engineered to achieve this result, and is not necessarily representative of practical, real-world problems. There are a few restrictions on the problem that are a bit unexpected, but necessary for the proof to work; in particular, the problem requires that results are computed on-line, without being able to access future input, and that the input consists of a sequence of atoms from an unbounded set of possible atoms, rather than a fixed-size set. And the paper only establishes (lower-bound) results for an impure algorithm of linear running time; for problems that require a greater running time, it is possible that the extra O(log n) factor seen in the linear problem may be "absorbed" by the extra operations needed for algorithms with greater running times. These clarifications and open questions are explored briefly by Ben-Amram [1996].

In practice, many algorithms can be implemented in a pure functional language at the same efficiency as in a language with mutable data structures. For a good reference on techniques to use for implementing purely functional data structures efficiently, see Chris Okasaki's "Purely Functional Data Structures" [Okasaki 1998] (which is an expanded version of his thesis [Okasaki 1996]).

Anyone who needs to implement algorithms on purely-functional data structures should read Okasaki. You can always get at worst an O(log n) slowdown per operation by simulating mutable memory with a balanced binary tree, but in many cases you can do considerably better than that, and Okasaki describes many useful techniques, from amortized techniques to real-time ones that do the amortized work incrementally. Purely functional data structures can be a bit difficult to work with and analyze, but they provide many benefits like referential transparency that are helpful in compiler optimization, in parallel and distributed computing, and in implementation of features like versioning, undo, and rollback.
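The "simulate mutable memory with a balanced binary tree" trick mentioned above can be sketched in a few lines (Python here, purely for illustration): addresses are looked up bit by bit in a persistent binary trie, so each read or write touches O(log n) nodes, and every write returns a new tree that shares all untouched structure with the old one.

```python
# Persistent "mutable memory": a binary trie keyed on the bits of the
# address. Reads and writes cost O(bits) = O(log n); a write rebuilds only
# the root-to-leaf path and shares everything else with older versions.

EMPTY = None  # the all-zero memory

def mem_read(node, addr, bits=16):
    # Follow the low-order bits of addr down the trie.
    for _ in range(bits):
        if node is None:
            return 0  # uninitialised cells read as 0
        node = node[addr & 1]
        addr >>= 1
    return node if node is not None else 0

def mem_write(node, addr, value, bits=16):
    # Returns a NEW trie; the old one remains a valid earlier version.
    if bits == 0:
        return value  # leaf holds the stored word
    left = node[0] if node is not None else None
    right = node[1] if node is not None else None
    if addr & 1:
        return (left, mem_write(right, addr >> 1, value, bits - 1))
    else:
        return (mem_write(left, addr >> 1, value, bits - 1), right)
```

This is the generic O(log n)-per-operation fallback; Okasaki's structures are about doing better than this for specific problems.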

Note also that all of this discusses only asymptotic running times. Many techniques for implementing purely functional data structures give you a certain amount of constant factor slowdown, due to extra bookkeeping necessary for them to work, and implementation details of the language in question. The benefits of purely functional data structures may outweigh these constant factor slowdowns, so you will generally need to make trade-offs based on the problem in question.

References

  • Ben-Amram, Amir M., and Zvi Galil. 1992. "On Pointers versus Addresses." Journal of the ACM, 39(3).
  • Ben-Amram, Amir M. 1996. Notes on Pippenger's comparison of pure and impure Lisp. Unpublished manuscript, DIKU, University of Copenhagen.
  • Bird, Richard, Geraint Jones, and Oege de Moor. 1997. "More haste, less speed: lazy versus eager evaluation." Journal of Functional Programming, 7(5).
  • Okasaki, Chris. 1996. Purely Functional Data Structures. PhD thesis, Carnegie Mellon University.
  • Okasaki, Chris. 1998. Purely Functional Data Structures. Cambridge University Press.
  • Pippenger, Nicholas. 1996. "Pure versus impure Lisp." In Proceedings of POPL '96.

Brian Campbell
Pippenger is the undisputed authority on this question. But we should emphasize that his results are *theoretical*, not practical. When it comes to making functional data structures practical and efficient, you can't do better than Okasaki.
Norman Ramsey
Wow, thanks, Brian, this is very interesting to know! Since I struggled with Pippenger's explanation, may I ask you: are these "can't do better" algorithms pathological cases artificially constructed to show where there is an unbridgeable difference between the pure and impure systems, or are they representative of algorithms that are regularly encountered "in the wild"? Thanks!
itowlson
itowlson: I must admit that I did not read enough of Pippenger to answer your question; it was published in a peer reviewed journal, cited by Okasaki, and I read enough of it to determine that his claims are relevant to this question, but not enough to understand the proof. The immediate takeaway that I get for real-world consequences is that it is trivial to convert an O(*n*) impure algorithm into an O(*n* log *n*) pure one, by simply simulating modifiable memory using a balanced binary tree. There are problems that cannot do better than that; I don't know if they're purely theoretical.
Brian Campbell
The Pippenger result makes two important assumptions that limit its scope: it considers "on-line" or "reactive" computations (not the usual model of a computation mapping finite inputs to a single output) and "symbolic" computations where inputs are sequences of atoms, which can be tested only for equality (i.e., the interpretation of the input is extremely primitive).
Chris Conway
Very good answer; I would like to add that for purely functional languages there is no universally agreed-upon model for computing complexity, while in the impure world the unit-cost RAM machine is relatively standard (which makes comparing things more difficult). Also note that the upper bound of a lg(n) difference between pure and impure can be explained intuitively very easily by looking at an implementation of arrays in a pure language: it costs lg(n) per operation (and you get history).
The purely functional language Clean (http://clean.cs.ru.nl) allows destructive updates through uniqueness types. Can anyone comment on whether or not uniqueness types work around the assumptions built into the results cited?
Keith
I've never programmed in Clean, but I guess that uniqueness types are similar to the Haskell IO monad, which allows writing imperative code in a referentially transparent way, so they are not affected by these bounds. For instance, in Haskell you can write: var <- readLn; var2 <- readLn, and the two calls to readLn give different results because of a different (implicit) state parameter. In Clean you can do something similar, and you could have an imperative array library like Haskell's. About being "referentially transparent", see (for Haskell) "Tackling the awkward squad..." by Simon Peyton Jones.
Blaisorblade
A: 

With a fixed upper bound on memory usage, there should be no difference.

Proof sketch: Given a fixed upper bound on memory usage, one should be able to write a virtual machine which executes an imperative instruction set with the same asymptotic complexity as if you were actually executing on that machine. This is so because you can manage the mutable memory as a persistent data structure, giving O(log(n)) reads and writes; but with a fixed upper bound on memory usage, you have a fixed amount of memory, causing these to decay to O(1). Thus the functional implementation can be the imperative version running in the functional implementation of the VM, and so they should both have the same asymptotic complexity.
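As a toy illustration of this argument (a Python sketch; the constant WORDS is arbitrary): if the simulated machine has a fixed number of cells, a purely functional store can be updated in O(WORDS) = O(1) time, so the functional VM matches the imperative machine's asymptotic complexity.

```python
# Fixed-size purely functional "store". Because WORDS is a constant,
# copying the whole store on each write is O(1) -- independent of how
# many operations the simulated program performs.
WORDS = 8  # arbitrary fixed memory size for the sketch

def store_new():
    return (0,) * WORDS

def store_read(store, addr):
    return store[addr]

def store_write(store, addr, value):
    # Returns a new tuple with one slot replaced; the old store is intact.
    return store[:addr] + (value,) + store[addr + 1:]
```

Of course, as the comment below points out, fixing the memory size up front is not how asymptotic analysis is usually done.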

Brian
A fixed upper bound on memory usage isn't how people analyze these sorts of things; you assume an arbitrarily large, but finite memory. When implementing an algorithm, I'm interested in how it will scale from the simplest input up to any arbitrary input size. If you put a fixed upper bound on memory usage, why don't you also put a fixed upper bound on how long you'll allow the computation to take, and say that everything is O(1)?
Brian Campbell
@Brian Campbell: That is true. I'm just suggesting that if you wanted to, you could ignore the difference in the constant factor in many cases in practice. One would still need to be mindful of the difference when compromising between memory and time to make sure that using m times more memory decreases your runtime by at least a factor of log(m).
Brian
+6  A: 

There are indeed several algorithms and data structures for which no asymptotically efficient purely functional solution (i.e. one implementable in pure lambda calculus) is known, even with laziness.

  • The aforementioned union-find
  • Hash tables
  • Arrays
  • Some graph algorithms
  • ...

However, this assumes that in "imperative" languages access to memory is O(1), whereas in theory that can't be so asymptotically: access to memory within a huge dataset is always O(log n), and O(log n) memory access can be emulated in a functional language.

Also, we must remember that actually all modern functional languages provide mutable data, and Haskell even provides it without sacrificing purity (the ST monad).

jkff
If the dataset fits in physical memory, access to it is O(1) in that it is possible to find an absolute upper bound on the amount of time to read any item. If the dataset does not, you're talking about I/O and that will be the dominant factor by far, however the program is written.
Donal Fellows
Well, of course I'm talking about O(log n) operations of access to external memory. However, in any case I was talking bs: external memory can also be O(1)-addressable...
jkff