views: 424
answers: 7

I've been contemplating programming language designs, and from the definition of Declarative Programming on Wikipedia:

This is in contrast to imperative programming, which requires a detailed description of the algorithm to be run.

and further down:

... Any style of programming that is not imperative. ...

It then goes on to express that functional languages, because they are not imperative, are declarative by their very nature.

However, this makes me wonder, are purely functional programming languages able to solve any algorithmic problem, or are the constraints based upon what functions are available in that language?

I'm mostly interested in general thoughts on the subject, although if specific examples can illustrate the point, I certainly welcome them.

+12  A: 

Yes, Haskell, Erlang, etc. are Turing complete languages. In principle, you don't need mutable state to solve a problem, since you can always create a new object instead of mutating the old one. Of course, Brainfuck is also Turing complete. In other words, just because an algorithm can be expressed in a functional language doesn't mean it's not horribly awkward.
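A minimal sketch of what "creating a new object instead of mutating the old one" looks like in Haskell (the Counter type and increment function are illustrative, not from any library):

```haskell
-- A record with a single field; there is no way to mutate it in place.
data Counter = Counter { count :: Int } deriving Show

-- "Updating" a Counter produces a brand-new value; the old one is untouched.
increment :: Counter -> Counter
increment c = c { count = count c + 1 }

main :: IO ()
main = do
  let c0 = Counter 0
      c1 = increment c0
  -- c0 is still 0 after the "update": no state was mutated.
  print (count c0, count c1)  -- (0,1)
```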

dsimcha
That's a very good point; this really boils down to Turing completeness.
NilObject
An interesting question to pose: which common algorithm is most elegantly expressed imperatively?
An obvious one would be summing the elements of an array. Of course it can be done in a functional language, but it would be horribly inefficient and unnecessarily complicated.
dsimcha
dsimcha, unless I'm horribly misunderstanding what you mean by 'summing the elements of an array', I would be inclined to believe that you have absolutely no idea what you are talking about. In Haskell: sumArray = foldl' (+) 0 (with foldl' from Data.List)
Alasdair
Fair enough, but that's just wrapping the ugliness. It's still probably doing a whole bunch of recursion under the hood, compared to a simple foreach loop in an imperative language. If you're going to allow high-level libraries, then the answer in any language could just be sum(array).
dsimcha
The code to sum the elements of an array in pretty much any functional language would be 'fold (+) a' or even just 'sum a'; I don't see what's so complicated about that. The machine code that is executed is exactly the same as in the imperative version. And, it can be automatically parallelized.
Jörg W Mittag
You've convinced me. Summing an array is easier in an imperative language when implemented in terms of the lowest level primitives (looping vs. recursion w/ progressively smaller arrays, what I had in mind), but when you consider higher level ways, the functional can admittedly be just as elegant.
dsimcha
Well if you're going to allow iterators with foreach, it'd only be fair to compare to pattern matching on lists, which would make the implementation of a sum or fold method quite easy. And, "just as elegant"? How does a foreach loop compare to "sum arr"?
MichaelGG
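For concreteness, the two styles debated in this thread (a strict fold versus explicit pattern matching on lists) written out as a complete Haskell sketch:

```haskell
import Data.List (foldl')

-- The fold-based sum from the comments, written out in full.
sumArray :: [Int] -> Int
sumArray = foldl' (+) 0

-- The same function via explicit pattern matching on lists.
sumArray' :: [Int] -> Int
sumArray' []     = 0
sumArray' (x:xs) = x + sumArray' xs

main :: IO ()
main = print (sumArray [1,2,3,4], sumArray' [1,2,3,4])  -- (10,10)
```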
+17  A: 

According to the Church–Turing thesis,

"the three computational processes (recursion, λ-calculus, and Turing machine) were shown to be equivalent,"

where the Turing machine can be read as "procedural" and the lambda calculus as "functional".

Steve B.
A: 

Using state monads you can program in an imperative style in Haskell.

So the assertion that Haskell is declarative by its very nature needs to be taken with a grain of salt. On the positive side, this makes it equivalent to imperative programming languages in a practical sense as well, one that doesn't completely ignore efficiency.
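A minimal sketch of the idea: a State monad threads a "mutable variable" through otherwise pure code. It is hand-rolled here so the example is self-contained; in practice you would import Control.Monad.State from the mtl or transformers package.

```haskell
-- A minimal State monad: a function from an initial state to
-- a result paired with the final state.
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State $ \s -> let (a, s') = g s in (f a, s')

instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  State f <*> State g = State $ \s ->
    let (h, s')  = f s
        (a, s'') = g s'
    in (h a, s'')

instance Monad (State s) where
  State g >>= k = State $ \s ->
    let (a, s') = g s in runState (k a) s'

-- Apply a function to the hidden state, like an assignment statement.
modify :: (s -> s) -> State s ()
modify f = State $ \s -> ((), f s)

-- Imperative-looking loop: "total += x" for each element,
-- yet everything stays purely functional under the hood.
sumSt :: [Int] -> Int
sumSt xs = snd (runState (mapM_ (\x -> modify (+ x)) xs) 0)

main :: IO ()
main = print (sumSt [1,2,3,4])  -- 10
```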

starblue
A: 

While I completely agree with the answer that invokes the Church–Turing thesis, it raises an interesting question. If I have a parallel computation problem (which is not algorithmic in the strict mathematical sense), such as a multiple producer/consumer queue or a network protocol between several machines, can it be adequately modeled by a Turing machine? It can be simulated, but in simulating it we lose the very parallelism that motivates the problem (because a simpler algorithm would then suffice on the Turing machine). So if we refuse to give up the parallelism inherent in the problem (and thus the reason we are interested in it), can we really remove the notion of state?

Purely functional programs can (in principle) be evaluated in any order, including in parallel. This doesn't necessarily happen in practice though. There is some research being done on specifying an evaluation strategy separately from the program. That way, you could write a program and later specify how you want it parallelized.
Jørgen Fogh
+1  A: 

The big difference with functional style programming is that it avoids mutable state. Where imperative programming will typically update variables, functional programming will define new, read-only values.

The main place where this will hit performance is with algorithms that use updatable arrays. An imperative implementation can update an array element in O(1) time, while the best a purely functional implementation can achieve is O(log N) (using a balanced tree).

Note that functional languages generally have some way to use updateable arrays with O(1) access time (e.g., Haskell provides this with its state transformer monad). However, this is arguably an imperative programming method... nothing wrong with that; you want to use the best tools for a particular job, after all.

The functional style of O(log N) incremental array update is not all bad, though, as functional-style algorithms seem to lend themselves well to parallelization.
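A sketch of that O(log N) purely functional "array", here represented as an Int-keyed balanced tree via Data.Map from the containers package (the values are illustrative). An update is O(log N) and leaves every older version intact:

```haskell
import qualified Data.Map.Strict as Map

-- A persistent "array": indices 0..3 mapped to characters.
arr0 :: Map.Map Int Char
arr0 = Map.fromList (zip [0..] "abcd")

-- O(log n) "update" at index 2; arr0 is untouched and still usable.
arr1 :: Map.Map Int Char
arr1 = Map.insert 2 'z' arr0

main :: IO ()
main = do
  print (Map.elems arr0)  -- "abcd": the original version survives
  print (Map.elems arr1)  -- "abzd": the updated version
```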

Rather than using the state transformer monad you can use the Data.Array.Diff module for functional arrays with an O(1) update operation. Under the hood it's implemented using a mutable array, but it provides a purely functional interface allowing it to be used in pure code.
Alasdair
+3  A: 

OK, so Church and Turing proved it is possible, but how do we actually do something?

Rewriting imperative code in pure functional style is an exercise I frequently assign to undergraduate students:

  • Each mutable variable becomes a function parameter
  • Loops are rewritten using recursion
  • Each goto is expressed as a function call with arguments

Sometimes what comes out is a mess, but often the results are surprisingly elegant. The only real trick is not to pass arguments that never change, but instead to let-bind them in the outer environment.
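As a sketch of that recipe applied to an imperative factorial loop (the helper name go is just a convention): each mutable variable becomes a parameter, and the loop becomes a tail call.

```haskell
-- Imperative original (pseudocode):
--   result = 1; i = n
--   while i > 0: result *= i; i -= 1
--   return result
-- Both mutable variables (result, i) become parameters of a helper.
factorial :: Integer -> Integer
factorial n = go 1 n
  where
    go result 0 = result                  -- loop exit: return the accumulator
    go result i = go (result * i) (i - 1) -- loop body: "update" via a new call

main :: IO ()
main = print (factorial 5)  -- 120
```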

Norman Ramsey
Great thoughts. Thanks for your input.
NilObject
A: 

I remember reading somewhere that there are problems which are provably harder when solved in a purely functional manner, but I can't seem to find the reference.

As noted above, the primary problem is array updates. While the compiler may use a mutable array under the hood in some conditions, it must be guaranteed that only one reference to the array exists in the entire program.

Not only is this a hard mathematical fact, it is also a problem in practice if you don't use impure constructs.

On a more subjective note, stating that all Turing complete languages are equivalent is only true in a narrow mathematical sense. Paul Graham explores the issue in Beating the Averages in the section "The Blub Paradox."

Formal results such as Turing-completeness may be provably correct, but they are not necessarily useful. The travelling salesman problem may be NP-complete, and yet salesmen travel all the time. It seems they don't feel the need to follow an "optimal" path, so the theorem is irrelevant.

NOTE: I am not trying to bash functional programming, since I really like it. It is just important to remember that it is not a panacea.

Jørgen Fogh