views: 1682
answers: 7

I've recently been learning about functional programming (specifically Haskell, but I've gone through tutorials on Lisp and Erlang as well). While I found the concepts very enlightening, I still don't see the practical side of the "no side effects" concept. What are the practical advantages of it? I'm trying to think in the functional mindset, but there are some situations that just seem overly complex without the ability to save state in an easy way (I don't consider Haskell's monads 'easy'). Of course, online tutorials and papers aren't going to help me learn this stuff as well as, say, a college education would, so I may just not be understanding it entirely.

Thus my question would be: Is it worth continuing to learn Haskell (or another purely functional language) in-depth? Is functional/stateless programming actually more productive than procedural? I've heard a lot of people say "yes" to that question but I haven't heard any valid reasons for it. Is it likely that I will continue to use Haskell later, or should I learn it only for the understanding?

Thanks in advance.

EDIT: I just want to add something. At this point I don't care about performance or concurrency so much as productivity. So I'm mainly asking whether I will be more productive in a functional language (after learning it thoroughly, of course) than in a procedural/object-oriented/whatever language.

+3  A: 

Without state, it is very easy to automatically parallelize your code (as CPUs are made with more and more cores this is very important).

Zifre
Yes, I've definitely looked into that. Erlang's concurrency model in particular is very intriguing. However, at this point I don't really care about concurrency as much as productivity. Is there a productivity bonus from programming without state?
musicfreak
@musicfreak, no there isn't a productivity bonus. But as a note, modern FP languages still let you use state if you really need it.
Unknown
Really? Can you give an example of state in a functional language, just so I can see how it's done?
musicfreak
Check out the State Monad in Haskell - http://book.realworldhaskell.org/read/monads.html#x_NZ
rampion
@Unknown: I disagree. Programming without state reduces the occurrence of bugs that are due to unforeseen/unintended interactions of different components. It also encourages better design (more reusability, separation of mechanism and policy, and that sort of stuff). It's not always appropriate for the task at hand, but in some cases it really shines.
Artelius
+27  A: 

You can break your FP virginity on Functional Programming in a Nutshell.

There are lots of advantages to stateless programming, not least of which is dramatically simpler multithreaded and concurrent code. To put it bluntly, mutable state is the enemy of multithreaded code. If values are immutable by default, programmers don't need to worry about one thread mutating the value of shared state between two threads, so that eliminates a whole class of multithreading bugs related to race conditions. Since there are no race conditions, there's no reason to use locks either, so immutability eliminates another whole class of bugs related to deadlocks as well.
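To make the point concrete, here is a tiny Haskell sketch (names and numbers are illustrative): two threads share an immutable list, and no lock is needed because neither thread can change it. The MVars exist only to hand each thread's result back to main.

```haskell
import Control.Concurrent

-- Two threads read the same immutable list. Nothing can mutate xs,
-- so sharing it is race-free by construction; the MVars just carry
-- each thread's result back to the main thread.
main :: IO ()
main = do
    let xs = [1 .. 1000] :: [Int]  -- immutable shared value
    r1 <- newEmptyMVar
    r2 <- newEmptyMVar
    _ <- forkIO (putMVar r1 (sum xs))     -- thread 1 reads xs
    _ <- forkIO (putMVar r2 (length xs))  -- thread 2 reads xs
    s <- takeMVar r1
    n <- takeMVar r2
    print (s, n)  -- (500500,1000)
```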

That's the big reason why functional programming matters, and probably the best one for jumping on the functional programming train. There are also lots of other benefits, including simplified debugging (i.e. functions are pure and do not mutate state in other parts of an application), more terse and expressive code, less boilerplate code compared to languages which are heavily dependent on design patterns, and the compiler can more aggressively optimize your code.

Juliet
Nicely said and to the point!
Chuck Conway
I second this! I believe functional programming will be used much more widely in the future because of its suitability to parallel programming.
Ray Hidayat
@Ray: I would also add distributed programming!
Anton Tykhyy
:S gross, gross
bobobobo
+2  A: 

One advantage of stateless functions is that they permit precalculation or caching of the function's return values. Even some C compilers allow you to explicitly mark functions as stateless (GCC's `__attribute__((const))`, for example) so they can be optimised more aggressively. As many others have noted, stateless functions are much easier to parallelise.
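Because such a function's result depends only on its arguments, a cache of its results can never go stale. The classic Haskell sketch of this is lazy-list memoization of Fibonacci (purely illustrative):

```haskell
-- fib is pure, so caching its results is always safe: each fib k is
-- computed at most once, then shared via the lazy list fibs.
fibs :: [Integer]
fibs = map fib [0 ..]

fib :: Int -> Integer
fib 0 = 0
fib 1 = 1
fib n = fibs !! (n - 1) + fibs !! (n - 2)

main :: IO ()
main = print (fib 50)  -- 12586269025, despite the naive-looking recursion
```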

But efficiency is not the only concern. A pure function is easier to test and debug since anything that affects it is explicitly stated. And when programming in a functional language, one gets in the habit of making as few functions "dirty" (with IO, etc.) as possible. Separating out the stateful stuff this way is a good way to design programs, even in not-so-functional languages.

Functional languages can take a while to "get", and it's difficult to explain to someone who hasn't gone through that process. But most people who persist long enough finally realise that the fuss is worth it, even if they don't end up using functional languages much.

Artelius
That first part is a really interesting point, I'd never thought about that before. Thanks!
musicfreak
@musicfreak: read about memoizing.
Anton Tykhyy
Suppose you have `sin(PI/3)` in your code, where PI is a constant; the compiler could evaluate this function *at compile time* and embed the result in the generated code.
Artelius
+10  A: 

The more pieces of your program are stateless, the more ways there are to put pieces together without having anything break. The power of the stateless paradigm lies not in statelessness (or purity) per se, but the ability it gives you to write powerful, reusable functions and combine them.
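As a small illustration of that combinability (the function name here is mine, not from the paper): three stock stateless pieces snap together into a new function with no glue code, and each piece stays reusable elsewhere.

```haskell
-- Each piece (filter, map, sum) is stateless, so any composition of
-- them is a pure, reusable function too.
sumOfSquaresOfEvens :: [Int] -> Int
sumOfSquaresOfEvens = sum . map (^ 2) . filter even

main :: IO ()
main = print (sumOfSquaresOfEvens [1 .. 10])  -- 4+16+36+64+100 = 220
```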

You can find a good tutorial with lots of examples in John Hughes's paper Why Functional Programming Matters.

You will be gobs more productive, especially if you pick a functional language that also has algebraic data types and pattern matching (Caml, SML, Haskell).

Norman Ramsey
+7  A: 

Many of the other answers have focused on the performance (parallelism) side of functional programming, which I believe is very important. However, you did specifically ask about productivity, as in, can you program the same thing faster in a functional paradigm than in an imperative paradigm.

I actually find (from personal experience) that programming in F# matches the way I think better, and so it's easier. I think that's the biggest difference. I've programmed in both F# and C#, and there's a lot less "fighting the language" in F#, which I love. You don't have to think about the details in F#. Here's a few examples of what I've found I really enjoy.

For example, even though F# is statically typed (all types are resolved at compile time), type inference figures out what types you have, so you don't have to spell them out. And if it can't figure them out, it automatically makes your function/class/whatever generic. So you never have to write any generics yourself; it's all automatic. I find that means I'm spending more time thinking about the problem and less about how to implement it. In fact, whenever I come back to C#, I find I really miss this type inference; you never realise how distracting it is until you don't need to do it anymore.

Also in F#, instead of writing loops, you call functions. It's a subtle change, but significant, because you don't have to think about the loop construct anymore. For example, here's a piece of code which would go through and match something (I can't remember what; it's from a Project Euler puzzle):

let matchingFactors =
    factors
    |> Seq.filter (fun x -> largestPalindrome % x = 0)
    |> Seq.map (fun x -> (x, largestPalindrome / x))

I realise that doing a filter then a map (that's a conversion of each element) in C# would be quite simple, but you have to think at a lower level. In particular, you'd have to write the loop itself and have your own explicit if statement, and those kinds of things. Since learning F#, I've found it easier to code in the functional way, where if you want to filter, you write "filter", and if you want to map, you write "map", instead of implementing each of the details.

I also love the |> operator, which I think separates F# from OCaml, and possibly other functional languages. It's the pipe operator; it lets you "pipe" the output of one expression into the input of another expression. It makes the code follow the way I think. Like in the code snippet above, that's saying, "take the factors sequence, filter it, then map it." It's a very high level of thinking, which you don't get in an imperative programming language because you're so busy writing the loop and if statements. It's the one thing I miss the most whenever I go into another language.
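For comparison, here's the same pipeline shape in Haskell (a sketch with made-up inputs, since the original values aren't given). Written with ordinary function application, the steps read right to left rather than left to right:

```haskell
-- Mirrors the F# snippet above: keep the factors that divide n evenly,
-- then pair each with its cofactor. The inputs 36 and [1..10] are
-- made up purely for illustration.
matchingFactors :: Int -> [Int] -> [(Int, Int)]
matchingFactors n factors =
    map (\x -> (x, n `div` x)) (filter (\x -> n `mod` x == 0) factors)

main :: IO ()
main = print (matchingFactors 36 [1 .. 10])
-- [(1,36),(2,18),(3,12),(4,9),(6,6),(9,4)]
```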

So just in general, even though I can program in both C# and F#, I find it easier to use F# because you can think at a higher level. I would argue that because the smaller details are removed from functional programming (in F# at least), that I am more productive.

Edit: I saw in one of the comments that you asked for an example of "state" in a functional programming language. F# can be written imperatively, so here's a direct example of how you can have mutable state in F#:

let mutable x = 5
for i in 1..10 do
    x <- x + i
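And since the original question was about Haskell: the equivalent there goes through the State monad, as rampion's link describes. A minimal sketch (names are just for illustration; this needs the mtl package's Control.Monad.State):

```haskell
import Control.Monad.State

-- tick reads the current state, bumps it, and returns the old value.
-- The Int state is threaded through invisibly by the State monad.
tick :: State Int Int
tick = do
    n <- get
    put (n + 1)
    return n

main :: IO ()
main = print (runState (do a <- tick
                           b <- tick
                           c <- tick
                           return [a, b, c])
                       0)
-- ([0,1,2],3): three results, plus the final state 3
```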
Ray Hidayat
I agree with your post generally, but |> has nothing to do with functional programming per se. Actually, `a |> f` is just syntactic sugar for `f a`, so `a |> b p1 p2` means `b p1 p2 a`. Couple this with currying and left-associativity and you've got it.
Anton Tykhyy
True, I should acknowledge that probably a lot of my positive experience with F# has more to do with F# than it does with functional programming. But still, there is a strong correlation between the two, and even though things like type inference and |> aren't functional programming per se, certainly I would claim they "go with the territory." At least in general.
Ray Hidayat
|> is just another higher-order infix function, in this case a function-application operator. Defining your own higher-order, infix operators is *definitely* a part of functional programming (unless you're a Schemer). Haskell has its $ which is the same except information in the pipeline flows right to left.
Norman Ramsey
+5  A: 

Consider all the difficult bugs you've spent a long time debugging.

Now, how many of those bugs were due to "unintended interactions" between two separate components of a program? (Nearly all threading bugs have this form: races involving writing shared data, deadlocks, ... Additionally, it is common to find libraries that have some unexpected effect on global state, or read/write the registry/environment, etc.) I would posit that at least 1 in 3 'hard bugs' fall into this category.

Now if you switch to stateless/immutable/pure programming, all those bugs go away. You are presented with some new challenges instead (e.g. when you do want different modules to interact with the environment), but in a language like Haskell, those interactions get explicitly reified into the type system, which means you can just look at the type of a function and reason about the type of interactions it can have with the rest of the program.
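Concretely, that reification looks like this in Haskell (a toy sketch): `IO` in a signature is the type system telling you the function can interact with the outside world, and its absence is a guarantee that it can't.

```haskell
-- A caller can see from the types alone which function can have effects.
double :: Int -> Int            -- pure: cannot touch the outside world
double x = 2 * x

reportDouble :: Int -> IO ()    -- the IO in the type advertises the effect
reportDouble x = putStrLn ("double = " ++ show (double x))

main :: IO ()
main = reportDouble 21  -- prints "double = 42"
```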

That's the big win from 'immutability' IMO. In an ideal world, we'd all design terrific APIs and even when things were mutable, effects would be local and well-documented and 'unexpected' interactions would be kept to a minimum. In the real world, there are lots of APIs that interact with global state in myriad ways, and these are the source of the most pernicious bugs. Aspiring to statelessness is aspiring to be rid of unintended/implicit/behind-the-scenes interactions among components.

Brian
Someone once said that overwriting a mutable value means that you are explicitly garbage collecting/freeing the previous value. In some cases other parts of the program weren't done using that value. When values cannot be mutated, this class of bugs also goes away.
shapr
+1  A: 

I wrote a post on just this subject a while back: On The Importance of Purity.

naasking
Great post, +1.
musicfreak