When would you NOT want to use functional programming? What is it not so good at?

I am more looking for disadvantages of the paradigm as a whole, not things like "not widely used", or "no good debugger available". Those answers may be correct as of now, but they deal with FP being a new concept (an unavoidable issue) and not any inherent qualities.

+13  A: 

If your language does not provide good mechanisms to plumb state/exception behavior through your program (e.g. syntax sugars for monadic binds) then any task involving state/exceptions becomes a chore. (Even with these sugars, some people might find it harder to deal with state/exceptions in FP.)
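As a minimal sketch of the "syntax sugar for monadic binds" point (standard Prelude only, hypothetical function names): Haskell's do-notation desugars to the monadic bind `>>=`, so failure plumbing that would otherwise be written by hand at every step becomes invisible.

```haskell
-- Chaining computations that may fail. Without do-notation, every step
-- would have to pattern-match on Nothing and thread it through manually.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- The do-block desugars to (>>=); a Nothing at any step
-- short-circuits the rest of the computation.
calc :: Int -> Int -> Int -> Maybe Int
calc a b c = do
  x <- safeDiv a b
  safeDiv x c
```

In a language without this sugar, `calc` would be a pyramid of explicit case analyses, which is exactly the chore being described.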

Functional idioms often do lots of inversion-of-control or laziness, which often has a negative impact on debugging (using a debugger). (This is somewhat offset by FP being much less error-prone due to immutability/referential transparency, which means you'll need to debug less often.)

Brian
"immutability/referential transparency, which means you'll need to debug less often" ... and since everything is built of little independent functions, you can just test those directly; if each function is (a) a correct little function or (b) a correct composition of two or more correct little functions then wham! your program is correct.
Jared Updike
A: 

I think they're less intuitive than imperative languages, but perhaps that's just the way I learned to program. Regardless, I think there are a lot of people who think the same way, and as a result there's not a very large talent pool for functional programming.

Kaleb Brasee
+2  A: 

Here are some problems I've run into:

  1. Most people find functional programming difficult to understand. This means it will probably be harder for you to write functional code, and it will almost certainly be harder for someone else to pick it up.
  2. Functional programming languages are usually slower than a language like C would be. This is becoming less of an issue over time (because computers are getting faster, and compilers are getting smarter).
  3. Not being as widespread as their imperative counterparts, it can be difficult to find libraries and examples for common programming problems. (For example, it's almost always easier to find something for Python than it is for Haskell.)
  4. There's a lack of tools, particularly for debugging. It's definitely not as easy as opening up Visual Studio for C# or Eclipse for Java.
Caleb
Do you have any figures or references to support number 2? Also, for number 4, F# will be a first-class, fully supported language in Visual Studio 2010.
Russ Cam
I think bullets 2-4 are not intrinsic to functional programming, but more artifacts of history/culture/etc. (That is, though they may be true, they are not true 'because of FP', I think.)
Brian
Re 1: I don't think that's true. Excel is a functional programming language, and I haven't observed it being harder to understand than, say, C, BASIC, Pascal or Python. In fact, it's probably the other way around.
Jörg W Mittag
Re 2: Languages cannot be slower (or faster) than other languages. Languages are just abstract rules; you cannot execute them. Only *implementations* can be slower or faster than other implementations, but then you are no longer talking about languages. In the end, to solve the same problem you need to take the same steps, therefore the performance is going to be the same. The Supero Haskell compiler, for instance, produces code that runs 10% faster than hand-optimized C code compiled by GCC. Well-implemented Scheme compilers produce code that runs between half as fast and twice as fast as GCC's output.
Jörg W Mittag
Slower? Functional programming language implementations tend to be on the faster end of the pool. Almost every compiler in existence tends to produce slower programs than C, so I don't see how that reflects on functional programming at all.
Chuck
Re 2 cont'd: the GHC Haskell compiler, ATS, Scala and O'Caml seem to consistently perform comparably to Java. The SISAL programming language was known to consistently perform within 20% of hand-optimized C and Fortran on single-processor machines and to outperform them on multiprocessor Cray supercomputers; in one case, an FFT routine written in SISAL and running on a Cray supercomputer outperformed the FFT routine shipped *by Cray* and hand-optimized by their engineers.
Jörg W Mittag
Re 4: I'm pretty sure anybody who ever used the Lisp Machine IDE in the 1990s would be amazed at how crappy Eclipse and Visual Studio *still* are, almost 20 years later. Anyway, this has nothing to do with functional programming. How good Visual Studio is is a feature of Visual Studio, not of imperative programming. In fact, the F# Visual Studio plugin has pretty much the exact same features as the C# and VB.NET plugins. And where functionality is missing, it has nothing to do with functional programming and everything to do with the amount of money Microsoft has allocated for F# vs. C#.
Jörg W Mittag
+5  A: 

Setting aside speed and adoption issues and addressing a more basic one: I've heard it put that with functional programming, it's very easy to add new functions for existing datatypes, but it's "hard" to add new datatypes. Consider:

(Written in SMLnj. Also, please excuse the somewhat contrived example.)

datatype Animal = Dog | Cat;

fun happyNoise(Dog) = "pant pant"
  | happyNoise(Cat) = "purrrr";

fun excitedNoise(Dog) = "bark!"
  | excitedNoise(Cat) = "meow!";

I can very quickly add the following:

fun angryNoise(Dog) = "grrrrrr"
  | angryNoise(Cat) = "hisssss";

However, if I add a new type to Animal, I have to go through each function to add support for it:

datatype Animal = Dog | Cat | Chicken;

fun happyNoise(Dog) = "pant pant"
  | happyNoise(Cat) = "purrrr"
  | happyNoise(Chicken) = "cluck cluck";

fun excitedNoise(Dog) = "bark!"
  | excitedNoise(Cat) = "meow!"
  | excitedNoise(Chicken) = "cock-a-doodle-doo!";

fun angryNoise(Dog) = "grrrrrr"
  | angryNoise(Cat) = "hisssss"
  | angryNoise(Chicken) = "squaaaawk!";

Notice, though, that the exact opposite is true for object-oriented languages. It's very easy to add a new subclass to an abstract class, but it can be tedious if you want to add a new abstract method to the abstract class/interface for all subclasses to implement.

Ben Torell
If you implemented these as subclasses of an abstract class in an OO language, you'd have to write all those new functions as well. The only difference is how you organize the functions (by type or by behavior).
Chuck
@Chuck, True, fair enough. The idea I was trying to get at is that in OOP, while you still have to write the implementations of your methods, it's all done internal to the class. By adding a new subclass, you don't have to modify any siblings or parents. In fact, decent IDEs will auto-fill a skeleton subclass for you when you create it with blank methods for you to implement. But if you add a new method to the superclass, it breaks all implementing classes. The reverse is true for functional. Point well-taken.
Ben Torell
This has been named the *Expression Problem* by none other than Philip Wadler.
Jörg W Mittag
Wadler calls this the expression problem: http://en.wikipedia.org/wiki/Expression_Problem
Jared Updike
What you have are algebraic datatypes: they are considered closed, not extensible! If you want extensibility, you need inheritance or typeclasses/existentials.
Dario
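To make the typeclass side of this trade-off concrete, here is a hedged sketch (plain Haskell, hypothetical names mirroring the SML example above): with a typeclass, adding a new type later is a self-contained instance and no existing code changes, while adding a new operation means touching the class and every instance.

```haskell
-- One class per "interface"; each noise is a method.
class Animal a where
  happyNoise   :: a -> String
  excitedNoise :: a -> String

data Dog = Dog
data Cat = Cat

instance Animal Dog where
  happyNoise _   = "pant pant"
  excitedNoise _ = "bark!"

instance Animal Cat where
  happyNoise _   = "purrrr"
  excitedNoise _ = "meow!"

-- Adding Chicken later is purely additive: a new type and a new
-- instance, with no edits to Dog, Cat, or the class itself.
data Chicken = Chicken

instance Animal Chicken where
  happyNoise _   = "cluck cluck"
  excitedNoise _ = "cock-a-doodle-doo!"
```

This is the mirror image of the closed `datatype Animal = Dog | Cat` version: the axis of cheap extension flips from "new functions" to "new types".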
+9  A: 

One big disadvantage to functional programming is that on a theoretical level, it doesn't match the hardware as well as most imperative languages. (This is the flip side of one of its obvious strengths, being able to express what you want done rather than how you want the computer to do it.)

For example, functional programming makes heavy use of recursion. This is fine in pure lambda calculus because mathematics' "stack" is unlimited. Of course, on real hardware, the stack is very much finite. Naively recursing over a large dataset can make your program go boom. Most functional languages optimize tail recursion so that this doesn't happen, but making an algorithm tail recursive can force you to do some rather unbeautiful code gymnastics (e.g., a tail-recursive map function creates a backwards list or has to build up a difference list, so it has to do extra work to get back to a normal mapped list in the correct order compared to the non-tail-recursive version).
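The map example above can be sketched in Haskell (a minimal illustration, not any library's actual implementation): the naive version is not tail recursive because `(:)` runs after the recursive call returns, while the accumulator version is tail recursive but builds the result backwards and pays for an extra `reverse`.

```haskell
-- Naive map: clear, but each pending (:) grows the call stack
-- (or, in a lazy language, builds up thunks).
mapNaive :: (a -> b) -> [a] -> [b]
mapNaive _ []     = []
mapNaive f (x:xs) = f x : mapNaive f xs

-- Tail-recursive map: the recursive call is the last thing done,
-- but the accumulator comes out backwards and must be reversed.
mapTail :: (a -> b) -> [a] -> [b]
mapTail f = go []
  where
    go acc []     = reverse acc          -- extra pass to restore order
    go acc (x:xs) = go (f x : acc) xs
```

Both produce the same result; the second trades clarity for a bounded stack, which is exactly the "code gymnastics" being described.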

(Thanks to Jared Updike for the difference list suggestion.)

Chuck
This highlights an interesting problem with FP: programming effectively in FP requires you to know certain tricks, especially for dealing with laziness. In your example, it is actually easy to keep your code tail recursive (using a strict left fold) and avoid having things blow up on you. You don't have to build the list backwards and reverse the return list. The trick is to use difference lists: http://en.wikipedia.org/wiki/Difference_list . A lot of these sorts of tricks are not so easy to figure out on your own. Luckily the Haskell community is super friendly (IRC channel, mailing lists).
Jared Updike
Thanks, Jared. Good info. In defense of my description, though, the OCaml standard library does it the way I said (stack-limited `map` and tail-recursive `rev_map`).
Chuck
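The difference-list trick mentioned in this thread can be sketched as follows (a minimal hand-rolled version; the names `DList`, `snocD`, and `toList` are illustrative, not from any particular library): a list is represented as a function `[a] -> [a]`, so appending at the end is O(1) function composition, and a left-to-right accumulator keeps the original order with no final `reverse`.

```haskell
-- A difference list is a function that prepends its contents
-- to whatever list it is given.
type DList a = [a] -> [a]

-- Append one element at the end in O(1) via composition.
snocD :: DList a -> a -> DList a
snocD d x = d . (x:)

-- Materialize by applying to the empty list.
toList :: DList a -> [a]
toList d = d []

-- A map whose accumulator grows at the end, preserving order.
mapDL :: (a -> b) -> [a] -> [b]
mapDL f = go id
  where
    go acc []     = toList acc
    go acc (x:xs) = go (acc `snocD` f x) xs
```

The design choice here is to pay with function composition instead of list reversal; the composed closure unwinds in the right order when finally applied to `[]`.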
+8  A: 

Philip Wadler wrote a paper about this (called Why No One Uses Functional Languages) and addressed the practical pitfalls stopping people from using FP languages:

Update: inaccessible old link for those with ACM access:

Jared Updike
please post the relevant text of the articles. :D
CrazyJugglerDrummer
@CrazyJugglerDrummer: I think that whole article is about this ;-)
Hynek -Pichi- Vychodil
I know, but I'd much rather be able to look at it somehow without downloading and opening it. Is that possible?
CrazyJugglerDrummer
Sorry about the inaccessible link. I would post the HTML of the text, but the PS/PDF is actually an image and I don't have OCR software on hand. I suppose I could post a PDF of it somewhere. Not sure why the ACM hides some of these older articles; don't they want to disseminate this information?
Jared Updike
Pavel Savara
+2  A: 

I just wanted to buzz in with an anecdote, because I'm learning Haskell right now as we speak. I'm learning Haskell because the idea of separating functions from actions appeals to me, and there are some really sexy theories about implicit parallelization made possible by isolating pure functions from impure ones.

I've been learning the fold family of functions for three days now. Fold seems to have a very simple purpose: taking a list and reducing it to a single value. Haskell implements foldl and foldr for this, and the two functions have massively different implementations. There is an alternate implementation of foldl, called foldl'. On top of this there are versions with slightly different signatures, foldr1 and foldl1, which take their initial value from the list itself, and a corresponding foldl1' for foldl1. As if all of this weren't mind-blowing enough, the combining functions that the folds take as arguments have two different signatures, only one variant works on infinite lists (the right fold), and only one of them runs in constant memory (as I understand it, the strict left fold, because it reduces each step immediately). Understanding why foldr can work on infinite lists requires at least a decent understanding of the language's lazy behavior and the minor detail that not all functions force the evaluation of their second argument.

The graphs online for these functions are confusing as hell for someone who never saw them in college. There is no perldoc equivalent; I can't find a single description of what any of the functions in the Haskell Prelude do. (The Prelude is a kind of standard library that comes preloaded with the core distribution.) My best resource is really a guy I've never met (Cale) who is helping me at a huge expense to his own time.

Oh, and fold doesn't have to reduce the list to a non-list scalar: the identity function on lists can be written foldr (:) [], as in foldr (:) [] [1,2,3,4], which highlights that you can accumulate to a list.
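A small sketch of the behaviors described above (standard Prelude plus foldl' from Data.List): foldr can produce output from an infinite list because (:) never forces its second argument, while foldl' forces its accumulator at each step and so runs in constant space.

```haskell
import Data.List (foldl')

-- foldr on an infinite list: works, because (:) is lazy in its tail,
-- so take can stop the fold after three elements.
firstFew :: [Int]
firstFew = take 3 (foldr (:) [] [1 ..])

-- foldl' with a strict accumulator: constant space over a big list.
total :: Int
total = foldl' (+) 0 [1 .. 1000000]
```

Swapping foldr for foldl in `firstFew` would loop forever, and swapping foldl' for foldl in `total` would build a million-thunk chain: exactly the distinctions that take days to untangle.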

/me goes back to reading.

Evan Carroll
+10  A: 
Norman Ramsey