views: 759
answers: 11

I find the term rather pejorative. Hence, I am flabbergasted by these two sentences from Wikipedia:

Imperative programming is known for employing side effects to make programs function. Functional programming in turn is known for its minimization of side effects. [1]

Since I am somewhat math-biased, the latter sounds excellent. What are the arguments for side effects? Do they imply a loss of control or an acceptance of uncertainty? Are they a good thing?

A: 

Well, it's a lot easier and more intuitive to program with side effects, for one thing. Functional programming is difficult for a lot of people to wrap their heads around; find someone who has taught or TAed a class in OCaml and you'll probably hear all kinds of stories about people's abject failure to comprehend it. And what good is beautifully designed, wonderfully side-effect-free functional code if nobody can actually follow it? It makes hiring people to get your software done rather difficult.

That's one side of the argument, at least. There are any number of reasons lots of people are going to have to learn about functional-style, side-effect-free code. Multithreading comes to mind.

Promit
+5  A: 

The jury is still out. And the trial has been going on since the dawn of computing, so don't expect a verdict any time soon.

anon
+6  A: 

Pro:

  • In the end, side effects are what you want to accomplish.
  • Side effects are natural for code that interacts with the outside world.
  • They make many algorithms simple.
  • To avoid side effects, you must implement loops as recursion, so your language implementation needs tail-call optimization (see the sketch below).

Con:

  • Pure code is easy to parallelize.
  • Side effects can make code complicated.
  • Pure code is easier to prove correct.

Take Haskell, for example: at first it seems very elegant, but then you need to start interacting with the outside world and it's not so much fun anymore. (Haskell passes state around as a function parameter and hides it in things called monads, which let you write in an imperative-looking style.)
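
To make the tail-recursion point concrete, here is a minimal Haskell sketch (sumTo and go are names invented for illustration): the accumulator parameter plays the role of the mutable loop variable, and because the recursive call is in tail position, an implementation with tail-call optimization compiles it to a jump instead of growing the stack.

    sumTo :: Int -> Int
    sumTo n = go 0 1
      where
        -- acc is the loop state; each tail call is "the next iteration"
        go acc i
          | i > n     = acc
          | otherwise = go (acc + i) (i + 1)

    main :: IO ()
    main = print (sumTo 100)  -- prints 5050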

abababa22
Your remark about tail recursion appears to be pro-side effects, not a con.
Stephan202
Side effects make algorithms simple because they are very powerful, and powerful tools should be used sparingly.
Anton Tykhyy
Stephan202: oops you are right, I edited the answer :)
abababa22
For loops, the compiler only needs to optimize tail recursion, not tail calls in general, and that is quite easy to do.
Ingo
"Less modular" vs. "more modular" is clearer and more specific than "complicated", I think.
Tchalvak
+13  A: 

In von Neumann machines, side effects are the things that make the machine work. Essentially, no matter how you write your program, it will need to perform side effects in order to work (viewed at a low level).

Programming without side effects means abstracting side effects away so that you can think about the problem in general (without worrying about the current state of the machine) and reduce dependencies across the different modules of a program (be they procedures, classes, or whatever else). By doing so, you make your program more reusable, since its modules do not depend on a particular state to work.
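
As a hedged Haskell sketch of that point (the names nextLabelIO and nextLabel are invented here): the first version depends on whatever was last written to an ambient counter, so callers must know the machine's history, while the second takes the state as an explicit input and returns the updated state, so it can be reused anywhere.

    import Data.IORef

    -- Depends on ambient state: the result varies with whatever
    -- was last written to the counter.
    nextLabelIO :: IORef Int -> IO String
    nextLabelIO ref = do
      n <- readIORef ref
      writeIORef ref (n + 1)
      return ("item-" ++ show n)

    -- State as explicit input and output: nothing outside the call matters.
    nextLabel :: Int -> (String, Int)
    nextLabel n = ("item-" ++ show n, n + 1)

    main :: IO ()
    main = do
      ref <- newIORef 0
      l1 <- nextLabelIO ref
      l2 <- nextLabelIO ref
      print (l1, l2)      -- ("item-0","item-1"): answers depend on history
      print (nextLabel 0) -- ("item-0",1): the same answer every time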

So yes, side-effect-free programs are a good thing, but side effects are simply inevitable at some level (so they cannot be considered "bad").

Mehrdad Afshari
+5  A: 

Without side-effects, you simply can't do certain things. One example is I/O, since making a message appear on the screen is, by definition, a side-effect. This is why it's a goal of functional programming to minimize side-effects, rather than eliminate them entirely.

Setting that aside, there are often instances where minimizing side-effects conflicts with other goals, like speed or memory efficiency. Other times, there's already a conceptual model of your problem that lines up well with the idea of mutating state, and fighting against that existing model can be wasted energy and effort.

Pillsy
+2  A: 

Side effects are essential for a significant part of most applications. Pure functions, though, have a lot of advantages. They are easier to reason about because you don't have to worry about pre- and post-conditions. Since they don't change state, they are easier to parallelize, which will become very important as processor counts go up.
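
To make the parallelization claim concrete, a hedged sketch using parMap from Haskell's parallel package (the function expensive is invented here): because the work is pure, the list elements can be evaluated on separate cores with no locks and no risk of changing the result.

    import Control.Parallel.Strategies (parMap, rdeepseq)

    -- Pure, so evaluating it for different inputs in parallel
    -- cannot interfere: there is no shared state to protect.
    expensive :: Int -> Int
    expensive x = sum [1 .. x * 1000]

    main :: IO ()
    main = print (sum (parMap rdeepseq expensive [1 .. 64]))

Build with -threaded and run with +RTS -N to spread the work across cores.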

Side effects are inevitable. And they should be used whenever they are a better choice than a more complicated but pure solution. The same goes for pure functions. Sometimes a problem is better approached with a functional solution.

It's all good =) You should use different paradigms according to the problem you're solving.

bigmonachus
+14  A: 

Side effects are a necessary evil, and one should seek to minimize/localize them.

Other answers in this thread say effect-free programming is sometimes not as intuitive, but I think that what people consider "intuitive" is largely a result of their prior experience, and most people's experience has a heavy imperative bias. Mainstream tools are becoming more and more functional each day, because people are discovering that effect-free programming leads to fewer bugs (though admittedly sometimes a new or different class of bugs), since there is less opportunity for separate components to interact via effects.

Almost no one has mentioned performance: effect-free programming usually performs worse than effectful programming, since computers are von Neumann machines designed to work well with effects (rather than designed to work well with lambdas). Now that we're in the midst of the multi-core revolution, this may change the game, as people discover they need to take advantage of cores to gain performance; and whereas parallelization sometimes takes a rocket scientist to get right effectfully, it can be easy to get right when you're effect-free.

Brian
Effect-free programming can be as efficient as effectful programming (see mlton.org or caml.inria.fr), but it's true that compiler writers must work harder to make it so.
Norman Ramsey
-1 for the "bad performance" myth.
Ingo
A: 

Without side effects, you can't perform I/O operations, so you can't make a useful application.

hasen j
+18  A: 

Every so often I see a question on SO which forces me to spend half an hour editing a really bad Wikipedia article. The article is now only moderately bad. In the part that bears on your question, I wrote as follows:

In computer science, a function or expression is said to have a side effect if, in addition to producing a value, it also modifies some state or has an observable interaction with calling functions or the outside world. For example, a function might modify a global or a static variable, modify one of its arguments, raise an exception, write data to a display or file, read data, call other side-effecting functions, or launch missiles. In the presence of side effects, a program's behavior depends on past history; that is, the order of evaluation matters. Because understanding an effectful program requires thinking about all possible histories, side effects often make a program harder to understand.

Side effects are essential to enable a program to interact with the outside world (people, filesystems, other computers on networks). But the degree to which side effects are used depends on the programming paradigm. Imperative programming is known for uncontrolled, promiscuous use of side effects. In functional programming, side effects are rarely used. Functional languages such as Standard ML and Scheme do not restrict side effects, but it is customary for programmers to avoid them. The functional language Haskell restricts side effects with a static type system; only a function that produces a result of IO type can have side effects.
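
As a hedged illustration of that last sentence (double and greet are names invented here): the IO in a Haskell type records that running the value may touch the outside world, and the type checker keeps pure and effectful code from being confused.

    -- A pure function: the type promises no side effects.
    double :: Int -> Int
    double x = x * 2

    -- An effectful action: the IO in its type records that running
    -- it may interact with the outside world.
    greet :: String -> IO ()
    greet name = putStrLn ("Hello, " ++ name)

    main :: IO ()
    main = greet (show (double 21))
    -- double (greet "x") would be rejected: IO () is not an Int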

Norman Ramsey
+1 Many thanks for the examples.
Masi
+1: for improving Wikipedia.
Masi
+2  A: 

It is true, as some people here mention, that without side effects one cannot make a useful application. But from that it does not follow that using side effects in an uncontrolled way is a good thing.

Consider the following analogy: a processor with an instruction set that had no branch instructions would be absolutely worthless. However, it does not follow that programmers must use gotos all the time. On the contrary, it turned out that structured programming, and later OOP languages like Java, could do without even having a goto statement, and nobody missed it.

(To be sure, there is still goto in Java; it's now called break, continue, and throw.)

Ingo
if, else, while, and for are just as much gotos as break, continue, and throw are.
hasen j
+1 for analysing replies
Masi
+1  A: 

Side effects are just like any other weapon. They are unquestionably useful, and potentially very dangerous when improperly handled.

Like weapons, side effects come in all different kinds, with all different degrees of lethality.

In C++, side effects are totally unrestricted, thanks to pointers. If a variable is declared "private", you can still access or change it using pointer tricks. You can even alter variables that aren't in scope, such as the parameters and locals of the calling function. With a little help from the OS (mmap), you can even modify your program's machine code at runtime! When you write in a language like C++, you are elevated to the rank of Bit God, master of all memory in your process. All the optimizations the compiler makes to your code are made on the assumption that you don't abuse your powers.

In Java, your abilities are more restricted. All variables in scope are under your control, including variables shared by different threads, but you must always adhere to the type system. Still, thanks to a subset of the OS being at your disposal and the existence of static fields, your code may have non-local effects. If a separate thread somehow closes System.out, it will seem like magic. And it will be magic: side-effectful magic.

Haskell (despite the propaganda about being pure) has the IO monad, which requires you to register all your side effects with the type system. Wrapping your code in the IO monad is like the three-day waiting period for handguns: you can still blow your own foot off, but not until you've OKed it with the government. There's also unsafePerformIO and its ilk, which are Haskell IO's black market, giving you side effects with "no questions asked".
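
For the record, that black market is one import away; a hedged sketch (the file name secrets.txt is just an illustration):

    import System.IO.Unsafe (unsafePerformIO)

    -- The type claims this is a plain String, but evaluating it
    -- actually reads a file. The compiler may share, reorder, or
    -- skip the read, so the effect's timing is unpredictable;
    -- that is exactly why it is called "unsafe".
    contraband :: String
    contraband = unsafePerformIO (readFile "secrets.txt")

    main :: IO ()
    main = putStrLn contraband  -- crashes at runtime if the file is absent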

Miranda, the predecessor of Haskell, is a pure functional language created before monads became popular. Miranda (as far as I've learned... if I'm wrong, substitute the lambda calculus) has no IO primitives at all. The only IO done is compiling the program (the input) and running it and printing the result (the output). Here you have full purity. The order of execution is completely irrelevant. All "effects" are local to the functions that declare them, meaning two disjoint pieces of code can never affect each other. It's a utopia (for mathematicians). Or, equivalently, a dystopia. It's boring. Nothing ever happens. You can't write a server in it. You can't write an OS in it. You can't write SNAKE or Tetris in it. Everyone just kind of sits around looking mathematical.

Tac-Tics