views: 203
answers: 2

Why do some languages differentiate between methods that return a value and methods that don't?

E.g., in Oracle's PL/SQL, the primary difference between a function and a procedure is that a function must return a value, while a procedure must not.

Likewise, for languages that don't make the distinction: why not?


EDIT: I have found a related question that might interest people reading this question:

+1  A: 

In a pure or effect-typed setting there is a world of difference, because obviously methods that "don't return anything" are only useful for their side effects.

This is analogous to the distinction between expressions and statements, which can declutter a language and eliminate a class of usually-mistaken programs (which, of course, is why C doesn't do it ;)).

To give one tiny example: when you distinguish clearly between expressions and statements, `if (x = 3)`, as opposed to `if (x == 3)`, is syntactically incorrect (using a statement where an expression was expected) rather than merely a type error (using an integer where a boolean was expected). This has the added benefit of disallowing `if (x = true)`, which a purely type-based rule would permit in a language where assignments are expressions taking the value of their right operand.
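As a minimal sketch of the pitfall in C, which treats assignment as an expression: the mistaken form compiles and becomes a silent logic bug rather than a syntax error.

```c
#include <stdio.h>

int main(void) {
    int x = 0;

    /* Mistake: this assigns 3 to x and then tests the assigned value.
       Since 3 is nonzero, the branch is always taken. */
    if (x = 3) {
        printf("oops: x is now %d\n", x);
    }

    /* The intended comparison. */
    if (x == 3) {
        printf("x equals 3\n");
    }
    return 0;
}
```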

In a language which encapsulates effects with monads, the important distinction becomes the one between:

  • functions that return `()`, which are pure and can only produce the single, uninformative value `()` (or diverge)
  • functions that return `IO ()` (or unit in some other monad), which have no "result" beyond their effects in the IO (or whichever) monad
Doug McClean
Great explanation; however, the example expressions seem to presume C.
RBarryYoung
C clearly distinguishes between expressions and statements. However, `x = foo` is an expression, evaluating to the value assigned. This makes certain idioms much more concise (e.g. fork, indefinite iteration; see the sketch below).
Novelocrat
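For illustration, the kind of idiom presumably meant here is the classic C input loop, where the assignment's value feeds straight into the loop test (a minimal sketch):

```c
#include <stdio.h>

int main(void) {
    int c;

    /* Read, store, and test the character in a single condition,
       which only works because assignment is an expression. */
    while ((c = getchar()) != EOF) {
        putchar(c);
    }
    return 0;
}
```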
According to its own definitions, it does. According to more common definitions, it doesn't. `x = foo` has a side effect (essentially it *is* a side effect; its "value" is defined for convenience just so that C can treat it as an expression). I don't understand your point about indefinite iteration; could you give an example? `while(true) { whatever }` seems concise enough to me, and I can't see an obvious way to do better by treating assignments as expressions. Also, C's "distinction" breaks down because (AFAIK) any expression can be used as a statement, which is not the case in pickier languages.
Doug McClean
+18  A: 

Because in the original conceptions of Computer Science theory and practice, Functions and Subroutines had virtually nothing to do with each other.

FORTRAN is usually credited as the first language that implemented both of these and demonstrated the distinction. (Early LISP played a somewhat opposing role in this as well, but it had little impact outside of academia.)

Following from the traditions of mathematics (of which CS was still a part in the 1960s), functions were seen only as the encapsulation of parameterized mathematical calculations, solely intended to return a value into a larger expression. That you could call one "bare" (F = AZIMUTH(SECONDS)) was merely a trivial use case.

Subroutines, on the other hand, were seen as a way to name a group of statements meant to have some effect. Parameters were a huge boost to their usability, and the only reason they were allowed to return modified parameter values was so that they could report their status without having to rely on global variables.
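In C terms, that status-through-parameters pattern looks something like the following; a hypothetical sketch, not actual FORTRAN:

```c
#include <stdio.h>

/* Subroutine-style: the result and the status both come back
   through parameters, with no reliance on global variables. */
void divide(int a, int b, int *quotient, int *status) {
    if (b == 0) {
        *status = 1;      /* failure reported via a parameter */
        return;
    }
    *quotient = a / b;
    *status = 0;          /* success */
}

int main(void) {
    int q, status;
    divide(10, 2, &q, &status);
    if (status == 0)
        printf("quotient: %d\n", q);
    return 0;
}
```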

So, they really had no conceptual connection, other than encapsulation and parameters.

The real question is: "How did so many developers come to see them as the same?"

And the answer to that is C.

When K&R originally designed their high-level, macro-assembler-style language for the PDP-11 (the work may have started on the PDP-7), they had no delusions of hardware independence. Virtually every "unique" feature of the language was a reflection of the PDP machine language and architecture (see `i++` and `--i`). One of these was the realization that functions and subroutines could be (and always were) implemented identically on the PDP, except that for subroutines the caller simply ignored the return value (in R0 [, R1]).
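That identification is easy to see in C itself: one and the same definition serves as a "function" or a "subroutine", depending only on whether the caller uses the returned value (a minimal sketch):

```c
#include <stdio.h>

/* A single definition that computes and returns a value
   (left in R0 on the PDP-11). */
int increment(int n) {
    return n + 1;
}

int main(void) {
    int x = increment(41);  /* called as a function: result consumed */
    increment(x);           /* called as a subroutine: result ignored */
    printf("%d\n", x);      /* prints 42 */
    return 0;
}
```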

Thus was born the `void` function and, once the C language had taken over the whole world of programming, the misperception that this HW/OS implementation artifact (though true on almost every subsequent platform) was the same thing as the language semantics.

RBarryYoung
what a fantastic answer!
Colin Pickard