What is the difference between functions in math and functions in programming?
I think the most important distinction is that functions in math (and functional programming) can't change state, while (imperative) programming functions can.
It depends on the domain (I don't mean the domain of the function, I mean the domain of study) and also possibly the language.
In math, a function maps each input value to exactly one output (the vertical line test, remember). In programming, this might not be strictly the same, depending on where you draw the line between "input" and "function logic."
For instance, let's imagine we have a function rand() that reads atmospheric conditions to arrive at a truly random number. Let's also imagine that a calling function takes one integer parameter as a multiplier of sorts. Is the following a function?
int giveRandAtmosWithMul(int mult)
{
    return mult * rand();
}
In the mathematical sense, it probably is not a function if you consider mult as the only input to the problem, but clearly rand() is offering input as well (even though rand() always has the same entry point in machine code).
As you can see, the differences aren't really objectively definable without some standard protocol that everybody agrees to.
In math, functions don't throw exceptions. :)
A function in computer science is a chunk of code that takes inputs, does something, and possibly returns outputs, but it can do a host of things in between. It can fetch webpages, send emails, play video, whatever.
In math a function is something very specific and nothing else. A function is usually described as a "machine" that takes in inputs and spits out outputs. While computer science functions do take in inputs and spit out outputs, they don't have to obey the strict "the same input always yields the same output" rule that math requires (e.g. bool IsMyApplicationRunningInFullScreen() returns various values with no inputs at all).
Other answers are correct - these are two different things. I'll show, on the contrary, that they are related. I'll denote programming functions by -> and mathematical functions by =>.

Suppose you have a language that supports exceptions. In that case, you can think of a programming function A -> B as a mathematical function A => B + E, where B + E means either something of type B or something of type E. Your language has two keywords to return from a function, "return" and "throw".
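As a rough sketch of what B + E looks like in practice (using Haskell's Either type; safeDiv is a made-up example), a function that can fail carries both possibilities in its result type:

safeDiv :: Int -> Int -> Either String Int
safeDiv _ 0 = Left "division by zero"    -- the "throw" case: produce the E part
safeDiv x y = Right (x `div` y)          -- the "return" case: produce the B part

main :: IO ()
main = do
  print (safeDiv 10 2)    -- Right 5
  print (safeDiv 10 0)    -- Left "division by zero"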
Now, you can compose two functions f: A -> B and g: B -> C by writing g(f(x)). This is a function that takes an A and returns a C. You can do that in many languages, like Java or Python. Under the hood, if f(x) throws an exception, g is not called and the exception is propagated - think about g(1/0). The language takes care of that and you don't need to check if the result of f is an exception. The way to describe that mathematically is: although f: A => B + E and g: B => C + E, the language "lifts" the function g into B + E => C + E. g is now a function that might take an exception, but it will simply propagate it.
In other words, define g': B + E => C + E by

g'(x) = g(x)   if x ∈ B
g'(x) = x      if x ∈ E

Having two functions f: A => B + E and g': B + E => C + E, you can safely compose them in the mathematical sense. And this is what g(f(x)) in programming does, but it has to lift g into g' first.
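Here is a small Haskell sketch of that lifting step; f and g are made-up examples and Either String plays the role of "+ E":

f :: Int -> Either String Int            -- f : A => B + E
f x = if x < 0 then Left "f failed" else Right (x + 1)

g :: Int -> Either String Int            -- g : B => C + E
g x = if x > 100 then Left "g failed" else Right (x * 2)

-- the lifted g' : B + E => C + E
g' :: Either String Int -> Either String Int
g' (Left e)  = Left e                    -- x ∈ E: just propagate the exception
g' (Right b) = g b                       -- x ∈ B: actually apply g

-- the composition "g(f(x))": run f, then the lifted g
h :: Int -> Either String Int
h x = g' (f x)                           -- idiomatic Haskell writes this as  f x >>= g

main :: IO ()
main = do
  print (h 3)      -- Right 8
  print (h (-1))   -- Left "f failed"

The point is that you never write g' by hand in practice; the language (or the >>= operator) does the lifting for you.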
Think about nondeterminism. If you have a function A -> B that is nondeterministic, you can describe it mathematically by saying it is a function A => Set(B), where Set(B) is the set of possible results. For example, if f(1) might give you 1, 2 or 3, then mathematically f(1) = {1,2,3}, although when programming you'll get only one of these numbers each time you ask for f(1). Now you can compose nondeterministic functions A -> B and B -> C by writing g(f(x)). The result will be of type C, but it will be nondeterministic. Mathematically, composing functions A => Set(B) and B => Set(C) gives you A => Set(C). Although g takes only one value of B, you have to lift it to nondeterministic values g': Set(B) => Set(C). For example, g'({1,2}) is the union of the sets g(1) and g(2). So g(f(x)) means you run f, take the set of all possible results, and run g on each of them. There are two layers of nondeterminism, but they are "flattened" into one.
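The same idea in a Haskell sketch, approximating Set(B) with a list of possible results (f and g are made-up examples):

f :: Int -> [Int]
f x = [x, x + 1, x + 2]          -- "f(1) = {1,2,3}"

g :: Int -> [Int]
g x = [x, -x]                    -- g is nondeterministic too

-- the lifted g' : Set(B) => Set(C): run g on every possible result
-- and merge the result sets (here: concatenate the lists)
g' :: [Int] -> [Int]
g' = concatMap g

-- the composition "g(f(x))": two layers of nondeterminism flattened into one
h :: Int -> [Int]
h x = g' (f x)                   -- idiomatic Haskell:  f x >>= g

main :: IO ()
main = print (h 1)               -- [1,-1,2,-2,3,-3]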
Same with global state. If you make every global variable an argument of a function and part of its result, you can think of it as if there are no global variables: every function takes all the global state, and the language has to hand over the state when composing. A function A -> B reading and possibly writing state S is a function (A, S) => (B, S), which can also be written as A => (S => (B, S)). Writing this formally is more complicated, but it's the same pattern.
The same can be done with input/output; I described that in another SO answer.
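To make the state case concrete, here is a minimal Haskell sketch of the (A, S) => (B, S) idea; the counter and the tick function are made up for illustration, and the state is threaded by hand instead of living in a global variable:

type S = Int                         -- the "global" state: a single counter

-- a function () -> Int that reads and writes state S,
-- i.e. () => (S => (Int, S))
tick :: () -> S -> (Int, S)
tick () s = (s, s + 1)               -- return the old counter, store the incremented one

-- composing two stateful calls means handing the state from one to the next
twoTicks :: () -> S -> ((Int, Int), S)
twoTicks () s0 =
  let (a, s1) = tick () s0           -- first call sees the initial state
      (b, s2) = tick () s1           -- second call sees the updated state
  in ((a, b), s2)

main :: IO ()
main = print (twoTicks () 0)         -- ((0,1),2)

The state monad packages exactly this hand-over so you don't have to write it yourself.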
The "magic" that allows to compose "effectful" functions is:
- A way to make the type effectful. For example, turn
B
intoB+E
orSet(B)
. I'll denote effectful version of typeX
asF(X)
. - A way to lift the functions:
B -> F(C)
intoF(B) -> F(C)
. It allows to compose functionsA -> F(B)
andB -> F(C)
. - The keyword
return
must turn an ordinary valueA
intoA+E
, or singletonSet(A)
. So it's type must beX -> F(X)
.
A structure composed of those three is called a monad. Monads allow you to describe those side effects. A monad might also have some specific functions; for example, the exception monad has throw, the nondeterminism monad has fork, the state monad has get/put, the IO monad has read/write, etc.
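In Haskell these three pieces correspond roughly to a type constructor, the bind operator (>>=), and return. A small sketch using the exception monad, where Either String plays the role of F:

-- F(X)      the type constructor, here  Either String
-- lifting   (>>=) :: Either String b -> (b -> Either String c) -> Either String c
-- return    return :: x -> Either String x

example :: Either String Int
example =
  return 6 >>= \x ->                 -- return wraps a plain value: Right 6
    if x > 5
      then Right (x * 2)             -- the "return" path
      else Left "too small"          -- the "throw" path

main :: IO ()
main = print example                 -- Right 12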
The moral is: if you regard effects like randomization, exceptions, nondeterminism, and input/output as part of the result of a function, then every function is referentially transparent, and functions in imperative programming really are mathematical functions, just with very strange result types that also describe those effects. This is the approach taken by pure functional languages like Haskell.