Generally, I have a headache because something is wrong with my reasoning:

  1. For one set of arguments, a referentially transparent function will always return one set of output values.

  2. That means that such a function could be represented as a truth table (a table where one set of output values is specified for each set of arguments).

  3. That makes the logic behind such functions combinational (as opposed to sequential).

  4. That means that with a purely functional language (one that has only referentially transparent functions) it is possible to describe only combinational logic.

The last statement is derived from this reasoning, but it's obviously false; that means there is an error in the reasoning. [Question: where is the error in this reasoning?]

UPD2: You guys are saying lots of interesting stuff, but not answering my question. I have defined it more explicitly now. Sorry for messing up the question definition!

+2  A: 

As far as I understand it, referential transparency just means: A given function will always yield the same result when invoked with the same arguments. So, the mathematical functions you learned about in school are referentially transparent.

A language you could check out in order to learn how things are done purely functionally would be Haskell. There are ways to get "updateable storage possibilities", like the Reader monad and the State monad, for example. If you're interested in purely functional data structures, Okasaki might be a good read.

And yes, you're right: order of evaluation in a purely functional language like Haskell does not matter the way it does in non-functional languages, because if there are no side effects, there is no reason to do something before/after something else -- unless the input of one depends on the output of the other, or means like monads come into play.
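To make that dependency point concrete, here is a minimal sketch (the names example, parsed, doubled and unused are made up for illustration):

example :: String -> Int
example s = doubled
  where
    parsed  = read s :: Int           -- forced, because doubled demands it
    doubled = parsed * 2              -- the data dependency fixes the order
    unused  = error "never evaluated" -- no demand, so laziness never touches it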

I don't really know about the truth-table question.

danlei
Note to the reader: there really isn't anything mystical about a "Monad". Haskell doesn't allow C++-style overloading; it uses type classes. "Monad" is just a type class that many types subscribe to so they get to use the pretty `>>=` and `return` functions. When you see `IO Int`, just think of an action that, if it were executed, yields an Int. You use the `>>=` (bind) operator to build compound actions rather than writing instructions that are executed step by step. The `>>=` operator is used for other types in funky and interesting ways. Don't be afraid of the 'M' word.
Joey Adams
Yes, and basically you could, for example, force order of evaluation just by chaining function calls, without even touching monads.
danlei
@joey: thanks, monads are 10% less scary now :-)
eruciform
@danlei: Chaining function calls in and of itself shouldn't force order of execution in a lazy language, at least not in general. The functions themselves would have to be written that way. IO's bind operator is such a function, which is why it can be used for sequencing actions.
Chuck
@Chuck: You're right, thanks for the clarification. What I wanted to say was that if we (resp. the toplevel) evaluate something, everything it depends on has to be evaluated. If we imagine a chain of dependencies, passing some kind of "world state" around, we basically have something like the IO monad (OK, it's not quite that easy; this version would allow time travelling), which can be imagined as implicitly passing the world around. My last comment was indeed too general and also not well expressed. Hopefully I got it right this time. :)
danlei
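To make the "passing the world around" picture concrete, here is a minimal sketch (a teaching model only, not how GHC actually implements IO; World, getLineW, putStrLnW and echo are made-up names):

data World = World  -- stands in for "the state of the outside world"

getLineW :: World -> (String, World)
getLineW w = ("<whatever was read from the world>", w)  -- pretend input

putStrLnW :: String -> World -> ((), World)
putStrLnW _ w = ((), w)                                 -- pretend output

-- Each step consumes the world produced by the previous one, so the
-- data dependency alone forces the two effects into sequence.
echo :: World -> ((), World)
echo w0 =
    let (line, w1) = getLineW w0
    in  putStrLnW line w1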
+2  A: 

Edit: Although I apparently missed the bullseye on the actual question, I think my answer is pretty good, so I'm keeping it :-) (see below).

I guess a more concise way to phrase the question might be: can a purely functional language compute anything an imperative one can?

First of all, suppose you took an imperative language like C and made it so you can't alter variables after defining them. E.g.:

int i;

for (i = 0;  // okay, that's one assignment
     i < 10; // just looking, that's all
     i++)    // BUZZZ!  Sorry, can't do that!

Well, there goes your for loop. Do we get to keep our while loop?

while (i < 10)

Sure, but it's not very useful. i can't change, so it's either going to run forever or not run at all.

How about recursion? Yes, you get to keep recursion, and it's still plenty useful:

int sum(int *items, unsigned int count)
{
    if (count) {
        // count the first item and sum the rest
        return *items + sum(items + 1, count - 1);
    } else {
        // no items
        return 0;
    }
}

Now, with functions, we don't alter state, but variables can, well, vary. Once a variable passes into our function, it's locked in. However, we can call the function again (recursion), and it's like getting a brand new set of variables (the old ones stay the same). Although there are multiple instances of items and count, sum((int[]){1,2,3}, 3) will always evaluate to 6, so you can replace that expression with 6 if you like.

Can we still do anything we want? I'm not 100% sure, but I think the answer is "yes". You certainly can if you have closures, though.
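As one illustration of that claim, even a while loop can be written as a pure higher-order function. A sketch in Haskell (whileLoop and sumTo10 are made-up names, not standard functions):

whileLoop :: (a -> Bool) -> (a -> a) -> a -> a
whileLoop cond step x
    | cond x    = whileLoop cond step (step x)  -- "loop body": recurse with the new value
    | otherwise = x                             -- condition false: return the final value

-- Summing 0..9 without mutating anything: the "loop variables" (i, total)
-- are just a pair that is rebuilt on every iteration.
sumTo10 :: Int
sumTo10 = snd (whileLoop (\(i, _)     -> i < 10)
                         (\(i, total) -> (i + 1, total + i))
                         (0, 0))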


You have it right. The idea is, once a variable is defined, it can't be redefined. A referentially transparent expression, given the same variables, always yields the same result value.

I recommend looking into Haskell, a purely functional language. Haskell doesn't have an "assignment" operator, strictly speaking. For instance:

my_sum numbers = ??? where
    i     = 0
    total = 0

Here, you can't write a "for loop" that increments i and total as it goes along. All is not lost, though. Just use recursion to keep getting new is and totals:

my_sum numbers = f 0 0 where
    f i total =
        if i < length numbers
            then f i' total'
            else total
        where
            i' = i+1
            total' = total + (numbers !! i)

(Note that this is a stupid way to sum a list in Haskell, but it demonstrates a method of coping with single assignment.)
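For comparison, a sketch of how you would more idiomatically let the standard library do the recursion (foldl and sum are ordinary Prelude functions):

my_sum'  numbers = foldl (+) 0 numbers  -- the "loop" is a left fold over the list
my_sum'' numbers = sum numbers          -- or just the built-in sum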

Now, consider this highly imperative-looking code:

main = do
    a <- readLn
    b <- readLn
    print (a + b)

It's actually syntactic sugar for:

main =
    readLn >>= (\a ->
    readLn >>= (\b ->
    print (a + b)))

The idea is, instead of main being a function consisting of a list of statements, main is an IO action that Haskell executes, and actions are defined and chained together with bind operations. Also, an action that does nothing, yielding an arbitrary value, can be defined with the return function.

Note that bind and return aren't specific to actions. They can be used with any type that calls itself a Monad to do all sorts of funky things.
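For example, here is a small sketch with Maybe, where >>= and return model computations that can fail (safeDiv and calc are made-up names for illustration):

safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing          -- division by zero: fail
safeDiv x y = Just (x `div` y)

calc :: Int -> Int -> Int -> Maybe Int
calc a b c =
    safeDiv a b >>= (\q ->     -- if this is Nothing, the rest is skipped
    safeDiv q c >>= (\r ->
    return (r + 1)))           -- return wraps a plain value: Just (r + 1)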

To clarify, consider readLn. readLn is an action that, if executed, would read a line from standard input and yield its parsed value. To do something with that value, we can't store it in a variable because that would violate referential transparency:

a = readLn

If this were allowed, a's value would depend on the world and would be different every time we called readLn, meaning readLn wouldn't be referentially transparent.

Instead, we bind the readLn action to a function that deals with the action, yielding a new action, like so:

readLn >>= (\x -> print (x + 1))

The result of this expression is an action value. If Haskell got off the couch and performed this action, it would read an integer, increment it, and print it. By binding the result of an action to a function that does something with the result, we get to keep referential transparency while playing around in the world of state.

Joey Adams
I literally LOLed at "If Haskell got off the couch…" Darn lazy languages these days.
Chuck
I read your answer about 10 times, but still can't get it: where is the change of state? Which function should have had state? I still see only combinational logic.
Halst
Maybe you could give an example with imperative syntax? If you avoid variable assignment (except for function return values) and keep functions referentially transparent, shouldn't that do the same trick?
Halst
Do I understand right that without recursion there is no way to have state in a purely functional program?
Halst
A: 

The error in your reasoning is the following:
"that means that such a function could be represented as a truth table".

You conclude that from the referential transparency property of a functional language. So far the conclusion sounds plausible, but you overlook that a function is able to accept collections as input and process them, in contrast to the fixed inputs of a logic gate.

Therefore a function does not equal a logic gate, but rather a construction plan for such a logic gate, depending on the actual input (determined at runtime)!

To comment on your comment: functional languages can, although stateless, implement a state machine by constructing the states from scratch each time they are accessed.
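A rough sketch of that idea in Haskell (the types Turnstile and Event and the functions step and run are made up for illustration): no state is ever mutated; each step builds the next state from scratch, and a whole run is just a fold over the list of events.

data Turnstile = Locked | Unlocked deriving Show
data Event     = Coin   | Push     deriving Show

step :: Turnstile -> Event -> Turnstile
step Locked   Coin = Unlocked
step Unlocked Push = Locked
step s        _    = s          -- any other event leaves the state unchanged

run :: Turnstile -> [Event] -> Turnstile
run = foldl step                -- run Locked [Coin, Push, Push] == Locked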

Dave
+1  A: 

Here's my stab at answering the question:

Any system, large or small, can be described as a combinatorial function.

There's nothing wrong with the reasoning that pure functions can only deal with combinatorial logic -- it's true; it's just that functional languages hide that from you to some extent.

You could even describe, say, the workings of a game engine as a truth table or a combinatorial function.

You might have a deterministic function that takes in "the current state of the entire game" -- the RAM occupied by the game engine -- plus the keyboard input, and returns "the state of the game one frame later". The return value would be determined by the combinations of the bits in the input.

Of course, in any meaningful and sane function, the input is parsed down to blocks of integers, decimals and booleans, but the combinations of the bits in those values still determine the output of your function.
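A sketch of that picture in Haskell (GameState, Input and stepFrame are made-up names standing in for "the RAM of the engine" and "the keyboard"):

data Input     = Input     { keyLeft :: Bool, keyRight :: Bool }
data GameState = GameState { playerX :: Int,  score    :: Int  }

-- One frame: a pure function from (state, input) to the next state.
-- Same bits in, same bits out -- conceptually one row of a huge truth table.
stepFrame :: GameState -> Input -> GameState
stepFrame s i = s { playerX = playerX s + dx }
  where
    dx | keyLeft  i = -1
       | keyRight i =  1
       | otherwise  =  0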

Keep in mind also that basic digital logic can be described in truth tables. The only reason that's not done for anything more than, say, arithmetic on 4-bit integers is that the size of the truth table grows exponentially.
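A tiny sketch of that point (truthTable is a made-up name): any pure two-input Boolean function can be tabulated by enumerating its inputs, and for n Boolean inputs the table has 2^n rows, which is exactly why nobody writes such tables for anything big.

truthTable :: (Bool -> Bool -> Bool) -> [((Bool, Bool), Bool)]
truthTable f = [ ((a, b), f a b) | a <- [False, True], b <- [False, True] ]

-- truthTable (&&) ==
--   [((False,False),False), ((False,True),False), ((True,False),False), ((True,True),True)]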

Rei Miyasaka
At its core, your answer says the same things that I tried to explain in mine.
Dave
+5  A: 

Question: where is the error in this reasoning?

A referentially transparent function might require an infinite truth table to represent its behavior (think of a function over unbounded integers or lists of arbitrary length). You will be hard pressed to design an infinite circuit in combinational logic.

Another error: the behavior of sequential logic can be represented purely functionally as a function from states to states. The fact that in the implementation these states occur sequentially in time does not prevent one from defining a purely referentially transparent function which describes how state evolves over time.
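As a sketch of the second point (dff and simulate are made-up names; scanl is a standard Prelude function): a D flip-flop, the basic sequential element, is just a pure state-transition function, and its behaviour over time is an ordinary pure function from an input list to a list of successive states.

dff :: Bool -> Bool -> Bool
dff _state d = d                 -- next state = the input sampled at this clock tick

simulate :: Bool -> [Bool] -> [Bool]
simulate initial inputs = scanl dff initial inputs
-- simulate False [True, True, False] == [False, True, True, False]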

Norman Ramsey
+1 That statement goes along with my answer
Dave