
Both are terms whose type is the intersection of all types (uninhabited). Both can be passed around in code without failing until one attempts to evaluate them. The only difference I can see is that in Java there is a loophole which allows null to be evaluated for exactly one operation, which is reference equality comparison (==), whereas in Haskell undefined can't be evaluated at all without throwing an exception. Is this the only difference?

Edit

What I'm really trying to get at with this question is: why was including null in Java such an apparently poor decision, and how does Haskell escape it? It seems to me that the real problem is that you can do something useful with null, namely check it for nullness. Because you are allowed to do this, it has become standard convention to pass around null values in code and have them indicate "no result" instead of "there is a logical error in this program". Whereas in Haskell, there's no way to check whether a term evaluates to bottom without evaluating it and having the program explode, so it could never be used that way to indicate "no result". Instead, one is forced to use something like Maybe.
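For a concrete sketch of what I mean by being forced into something like Maybe (the names here are invented):

 > lookupUser :: Int -> Maybe String  -- "no result" is visible in the type
 > lookupUser 1 = Just "alice"
 > lookupUser _ = Nothing             -- and callers must pattern match to get at the String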

Sorry if it seems like I'm playing fast and loose with the term "evaluate"... I'm trying to draw an analogy here and having trouble phrasing it precisely. I guess that's a sign that the analogy is imprecise.

+4  A: 

Your description isn't quite correct. You're saying null can't be evaluated. However, since Java is an eager language, that would mean f(null) would throw an NPE no matter what the definition of f is (because method arguments are always evaluated before the method runs).

The only reason that you can pass around undefined in Haskell without getting an exception is that Haskell is lazy and does not evaluate arguments unless needed.
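For example, a quick GHCi sketch:

 > const 42 undefined
 42

const never demands its second argument, so the bottom value is carried along and thrown away without ever being evaluated.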

One further difference between undefined and null is that undefined is a simple value defined in the standard library. If it weren't defined in the standard library, you could define it yourself (by writing myUndefined = error "My Undefined", for example).
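Such a home-made version behaves just like the library one; e.g. in GHCi (the exact exception text varies by GHC version):

 > let myUndefined = error "My Undefined"
 > fst (1, myUndefined)
 1
 > myUndefined
 *** Exception: My Undefined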

In Java null is a keyword. If there were no null keyword, you wouldn't be able to define it (doing the equivalent of the Haskell definition, i.e. Object myNull = throw(new Exception()), wouldn't work because the expression would be evaluated right there).

sepp2k
Java is not completely strict. It behaves lazily, for example, when short-circuiting boolean expressions. I think it would be just as accurate to say that `null` is a lazily evaluated expression in Java, and it's only evaluated when you say `null.something`. Therefore we can compare it somewhat directly to `undefined` in Haskell.
pelotom
@pelotom: It would not be just as accurate to say that. For example, you could not write `if (foo != null)` if null weren't evaluated by that point (and by inference `foo` wouldn't have been evaluated either, since `foo` could be null!).
Chuck
@Chuck that's the loophole I mentioned... it allows evaluating `null` for reference equality, but nothing else
pelotom
@pelotom: Except JLS Section 15.7 does not have any such loophole.
ILMTitan
@ILMTitan all of Java has this loophole. If you could not "check if something was null" in Java, the `null` keyword would be completely useless, and no one would use it as a token to indicate failure. There would be no "billion dollar mistake". People would be forced to invent a Maybe type and document the partiality of their functions in the type.
pelotom
@pelotom: Haskell doesn't document partiality in the type either! Sure, Java doesn't really have much in the way of a static type system, but that's irrelevant to the meaning of `null`.
camccann
@camccann `Maybe` documents partiality, if the function is expected to be partial in normal use cases
pelotom
@pelotom: No matter how hard you wish it to be true, the JLS says otherwise. If you can't accept the Java Language Specification as the specification of the java language, we have no common priors on which to base a rational discussion.
ILMTitan
@pelotom: Right, and that's the difference: In Haskell you're expected to be explicit about possibly returning a non-value, in Java you aren't. It's a matter of culture and syntactic convenience, not the language itself, as such. You could litter Haskell code with `error` if you wanted, nothing would stop you (other than, perhaps, the crushing disapproval of every other Haskell programmer on the planet).
camccann
+18  A: 

What's the difference between undefined in Haskell and null in Java?

Ok, let's back up a little.

"undefined" in Haskell is an example of a "bottom" value (denoted ⊥). Such a value represents any undefined, stuck or partial state in the program.

Many different forms of bottom exist: non-terminating loops, exceptions, pattern match failures -- basically any state in the program that is undefined in some sense. The value undefined :: a is a canonical example of a value that puts the program in an undefined state.
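Each of these forms can be written down directly. A small illustrative sketch (the names are invented):

 > loop = loop                  -- non-termination
 > boom = error "boom"          -- an exception
 > unsafeFromJust (Just x) = x  -- pattern match failure, when applied to Nothing

All three denote bottom; they differ only in what happens operationally when you force them.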

undefined itself isn't particularly special (it's not wired in), and you can implement Haskell's undefined using any bottom-yielding expression. E.g. this is a valid implementation of undefined:

 > undefined = undefined

Or exiting immediately (the old Gofer compiler used this definition):

 > undefined | False = undefined

The primary property of bottom is that if an expression evaluates to bottom, your entire program will evaluate to bottom: the program is in an undefined state.
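For instance, a bottom demanded anywhere inside an expression that is itself demanded takes the whole computation down with it:

 > 1 + undefined
 *** Exception: Prelude.undefined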

Why would you want such a value? Well, in a lazy language, you can often manipulate structures or functions that store bottom values, without the program being itself bottom.

E.g. a list of infinite loops is perfectly cromulent:

 > let xs = [ let f = f in f 
            , let g n = g (n+1) in g 0
            ]
 > :t xs
 xs :: [t]
 > length xs
 2

I just can't do much with the elements of the list:

 > head xs
 ^CInterrupted.

This manipulation of infinite stuff is part of why Haskell's so fun and expressive. One result of laziness is that Haskell pays particularly close attention to bottom values.

However, clearly, the concept of bottom applies equally well to Java, or any (non-total) language. In Java, there are many expressions that yield "bottom" values:

  • dereferencing a null reference (though note, not null itself, which is well-defined);
  • division by zero;
  • out-of-bounds exceptions;
  • an infinite loop, etc.

You just don't have the ability to substitute one bottom for another very easily, and the Java compiler doesn't do a lot to reason about bottom values. However, such values are there.
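For comparison, a couple of those same bottoms in Haskell, where GHCi makes them easy to poke at (exception text varies by version):

 > 1 `div` 0
 *** Exception: divide by zero
 > [1,2,3] !! 10
 *** Exception: Prelude.!!: index too large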

In summary,

  • dereferencing a null value in Java is one specific expression that yields a bottom value in Java;
  • the undefined value in Haskell is a generic bottom-yielding expression that can be used anywhere a bottom value is required in Haskell.

That's how they're similar.

Postscript

As to the question of null itself: why is it considered bad form?

  • Firstly, Java's null is essentially equivalent to adding an implicit Maybe a to every type a in Haskell.
  • Dereferencing null is equivalent to pattern matching for only the Just case: f (Just a) = ... a ...

So when the value passed in is Nothing (in Haskell), or null (in Java), your program reaches an undefined state. This is bad: your program crashes.
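Played out in GHCi, the Just-only match looks like this (output abbreviated):

 > let f (Just a) = a + 1
 > f (Just 41)
 42
 > f Nothing
 *** Exception: Non-exhaustive patterns in function f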

So, by adding null to every type, you've just made it far easier to create bottom values by accident -- the types no longer help you. Your language is no longer helping you prevent that particular kind of error, and that's bad.
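Haskell's version of the same situation keeps the types helpful: the Maybe is explicit, so you have to say what "no result" means, and GHC (with pattern-match warnings enabled) will flag a forgotten case. A sketch:

 > f :: Maybe Int -> Int
 > f (Just a) = a + 1
 > f Nothing  = 0  -- the type forced us to decide this case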

Of course, other bottom values are still there: exceptions (like undefined), or infinite loops. Adding a new possible failure mode to every function (dereferencing null) just makes it easier to write programs that crash.

Don Stewart
Ah, that's a good point; it's not `null` itself that's the bottom value in Java, it's the *dereferencing* of `null`.
pelotom