views: 1751 | answers: 10

Every so often when programmers are bitching about null errors/exceptions someone asks what we do without null.

I myself have some basic idea of the coolness of option types but I don't have the knowledge or language skills to best express it. It would be useful if someone could point to or write a GREAT explanation of

  • The undesirability of having references/pointers be nullable by default
  • How option types work including strategies to ease checking null cases such as
    • pattern matching and
    • monadic comprehensions
  • Alternative solutions such as message-eating nil
  • (other aspects I missed)

written in a way approachable to the average programmer that we could point that person towards.

+10  A: 

The undesirability of having references/pointers be nullable by default.

I don't think this is the main issue with nulls, the main issue with nulls is that they can mean two things:

  1. The reference/pointer is uninitialized: the problem here is the same as mutability in general. For one, it makes it more difficult to analyze your code.
  2. The variable being null actually means something: this is the case which Option types actually formalize.

Languages which support Option types typically also forbid or discourage the use of uninitialized variables as well.

How option types work including strategies to ease checking null cases such as pattern matching.

In order to be effective, Option types need to be supported directly in the language. Otherwise it takes a lot of boilerplate code to simulate them. Pattern matching and type inference are two key language features making Option types easy to work with. For example:

In F#:

//first we create the option list, and then filter out all None Option types and 
//map all Some Option types to their values.  See how type-inference shines.
let optionList = [Some(1); Some(2); None; Some(3); None]
optionList |> List.choose id //evaluates to [1;2;3]

//here is a simple pattern-matching example
//which prints "1;2;None;3;None;".
//notice how value is extracted from op during the match
optionList |> List.iter (fun op -> match op with
                                   | Some(value) -> printf "%A;" value
                                   | None -> printf "None;")

However, in a language like Java without direct support for Option types, we'd have something like:

//here we perform the same filter/map operation as in the F# example.
List<Option<Integer>> optionList = Arrays.asList(
    new Some<Integer>(1), new Some<Integer>(2), new None<Integer>(),
    new Some<Integer>(3), new None<Integer>());
List<Integer> filteredList = new ArrayList<Integer>();
for(Option<Integer> op : optionList)
    if(op instanceof Some)
        filteredList.add(((Some<Integer>)op).getValue());
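For what it's worth, Java 8 later added `java.util.Optional` to the standard library, which makes the same filter/map operation far less painful. A sketch (note this is a library type, not language-enforced non-nullability):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;
import java.util.stream.Collectors;

public class OptionalFilterMap {
    // Same filter/map operation as the F# example above, using java.util.Optional.
    static List<Integer> presentValues(List<Optional<Integer>> options) {
        return options.stream()
                      .filter(Optional::isPresent)   // drop the "None" entries
                      .map(Optional::get)            // unwrap the "Some" entries
                      .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Optional<Integer>> optionList = Arrays.asList(
            Optional.of(1), Optional.of(2), Optional.empty(),
            Optional.of(3), Optional.empty());
        System.out.println(presentValues(optionList)); // prints [1, 2, 3]
    }
}
```

Even so, an `Optional<Integer>` reference can itself still be null, which is exactly the default-nullability problem the question asks about.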

Alternative solution such as message eating nil

Objective-C's "message eating nil" is not so much a solution as an attempt to lighten the headache of null checking. Basically, instead of throwing a runtime exception when trying to invoke a method on a null object, the expression instead evaluates to null itself. Suspending disbelief, it's as if each instance method begins with if (this == null) return null;. But then there is information loss: you don't know whether the method returned null because it is a valid return value, or because the object is actually null. It's a lot like exception swallowing, and doesn't make any progress addressing the issues with null outlined before.
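To make the information loss concrete, here is a rough Java simulation of message-eating semantics (the `eat` helper is invented purely for illustration; Objective-C does this at the language level):

```java
import java.util.function.Function;

public class MessageEatingNil {
    // "Message eating": sending any message to null just yields null
    // instead of throwing, as if every method began with
    // `if (this == null) return null;`.
    static <T, R> R eat(T receiver, Function<T, R> message) {
        return receiver == null ? null : message.apply(receiver);
    }

    public static void main(String[] args) {
        String s = null;
        // Instead of a NullPointerException, the whole expression collapses to null...
        Integer len = eat(s, String::length);
        System.out.println(len); // prints null

        // ...but a method that legitimately returns null is now
        // indistinguishable from a null receiver. That's the information loss.
        Integer len2 = eat("", String::length);
        System.out.println(len2); // prints 0
    }
}
```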

Stephen Swensen
This is a pet peeve but c# is hardly a c-like language.
Roman A. Taycher
I was going for Java here, since C# would probably have a nicer solution... but I appreciate your peeve, what people really mean is "a language with c-inspired syntax". I went ahead and replaced the "c-like" statement.
Stephen Swensen
With linq, right. I was thinking of c# and didn't notice that.
Roman A. Taycher
Yes with c inspired syntax mostly, but I think I have also heard of imperative programming languages like python/ruby with very little in the way of c like syntax referred to as c-like by functional programmers.
Roman A. Taycher
+6  A: 

Assembly brought us addresses also known as untyped pointers. C mapped them directly as typed pointers but introduced Algol's null as a unique pointer value, compatible with all typed pointers. The big issue with null in C is that since every pointer can be null, one never can use a pointer safely without a manual check.

In higher-level languages, having null is awkward since it really conveys two distinct notions:

  • Telling that something is undefined.
  • Telling that something is optional.

Having undefined variables is pretty much useless, and leads to undefined behavior whenever they occur. I suppose everybody will agree that having things undefined should be avoided at all costs.

The second case is optionality and is best provided explicitly, for instance with an option type.


Let's say we're in a transport company and we need to create an application to help create a schedule for our drivers. For each driver, we store a few pieces of information, such as the driving licences they have and the phone number to call in case of emergency.

In C we could have:

struct PhoneNumber { ... };
struct MotorbikeLicence { ... };
struct CarLicence { ... };
struct TruckLicence { ... };

struct Driver {
  char name[32]; /* Null terminated */
  struct PhoneNumber * emergency_phone_number;
  struct MotorbikeLicence * motorbike_licence;
  struct CarLicence * car_licence;
  struct TruckLicence * truck_licence;
};

As you can see, in any processing over our list of drivers we'll have to check for null pointers. The compiler won't help you; the safety of the program rests on your shoulders.

In OCaml, the same code would look like this:

type phone_number = { ... }
type motorbike_licence = { ... }
type car_licence = { ... }
type truck_licence = { ... }

type driver = {
  name: string;
  emergency_phone_number: phone_number option;
  motorbike_licence: motorbike_licence option;
  car_licence: car_licence option;
  truck_licence: truck_licence option;
}

Let's now say that we want to print the names of all the drivers along with their truck licence numbers.

In C:

#include <stdio.h>

void print_driver_with_truck_licence_number(struct Driver * driver) {
  /* Check may be redundant but better be safe than sorry */
  if (driver != NULL) {
    printf("driver %s has ", driver->name);
    if (driver->truck_licence != NULL) {
      printf("truck licence %04d-%04d-%08d\n",
        driver->truck_licence->area_code,
        driver->truck_licence->year,
        driver->truck_licence->num_in_year);
    } else {
      printf("no truck licence\n");
    }
  }
}

void print_drivers_with_truck_licence_numbers(struct Driver ** drivers, int nb) {
  if (drivers != NULL && nb >= 0) {
    int i;
    for (i = 0; i < nb; ++i) {
      struct Driver * driver = drivers[i];
      if (driver) {
        print_driver_with_truck_licence_number(driver);
      } else {
        /* Huh ? We got a null inside the array, meaning it probably got
           corrupt somehow, what do we do ? Ignore ? Assert ? */
      }
    }
  } else {
    /* Caller provided us with erroneous input, what do we do ?
       Ignore ? Assert ? */
  }
}

In OCaml that would be:

open Printf

(* Here we are guaranteed to have a driver instance *)
let print_driver_with_truck_licence_number driver =
  printf "driver %s has " driver.name;
  match driver.truck_licence with
    | None ->
        printf "no truck licence\n"
    | Some licence ->
        (* Here we are guaranteed to have a licence *)
        printf "truck licence %04d-%04d-%08d\n"
          licence.area_code
          licence.year
          licence.num_in_year

(* Here we are guaranteed to have a valid list of drivers *)
let print_drivers_with_truck_licence_numbers drivers =
  List.iter print_driver_with_truck_licence_number drivers

As you can see in this trivial example, there is nothing complicated in the safe version:

  • It's terser.
  • You get much better guarantees and no null check is required at all.
  • The compiler ensured that you correctly dealt with the option

Whereas in C, you could just have forgotten a null check and boom...

Note: these code samples were not compiled, but I hope you got the idea.

bltxd
I've never tried it but http://en.wikipedia.org/wiki/Cyclone_%28programming_language%29 claims to allow non-null pointers for c.
Roman A. Taycher
I disagree with your statement that nobody is interested in the first case. Many people, especially those in the functional language communities, are extremely interested in this and either discourage or completely forbid the use of uninitialized variables.
Stephen Swensen
I believe `NULL` as in "reference that may not point to anything" was invented for some Algol language (Wikipedia agrees, see http://en.wikipedia.org/wiki/Null_pointer#Null_pointer). But of course it's likely that assembly programmers initialized their pointers to an invalid address (read: Null = 0).
delnan
@Stephen: We probably meant the same thing. To me they discourage or forbid the use of uninitialized things precisely because there is no point discussing undefined things as we can't do anything sane or useful with them. It would have no interest whatsoever.
bltxd
@bltxd: cool, I suspected I was not quite understanding what you were trying to convey.
Stephen Swensen
-1. NULL is not "directly inherited from assembly"; there's generally nothing special about virtual address 0, and often nothing special about physical address 0. IIRC, it's even traditional to load your program to address 0, to the extent that many learn-C-in-21-days books specifically said that your program was loaded to address 0.
tc.
as @tc. says, null has nothing to do with assembly. In assembly, types are generally *not* nullable. A value loaded into a general-purpose register might be zero or it might be some non-zero integer. But it can never be null. Even if you load a memory address into a register, on most common architectures, there is no separate representation of the "null pointer". That's a concept introduced in higher-level languages, like C.
jalf
@delnan - Null was added in Algol W. Tony Hoare, who introduced it there, calls it his "billion dollar mistake"
Dave Griffith
+74  A: 

I think the succinct summary of why null is undesirable is that meaningless states should not be representable.

Suppose I'm modeling a door. It can be in one of three states: open, shut but unlocked, and shut and locked. Now I could model it along the lines of

class Door
    private bool isShut
    private bool isLocked

and it is clear how to map my three states into these two boolean variables. But this leaves a fourth, undesired state available: isShut==false && isLocked==true. Because the types I have selected as my representation admit this state, I must expend mental effort to ensure that the class never gets into this state (perhaps by explicitly coding an invariant). In contrast, if I were using a language with algebraic data types or checked enumerations that lets me define

type DoorState =
    | Open | ShutAndUnlocked | ShutAndLocked

then I could define

class Door
    private DoorState state

and there are no more worries. The type system will ensure that there are only three possible states for an instance of class Door to be in. This is what type systems are good at - explicitly ruling out a whole class of errors at compile-time.
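For readers in C-inspired languages, a plain Java enum gives the same guarantee; a minimal sketch:

```java
public class DoorDemo {
    // Exactly three representable states: the bogus "open but locked"
    // combination simply cannot be expressed.
    enum DoorState { OPEN, SHUT_AND_UNLOCKED, SHUT_AND_LOCKED }

    static class Door {
        // Caveat: in Java this field can itself still be null -- the very
        // "extra state in every reference type" problem described below.
        private DoorState state = DoorState.SHUT_AND_UNLOCKED;

        boolean isLocked() { return state == DoorState.SHUT_AND_LOCKED; }
        void lock()   { state = DoorState.SHUT_AND_LOCKED; }
        void unlock() { state = DoorState.SHUT_AND_UNLOCKED; }
    }

    public static void main(String[] args) {
        Door d = new Door();
        d.lock();
        System.out.println(d.isLocked()); // prints true
    }
}
```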

The problem with null is that every reference type gets this extra state in its space that is typically undesired. A string variable could be any sequence of characters, or it could be this crazy extra null value that doesn't map into my problem domain. A Triangle object has three Points, which themselves have X and Y values, but unfortunately the Points or the Triangle itself might be this crazy null value that is meaningless to the graphing domain I'm working in. Etc.

When you do intend to model a possibly-non-existent value, then you should opt into it explicitly. If the way I intend to model people is that every Person has a FirstName and a LastName, but only some people have MiddleNames, then I would like to say something like

class Person
    private string FirstName
    private Option<string> MiddleName
    private string LastName

where string here is assumed to be a non-nullable type. Then there are no tricky invariants to establish and no unexpected NullReferenceExceptions when trying to compute the length of someone's name. The type system ensures that any code dealing with the MiddleName accounts for the possibility of it being None, whereas any code dealing with the FirstName can safely assume there is a value there.

So for example, using the type above, we could author this silly function:

let TotalNumCharsInPersonsName(p:Person) =
    let middleLen = match p.MiddleName with
                    | None -> 0
                    | Some(s) -> s.Length
    p.FirstName.Length + middleLen + p.LastName.Length

with no worries. In contrast, in a language with nullable references for types like string, then assuming

class Person
    private string FirstName
    private string MiddleName
    private string LastName

you end up authoring stuff like

let TotalNumCharsInPersonsName(p:Person) =
    p.FirstName.Length + p.MiddleName.Length + p.LastName.Length

which blows up if the incoming Person object does not have the invariant of everything being non-null, or

let TotalNumCharsInPersonsName(p:Person) =
    (if p.FirstName=null then 0 else p.FirstName.Length)
    + (if p.MiddleName=null then 0 else p.MiddleName.Length)
    + (if p.LastName=null then 0 else p.LastName.Length)

or maybe

let TotalNumCharsInPersonsName(p:Person) =
    p.FirstName.Length
    + (if p.MiddleName=null then 0 else p.MiddleName.Length)
    + p.LastName.Length

assuming that p ensures first/last are there but middle can be null, or maybe you do checks that throw different types of exceptions, or who knows what. All these crazy implementation choices and things to think about crop up because there's this stupid representable-value that you don't want or need.

Null typically adds needless complexity. Complexity is the enemy of all software, and you should strive to reduce complexity whenever reasonable.

(Note well that there is more complexity to even these simple examples. Even if a FirstName cannot be null, a string can represent "" (the empty string), which is probably also not a person name that we intend to model. As such, even with non-nullable strings, it still might be the case that we are "representing meaningless values". Again, you could choose to battle this either via invariants and conditional code at runtime, or by using the type system (e.g. to have a NonEmptyString type). The latter is perhaps ill-advised ("good" types are often "closed" over a set of common operations, and e.g. NonEmptyString is not closed over .SubString(0,0)), but it demonstrates more points in the design space. At the end of the day, in any given type system, there is some complexity it will be very good at getting rid of, and other complexity that is just intrinsically harder to get rid of. The key for this topic is that in nearly every type system, the change from "nullable references by default" to "non-nullable references by default" is nearly always a simple change that makes the type system a great deal better at battling complexity and ruling out certain types of errors and meaningless states. So it is pretty crazy that so many languages keep repeating this error again and again.)
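As a sketch of pushing such an invariant into the type, a hypothetical `NonEmptyString` in Java might validate once at construction, so that every instance that exists satisfies the invariant (illustrative only, and subject to the "closed over operations" objection above):

```java
public final class NonEmptyString {
    private final String value;

    private NonEmptyString(String value) { this.value = value; }

    // The only way in: construction fails fast, so downstream code
    // never needs to re-check the invariant.
    public static NonEmptyString of(String s) {
        if (s == null || s.isEmpty())
            throw new IllegalArgumentException("expected a non-empty string");
        return new NonEmptyString(s);
    }

    public int length() { return value.length(); }

    @Override public String toString() { return value; }
}
```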

Brian
+1, but note that not all people have last names ("given name" and "family name" seem to be more accurate anyway); I've heard of some people that have only one name (this means they don't fit into most data models). Arabic names can be especially complicated. Not all credit cards have 16 digits either.
tc.
Re: names - Indeed. And maybe you do care about modeling a door that is hanging open but with the lock deadbolt sticking out, preventing the door from shutting. There is lots of complexity in the world. The key is not to add _more_ complexity when implementing the mapping between "world states" and "program states" in your software.
Brian
By the way, for good reading on the topic of representation in software, I suggest the out-of-print "Abstraction and Specification in Program Development" (by Liskov, using the CLU language).
Brian
@Brian, thanks for fulfilling my promise to OP that someone from the functional programming / F# community would come through with a great answer!
Stephen Swensen
My point is that you're making assumptions about names that you should never have made. What's wrong with `String name, preferredName`? If you have multiple middle names, do you concatenate them with spaces? Why do spaces in the middle name count as a "character", while the spaces between names don't? What about Unicode combining characters?
tc.
What, you've never locked doors open?
Joshua
I don't understand why folks get worked up over the semantics of a particular domain. Brian represented the flaws with null in a concise and simple manner, yes he simplified the problem domain in his example by saying everyone has first and last names. The question was answered to a 'T', Brian - if you're ever in boston I owe you a beer for all the posting you do here!
akaphenom
@akaphenom: thanks, but note that not all people drink beer (I am a non-drinker). But I appreciate that you are just using a simplified model of the world in order to communicate gratitude, so I won't quibble more about the flawed assumptions of your world-model. :P (So much complexity in the real world! :) )
Brian
Strangely, there are 3-state-doors in this world! They are used in some hotels as toilet-doors. A push-button acts as a key from the inside, that locks the door from the outside. It is automatically unlocked, as soon as the latch bolt moves.
comonad
+24  A: 

The nice thing about option types isn't that they're optional. It is that all other types aren't.

Sometimes, we need to be able to represent a kind of "null" state. Sometimes we have to represent a "no value" option as well as the other possible values a variable may take. So a language that flat out disallows this is going to be a bit crippled.

But often, we don't need it, and allowing such a "null" state only leads to ambiguity and confusion: every time I access a reference type variable in .NET, I have to consider that it might be null.

Often, it will never actually be null, because the programmer structures the code so that it can never happen. But the compiler can't verify that, and every single time you see it, you have to ask yourself "can this be null? Do I need to check for null here?"

Ideally, in the many cases where null doesn't make sense, it shouldn't be allowed.

That's tricky to achieve in .NET, where nearly everything can be null. You have to rely on the author of the code you're calling to be 100% disciplined and consistent and have clearly documented what can and cannot be null, or you have to be paranoid and check everything.

However, if types aren't nullable by default, then you don't need to check whether or not they're null. You know they can never be null, because the compiler/type checker enforces that for you.

And then we just need a back door for the rare cases where we do need to handle a null state. Then an "option" type can be used. Then we allow null in the cases where we've made a conscious decision that we need to be able to represent the "no value" case, and in every other case, we know that the value will never be null.

As others have mentioned, in C# or Java for example, null can mean one of two things:

  1. the variable is uninitialized. This should, ideally, never happen. A variable shouldn't exist unless it is initialized.
  2. the variable contains some "optional" data: it needs to be able to represent the case where there is no data. This is sometimes necessary. Perhaps you're trying to find an object in a list, and you don't know in advance whether or not it's there. Then we need to be able to represent that "no object was found".

The second meaning has to be preserved, but the first one should be eliminated entirely. And even the second meaning should not be the default. It's something we can opt in to if and when we need it. But when we don't need something to be optional, we want the type checker to guarantee that it will never be null.
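The find-in-a-list case is exactly where an explicit option type in the signature pays off. A sketch using Java 8's later `java.util.Optional` (a library convention rather than compiler-enforced non-nullability):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;

public class FindDemo {
    // The return type itself says "there may be no result": callers are
    // pushed to confront the empty case instead of risking a stray null.
    static Optional<String> findStartingWith(List<String> names, String prefix) {
        return names.stream()
                    .filter(n -> n.startsWith(prefix))
                    .findFirst();
    }

    public static void main(String[] args) {
        List<String> names = Arrays.asList("alice", "bob");
        System.out.println(findStartingWith(names, "a")); // Optional[alice]
        System.out.println(findStartingWith(names, "z")); // Optional.empty
    }
}
```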

jalf
+10  A: 

Since people seem to be missing it: null is ambiguous.

Alice's date-of-birth is null. What does it mean?

Bob's date-of-death is null. What does that mean?

A "reasonable" interpretation might be that Alice's date-of-birth exists but is unknown, whereas Bob's date-of-death does not exist (Bob is still alive). But why did we get to different answers?


Another problem: null is an edge case.

  • Is null = null?
  • Is nan = nan?
  • Is inf = inf?
  • Is +0 = -0?
  • Is +0/0 = -0/0?

The answers are usually "yes", "no", "yes", "yes", "no" respectively. Crazy "mathematicians" call NaN "nullity" and say it compares equal to itself. SQL treats nulls as not equal to anything (so they behave like NaNs). One wonders what happens when you try to store ±∞, ±0, and NaNs into the same database column (there are 2^53 NaNs, half of which are "negative").
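These edge cases are easy to verify directly; for example, in Java (IEEE 754 doubles; note that `Double.equals` notoriously disagrees with `==` on two of them):

```java
public class EdgeCases {
    public static void main(String[] args) {
        double nan = 0.0 / 0.0;
        System.out.println(nan == nan);   // false: NaN is not equal to itself
        System.out.println(0.0 == -0.0);  // true: signed zeros compare equal...
        System.out.println(Double.compare(0.0, -0.0)); // positive: ...yet compare() distinguishes them
        System.out.println(Double.valueOf(nan).equals(Double.valueOf(nan))); // true: equals() says NaN equals NaN!
    }
}
```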

To make matters worse, databases differ in how they treat NULL, and most of them aren't consistent (see NULL Handling in SQLite for an overview). It's pretty horrible.


And now for the obligatory story:

I recently designed a (sqlite3) database table with five columns a NOT NULL, b, id_a, id_b NOT NULL, timestamp. Because it's a generic schema designed to solve a generic problem for fairly arbitrary apps, there are two uniqueness constraints:

UNIQUE(a, b, id_a)
UNIQUE(a, b, id_b)

id_a only exists for compatibility with an existing app design (partly because I haven't come up with a better solution), and is not used in the new app. Because of the way NULL works in SQL, I can insert (1, 2, NULL, 3, t) and (1, 2, NULL, 4, t) and not violate the first uniqueness constraint (because (1, 2, NULL) != (1, 2, NULL)).

This works specifically because of how NULL works in a uniqueness constraint on most databases (presumably so it's easier to model "real-world" situations, e.g. no two people can have the same Social Security Number, but not all people have one).


FWIW, without first invoking undefined behaviour, C++ references cannot "point to" null, and it's not possible to construct a class with uninitialized reference member variables (if an exception is thrown, construction fails).

Sidenote: Occasionally you might want mutually-exclusive pointers (i.e. only one of them can be non-NULL), e.g. in a hypothetical iOS type DialogState = NotShown | ShowingActionSheet UIActionSheet | ShowingAlertView UIAlertView | Dismissed. Instead, I'm forced to do stuff like assert((bool)actionSheet + (bool)alertView == 1).

tc.
A: 

Vector languages can sometimes get away with not having a null.

The empty vector serves as a typed null in this case.
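A rough sketch of the idea in Java: an empty list acts as a typed "no value", and a chain of functions applied to it simply passes it through, with no null checks anywhere (hypothetical pipeline for illustration):

```java
import java.util.Collections;
import java.util.List;
import java.util.stream.Collectors;

public class EmptyVectorAsNull {
    // Each stage maps over the list; an empty list flows through every
    // stage and comes out empty -- the "null" case needs no special code.
    static List<Integer> pipeline(List<String> maybeValue) {
        return maybeValue.stream()
                         .map(String::trim)
                         .map(String::length)
                         .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(pipeline(Collections.singletonList(" hi "))); // [2]
        System.out.println(pipeline(Collections.emptyList()));           // []
    }
}
```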

Joshua
I think I understand what you are talking about but could you list some examples? Especially of applying multiple functions to a possibly null value?
Roman A. Taycher
+1  A: 

Robert Nystrom offers a nice article here:

http://journal.stuffwithstuff.com/2010/08/23/void-null-maybe-and-nothing/

describing his thought process when adding support for absence and failure to his Magpie programming language.

Corbin March
A: 

I've always looked at Null (or nil) as being the absence of a value.

Sometimes you want this, sometimes you don't. It depends on the domain you are working with. If the absence is meaningful (no middle name), then your application can act accordingly. On the other hand, if the null value should not be there (the first name is null), then the developer gets the proverbial 2am phone call.

I've also seen code overloaded and over-complicated with checks for null. To me this means one of two things:
a) a bug higher up in the application tree
b) bad/incomplete design

On the positive side, Null is probably one of the more useful notions for checking if something is absent, and languages without the concept of null will end up over-complicating things when it's time to do data validation. In this case, if a new variable is not initialized, said languages will usually set variables to an empty string, 0, or an empty collection. However, if an empty string, 0, or an empty collection are valid values for your application, then you have a problem.

Sometimes this is circumvented by inventing special/weird values for fields to represent an uninitialized state. But then what happens when the special value is entered by a well-intentioned user? And let's not get into the mess this will make of data validation routines. If the language supported the null concept, all these concerns would vanish.
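A quick Java sketch of that sentinel trap (the `age` field and the -1 convention are hypothetical):

```java
import java.util.OptionalInt;

public class SentinelTrap {
    // Sentinel approach: -1 means "not provided"... until a caller
    // supplies -1 deliberately, and validation can no longer tell.
    static boolean isProvidedSentinel(int age) {
        return age != -1;
    }

    // Option approach: "not provided" has its own representation that
    // no user-entered integer can collide with.
    static boolean isProvidedOptional(OptionalInt age) {
        return age.isPresent();
    }

    public static void main(String[] args) {
        System.out.println(isProvidedSentinel(-1));                 // false -- entered or unset? Can't tell.
        System.out.println(isProvidedOptional(OptionalInt.of(-1))); // true: -1 was genuinely entered
        System.out.println(isProvidedOptional(OptionalInt.empty()));// false: genuinely unset
    }
}
```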

Jon
Hi @Jon, It's a bit hard following you here. I finally realized that by "special/weird" values you probably mean something like Javascript's 'undefined' or IEEE's 'NaN'. But besides that, you don't really address any of the questions the OP asked. And the statement that "Null is probably the most useful notion for checking if something is absent" is almost certainly wrong. Option types are a well-regarded, type-safe alternative to null.
Stephen Swensen
@Stephen - Actually looking back over my message, I think the whole 2nd half should be moved to a yet-to-be-asked question. But I still say null is very useful for checking to see if something is absent.
Jon
+7  A: 

All of the answers so far focus on why null is a bad thing, and how it's kinda handy if a language can guarantee that certain values will never be null.

They then go on to suggest that it would be a pretty neat idea if you enforce non-nullability for all values, which can be done if you add a concept like Option or Maybe to represent types that may not always have a defined value. This is the approach taken by Haskell.

It's all good stuff! But it doesn't preclude the use of explicitly nullable / non-null types to achieve the same effect. Why, then, is Option still a good thing? After all, Scala supports nullable values (it has to, so it can work with Java libraries) but supports Options as well.

Q. So what are the benefits beyond being able to remove nulls from a language entirely?

A. Composition

If you make a naive translation from null-aware code

def fullNameLength(p:Person) = {
  val middleLen =
    if (null == p.middleName)
      0
    else
      p.middleName.length
  p.firstName.length + middleLen + p.lastName.length
}

to option-aware code

def fullNameLength(p:Person) = {
  val middleLen = p.middleName match {
    case Some(x) => x.length
    case _ => 0
  }
  p.firstName.length + middleLen + p.lastName.length
}

there's not much difference! But it's also a terrible way to use Options... This approach is much cleaner:

def fullNameLength(p:Person) = {
  val middleLen = p.middleName map {_.length} getOrElse 0
  p.firstName.length + middleLen + p.lastName.length
}

Or even:

def fullNameLength(p:Person) =
  p.firstName.length +
  p.middleName.map{_.length}.getOrElse(0) +
  p.lastName.length

When you start dealing with List of Options, it gets even better. Imagine that the List people is itself optional:

people flatMap(_ find (_.firstName == "joe")) map (fullNameLength)

How does this work?

//convert an Option[List[Person]] to an Option[S]
//where the function f takes a List[Person] and returns an S
people map f

//find a person named "Joe" in a List[Person].
//returns Some[Person], or None if "Joe" isn't in the list
validPeopleList find (_.firstName == "joe")

//returns None if people is None
//Some(None) if people is valid but doesn't contain Joe
//Some[Some[Person]] if Joe is found
people map (_ find (_.firstName == "joe")) 

//flatten it to return None if people is None or Joe isn't found
//Some[Person] if Joe is found
people flatMap (_ find (_.firstName == "joe")) 

//return Some(length) if the list isn't None and Joe is found
//otherwise return None
people flatMap (_ find (_.firstName == "joe")) map (fullNameLength)

The corresponding code with null checks (or even elvis ?: operators) would be painfully long. The real trick here is the flatMap operation, which allows for the nested comprehension of Options and collections in a way that nullable values can never achieve.
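The same composition carries over to Java 8's `Optional`, whose `flatMap`/`map` play the roles of the Scala operations above (hypothetical minimal `Person` class, for illustration only):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;

public class OptionComposition {
    static class Person {
        final String firstName;
        Person(String firstName) { this.firstName = firstName; }
    }

    // Optional<List<Person>> -> Optional<Person> -> Optional<Integer>,
    // without a single null check: flatMap collapses the nesting.
    static Optional<Integer> joesNameLength(Optional<List<Person>> people) {
        return people
            .flatMap(list -> list.stream()
                                 .filter(p -> p.firstName.equals("joe"))
                                 .findFirst())
            .map(p -> p.firstName.length());
    }

    public static void main(String[] args) {
        List<Person> somePeople = Arrays.asList(new Person("sue"), new Person("joe"));
        System.out.println(joesNameLength(Optional.of(somePeople))); // Optional[3]
        System.out.println(joesNameLength(Optional.empty()));        // Optional.empty
    }
}
```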

Kevin Wright
+1, this is a good point to emphasize. One addendum: over in Haskell-land, `flatMap` would be called `(>>=)`, that is, the "bind" operator for monads. That's right, Haskellers like `flatMap`ping things so much that we put it in our language's logo.
camccann
+1 Hopefully an expression of `Option<T>` would never, ever be null. Sadly, Scala is uhh, still linked to Java :-) (On the other hand, if Scala didn't play nice with Java, who would use it? O.o)
pst
Easy enough to do: 'List(null).headOption'. Note that this means a very different thing than a return value of 'None'
Kevin Wright
I gave you bounty since I really like what you said about composition, that other people didn't seem to mention.
Roman A. Taycher
+1  A: 

Microsoft Research has a fantastic project called

Spec#

which is a C# extension with non-null types and mechanisms for checking that your objects are not null, although IMHO applying the Design by Contract principle may be more appropriate and more helpful for many of the troublesome situations caused by null references.

Jahangir Zinedine