A: 

I see where you are coming from. I actually like the use of NULL.

You make very good points, and if you tried to debug the code you'd most likely catch it almost instantly with a decent debugger. I've also found that most of my NULL pointer errors come from forgetting to check the return values of the string.h functions, where NULL is used as an indicator.

Sid McAwesome
+1  A: 

I like the use of NULL. It's been a while since I've worked in C++, but NULL made it very easy to find and fix my problems with return values and references.

Heather
+2  A: 

I admit that I haven't really read a lot about Spec#, but I had understood that NonNullable was essentially an attribute that you put on a parameter, not necessarily on a variable declaration. Turning your example into something like:

class Class { ... }

void DoSomething(Class c)
{
    if (c == null) return;
    for(int i = 0; i < c.count; ++i) { ... }
}

void main() {
    Class c = null;
    // ... ... ... code ...
    DoSomething(c);
}

With Spec#, you are marking DoSomething to say "the parameter c cannot be null". That seems like a good feature to have, as it means I don't need the first line in the DoSomething() method (an easy line to forget, and one that contributes nothing to the actual logic of DoSomething()).
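As a rough sketch of how I understand that would look (I haven't verified the exact Spec# syntax, but the non-null requirement is expressed on the parameter itself, roughly like this):

// Spec#-style: the "!" marks c as non-nullable. The compiler rejects any
// call site that cannot prove its argument is non-null, so the guard
// clause inside the method is no longer needed.
void DoSomething(Class! c)
{
    for(int i = 0; i < c.count; ++i) { ... }
}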

Chris Shaffer
+3  A: 

I don't understand your example. If your "= new Class()" is just a placeholder standing in for the absence of null, then it's (to my eyes) obviously a bug. If it's not, then the real bug is that the "..." didn't set the contents correctly, and that bug is exactly the same in both cases.

An exception that shows you that you forgot to initialize c will tell you at what point it isn't initialized, but not where it should have been initialized. Similarly, a loop that silently never runs will (implicitly) tell you where a nonzero .count was needed, but not what should have been done about it, or where. I don't see either one as being any easier on the programmer.

I don't think the point of "no nulls" is to simply do a textual find-and-replace and make them all into empty instances. That's obviously useless. The point is to structure your code so your variables are never in a state where they point to useless/incorrect values, of which NULL is simply the most common.
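A tiny sketch of what I mean (C#-style, with hypothetical LoadClass/Process helpers): instead of declaring a variable early and filling it in "later", declare it at the point where you actually have a meaningful value for it.

void Fragile()
{
    Class c = null;          // placeholder; c is "alive" here but meaningless
    // ... other work ...
    c = LoadClass();         // hypothetical helper that builds a real Class
    Process(c);              // hypothetical consumer
}

void Better()
{
    // ... other work ...
    Class c = LoadClass();   // c never exists without a real value
    Process(c);
}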

Ken
So if a language implements non-nullable types, what does a statement like "Class c;" set c to? (I totally agree with your comments by the way)
zildjohn01
My first thought: it shouldn't be allowed. Logically it makes no sense to say "make a slot which must store a Foo (but leave it empty)". This probably requires everything-is-an-expression (like Lisp or Ruby); I don't know how it would work if you had to execute sequential statements to do the assignment.
Ken
Looking back: think SQL. If you have a "NOT NULL" column and try to insert a record without specifying a non-null value for it, it simply raises an error.
Ken
@Ken, so in that case, on every assignment code would be generated to make sure we're not setting it to null, raising an exception at that point. Is that truly more desirable than, say, checking against null manually at a key point and, as a programmer, making an informed decision about whether to raise an exception or to account for it in the flow of the program?
Jason D
A: 

As I see it there are two areas where null is used.

The first is the absence of a value. For example, a boolean can be true or false, or the user hasn't yet chosen a setting, hence null. This is useful and a good thing, but it was arguably implemented poorly originally, and there is now an attempt to formalise that use. (Should there be a second boolean to hold the set/unset state, or should null be part of a three-valued logic?)
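In C# terms, the value-type half of this already exists as Nullable<T>; a bool? models exactly the "true, false, or not chosen yet" case without a second flag variable (ApplySetting and UseDefault below are hypothetical helpers):

bool? emailNotifications = null;      // the user hasn't chosen a setting yet

if (emailNotifications.HasValue)
    ApplySetting(emailNotifications.Value);
else
    UseDefault();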

The second is null in the null-pointer sense. This is more often than not a program error, i.e. an exceptional situation rather than an intended state. It should fall under the umbrella of formal exceptions as implemented in modern languages, that is, a NullException being caught via a try/catch block.
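In C#, for example, that is the NullReferenceException you can already catch today, although catching it only reports the bug rather than preventing it (FindClass below is a hypothetical lookup):

try
{
    Class c = FindClass();                 // may (incorrectly) return null
    for(int i = 0; i < c.count; ++i) { ... }
}
catch (NullReferenceException)
{
    // Reaching this point means a programming error: c was null when it
    // never should have been.
    Console.Error.WriteLine("Bug: dereferenced a null Class reference.");
}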

So, which of these are you interested in?

Conor OG
#2.
zildjohn01
A: 

The idea of non-null types is to let the compiler, rather than your client, find bugs. Suppose you add to your language two type specifiers, @nullable (may be null) and @nonnull (never null); I'm using Java annotation syntax.

When you define a function, you annotate its arguments. For instance, the following code will compile

int f(@nullable Foo foo) {
  if (foo == null) 
    return 0;
  return foo.size();
}

Even though foo may be null on entry, the flow of control guarantees that foo is non-null when you call foo.size().

But if you remove the check for null, you will get a compile-time error.

The following will compile too, because foo is non-null on entry:

int g(@nonnull Foo foo) {
  return foo.size(); // OK
}

However, you won't be able to call g with a nullable pointer:

@nullable Foo foo;
g(foo); // compiler error!

The compiler does flow analysis for every function, so it can detect when a @nullable becomes @nonnull (for instance, inside an if statement that checks for null). It will also accept a @nonnull variable definition provided it is immediately initialized.

@nonnull Foo foo = new Foo();

There's much more on this topic in my blog.

Bartosz Milewski
A: 

I'm currently working on this topic in C#. .NET has Nullable<T> for value types, but the inverse feature doesn't exist for reference types.

I created a NotNullable type for reference types, moving the problem from ifs (no more scattered null checks) into the type domain. The trade-off is that violations surface as exceptions at run time rather than as errors at compile time.
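A minimal sketch of the idea (simplified; not the full implementation):

using System;

public struct NotNullable<T> where T : class
{
    private readonly T value;

    public NotNullable(T value)
    {
        if (value == null)
            throw new ArgumentNullException("value");
        this.value = value;
    }

    public T Value
    {
        get
        {
            // default(NotNullable<T>) bypasses the constructor, so guard here too.
            if (value == null)
                throw new InvalidOperationException("NotNullable<T> was never assigned.");
            return value;
        }
    }

    // Implicit conversions keep the wrapper out of the way at call sites.
    public static implicit operator NotNullable<T>(T value)
    {
        return new NotNullable<T>(value);
    }

    public static implicit operator T(NotNullable<T> wrapper)
    {
        return wrapper.Value;
    }
}

// The null check happens once, at the boundary, instead of being repeated
// inside every method that receives the value.
void DoSomething(NotNullable<Class> c)
{
    for(int i = 0; i < c.Value.count; ++i) { ... }
}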

bloparod
+2  A: 

It's a little odd that the response marked as the answer in this thread actually highlights the problem with null in the first place, namely:

I've also found that most of my NULL pointer errors come from forgetting to check the return values of the string.h functions, where NULL is used as an indicator.

Wouldn't it be nice if the compiler could catch these kinds of errors at compile time, instead of runtime?

If you've used an ML-like language (SML, OCaml, and to some extent F#) or Haskell, reference types are non-nullable. Instead, you represent a "null" value by wrapping it in an option type. In this way, you actually change the return type of a function if it can return null as a legal value. So, let's say I wanted to pull a user out of the database:

let findUser username =
    let recordset = executeQuery("select * from users where username = @username")
    if recordset.getCount() > 0 then
        let user = initUser(recordset)
        Some(user)
    else
        None

findUser has the type val findUser : string -> user option, so the return type of the function tells you outright that it may not produce a user. To consume the result, you need to handle both the Some and None cases:

match findUser "Juliet Thunderwitch" with
| Some x -> print_endline "Juliet exists in database"
| None -> print_endline "Juliet not in database"

If you don't handle both cases, the code won't even compile. So the type system guarantees that you'll never get a null-reference exception, and it guarantees that you always handle the "no value" case. And if a function returns a plain user, it's guaranteed to be an actual instance of an object. Awesomeness.

Now we see the problem in the OP's sample code:

class Class { ... }

void main() {
    Class c = new Class(); // set to new Class() by default
    // ... ... ... code ...
    for(int i = 0; i < c.count; ++i) { ... }
}

Initialized and uninitialized objects have the same datatype, so you can't tell the difference between them. Occasionally the null object pattern can be useful, but the code above demonstrates that the compiler has no way to determine whether you're using your types correctly.
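For comparison, here is roughly what that guarantee could look like in a C#-style language, using a hypothetical Option<T> type (not part of the .NET libraries; just a sketch of the ML idea above, with FindUser as a stand-in lookup):

using System;

// A value is either Some(x) or None, and the only way to reach x is to
// say what should happen in both cases.
public abstract class Option<T>
{
    public abstract R Match<R>(Func<T, R> ifSome, Func<R> ifNone);

    public sealed class Some : Option<T>
    {
        private readonly T value;
        public Some(T value) { this.value = value; }
        public override R Match<R>(Func<T, R> ifSome, Func<R> ifNone) { return ifSome(value); }
    }

    public sealed class None : Option<T>
    {
        public override R Match<R>(Func<T, R> ifSome, Func<R> ifNone) { return ifNone(); }
    }
}

public class User { }

public static class Demo
{
    // Stand-in for the database lookup; returns None instead of null.
    static Option<User> FindUser(string username)
    {
        return new Option<User>.None();
    }

    public static void Main()
    {
        // The compiler forces the caller to supply both branches.
        string report = FindUser("Juliet Thunderwitch").Match(
            user => "Juliet exists in database",
            () => "Juliet not in database");
        Console.WriteLine(report);
    }
}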

Juliet