I would use an assertion if null pointers are not allowed. If you throw an exception for null pointers, you effectively allow them as arguments, because you specify behavior for such arguments. If you don't allow null pointers but still receive one, then some nearby code definitely has a bug. So in my opinion it does not make sense to "handle" it at some higher level.
Either you allow callers to pass null pointers and handle that case by throwing an exception, letting the caller react properly (or let the exception propagate, as the caller wishes), or you don't allow null pointers and assert them, possibly crashing in release mode (undefined behavior) or using a designated assertion macro that stays active in release mode. The latter philosophy is taken by functions such as strlen, while the former is taken by functions such as vector<>::at: vector<>::at explicitly dictates the behavior for out-of-bound indices, while strlen simply declares the behavior undefined when a null pointer is passed.
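To make the two contracts concrete, here is a minimal sketch; count_bytes and count_bytes_checked are made-up names, not standard functions. The first variant asserts its precondition, strlen-style, so passing null in a release build is undefined behavior; the second specifies the behavior for null by throwing, vector<>::at-style:

#include <cassert>
#include <cstddef>
#include <stdexcept>

// strlen-style contract: null is simply not allowed, so we assert.
// In release builds (NDEBUG), passing null is undefined behavior.
std::size_t count_bytes(const char *s) {
    assert(s != nullptr && "count_bytes: null pointer not allowed");
    std::size_t n = 0;
    while (s[n] != '\0') ++n;
    return n;
}

// vector<>::at-style contract: null is an allowed input whose
// behavior is specified -- the function throws.
std::size_t count_bytes_checked(const char *s) {
    if (s == nullptr)
        throw std::invalid_argument("count_bytes_checked: null pointer");
    return count_bytes(s);
}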
In the end, how would you "handle" null pointers anyway?
try {
    process(data);
} catch(NullPointerException &e) {
    process(getNonNullData());
}
That's plain ugly, in my opinion. If the function instead asserts that the pointer is non-null, such code becomes
if(!data) {
    process(getNonNullData());
} else {
    process(data);
}
I think this is far superior, as it doesn't use exceptions for control flow (here, supplying a non-NULL source as the argument). And if you don't handle the exception at all, then you might as well fail right away with an assertion in process, which will point you directly to the file and line where the crash occurred (and with a debugger, you can actually get a stack trace).
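For illustration, a sketch of what that assertion inside process might look like; the const char * signature is just an assumption here:

#include <cassert>

void process(const char *data) {
    // With NDEBUG not defined, a failing assert prints the expression,
    // file, and line, then calls abort() -- exactly the diagnostic
    // described above.
    assert(data != nullptr && "process: data must not be null");
    // ... do the actual work with data ...
}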
In my applications, I always take the assert route. My philosophy is that null pointer arguments should either be handled completely by non-exceptional paths, or be asserted to be non-NULL.
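If you need the check to survive release builds, the designated assertion macro mentioned earlier can be as simple as the following sketch (ALWAYS_ASSERT is a made-up name, not a standard facility):

#include <cstdio>
#include <cstdlib>

// Unlike the standard assert, this is not compiled out when NDEBUG is set.
#define ALWAYS_ASSERT(cond)                                            \
    do {                                                               \
        if (!(cond)) {                                                 \
            std::fprintf(stderr, "Assertion failed: %s (%s:%d)\n",     \
                         #cond, __FILE__, __LINE__);                   \
            std::abort();                                              \
        }                                                              \
    } while (0)

Dropping ALWAYS_ASSERT(data != nullptr); at the top of process then gives the same fail-fast behavior even with NDEBUG defined.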