In previous large-scale applications requiring high robustness and long up-times, I've always been in favor of validating a pointer function argument when it was documented as "must never be NULL". If the argument actually was NULL, I'd throw an std::invalid_argument exception (or similar) in C++, or return an error code in C.
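For illustration, this is roughly the pattern I've been following so far (a minimal sketch; Widget and processWidget are made-up names standing in for our real API):

```cpp
#include <stdexcept>

// Made-up stand-in for one of our real types.
struct Widget {
    void doWork() { /* ... */ }
};

// Documented contract: widget must never be NULL.
void processWidget(Widget* widget)
{
    if (widget == nullptr) {
        // Fail loudly at the API boundary with a descriptive exception.
        throw std::invalid_argument("processWidget: widget must not be NULL");
    }
    widget->doWork();
}
```

The C variant of the same idea returns an error code instead of throwing.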
However, I'm starting to think that it might be better to just let the application blow up immediately at the first NULL pointer dereference in that same function - the crash dump file would reveal what happened - and let a thorough testing process find the bad function calls.
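In other words, something like this (same made-up Widget as above; the point is just that a NULL argument crashes right where the bad call is made):

```cpp
// Same made-up Widget as in the sketch above.
struct Widget {
    void doWork() { /* ... */ }
};

// Documented contract: widget must never be NULL. No check is made:
// a NULL argument crashes right here, and the crash dump points at
// this function and its caller.
void processWidget(Widget* widget)
{
    widget->doWork();
}
```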
One problem with not checking for NULL and letting the application blow up is that if the pointer isn't actually dereferenced in that function, but rather stored for later use, then the eventual blow-up will be out of context and much harder to diagnose.
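For example (again a made-up sketch; Logger and Sink aren't our real classes), the bad argument is accepted silently in setSink(), and the crash only happens later in log(), far away from the caller that passed NULL:

```cpp
// Made-up stand-ins to show the "stored for later use" problem.
struct Sink {
    void write(const char* msg) { /* ... */ }
};

class Logger {
public:
    // Documented contract: sink must never be NULL.
    // Nothing is dereferenced here, so a NULL slips through silently.
    void setSink(Sink* sink) { sink_ = sink; }

    // The crash happens here, possibly much later and in a completely
    // different part of the program than the bad setSink() call.
    void log(const char* msg) { sink_->write(msg); }

private:
    Sink* sink_ = nullptr;
};
```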
Any thoughts or best practices on this out there?
Edit 1: I forgot to mention that much of our code consists of libraries for 3rd-party developers who may or may not know about our internal error-handling policies. But the functions are still documented properly!