views: 664
answers: 8

Should you always check parameters and throw exceptions in .NET when the parameters are not what you expected? E.g. null objects or empty strings?

I started doing this, but then thought that this will bloat my code a lot if it’s done on every single method. Should I check parameters for both private and public methods?

I end up throwing a lot of ArgumentNullException("name") exceptions, even though the code handling the exception can't really do anything different programmatically, since there is no guarantee that "name" will not change in the future.

I assume this info is just helpful when viewing a log full of exception information?

Is it best practice to always "plan for the worst"?

+3  A: 

It depends on the consumer of your class/method. If it's all internal, I would say it's not as important. If you have unknown/3rd-party consumers, then yes you would want extensive checking.

John Sheehan
+1  A: 

My philosophy for this situation is to notify and keep on going (when applicable). The pseudo-code would be:

if value == not_valid then
#if DEBUG
  log failure
  value = a_safe_default_value
#else // RELEASE
  throw
#endif
end

This way you can easily iterate during development, and have users test your application without it becoming an exercise in frustration.
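The pseudo-code above translates directly to C# conditional compilation. A minimal sketch (the `ParseTimeout` method and its default value are hypothetical, just to make the pattern concrete):

```csharp
using System;
using System.Diagnostics;

public static class Config
{
    // DEBUG builds: log and continue with a safe default so iteration is fast.
    // RELEASE builds: fail fast so invalid input is never silently accepted.
    public static int ParseTimeout(string raw)
    {
        if (!int.TryParse(raw, out int timeout) || timeout <= 0)
        {
#if DEBUG
            Debug.WriteLine($"Invalid timeout '{raw}', falling back to default.");
            return 30; // a safe default for development
#else
            throw new ArgumentException($"Invalid timeout: '{raw}'", nameof(raw));
#endif
        }
        return timeout;
    }
}
```

The DEBUG/TRACE symbols are defined per build configuration, so the same source yields both behaviours without runtime checks.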

Robert Gould
+5  A: 

Nothing's worse than chasing down an 'object reference not set to an instance of an object' message. If your code is sufficiently complex, it's difficult to know what's failing - especially in production systems and especially if it's in a rare boundary condition. An explicit Exception goes a long way in troubleshooting those issues. It's a pain, but it's one of those things you won't regret if something bad does happen.

Corbin March
+12  A: 

My two cents: all your public methods should always check the validity of the parameters being passed in. This is typically called "design by contract" and is a great way to catch errors quickly and avoid propagating them through your private functions, which many would argue (including myself) shouldn't be unit tested directly. As for throwing exceptions: if your function or program can't correct the error itself, it should throw, because it has been put into an invalid state.

IAmCodeMonkey
+6  A: 

For public methods, then yes: it is definitely necessary to check your arguments; for internal/private calls, Eric Lippert might categorise them as "boneheaded" (here); his advice is not to catch them... fix the code!

To avoid code bloat, you might want to consider AOP, for example PostSharp. To illustrate, Jon Skeet has a PostSharp attribute that does null-argument checking, here. Then (to quote his example), you can simply attribute the method:

[NullArgumentAspect("text")]
public static IEnumerable<char> GetAspectEnhancedEnumerable(string text)
{ /* text is automatically checked for null, and an ArgumentNullException thrown */ }

Another handy trick here might be extension methods; extension methods have the curious feature that you can call them on null instances. So you can do the following (the use of generics instead of "object" means you won't accidentally call it by boxing a value-type):

public static class NullGuard
{
    public static void ThrowIfNull<T>(this T value, string name) where T : class
    {
        if (value == null) throw new ArgumentNullException(name);
    }
}
// ...
stream.ThrowIfNull("stream");

And you can do similar things with out-of-range, etc.
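For example, a range-check extension following the same pattern might look like this (a sketch; the method name and messages are my own, not from Marc's post):

```csharp
using System;

public static class RangeGuard
{
    // Analogous to ThrowIfNull, but for numeric range validation.
    public static void ThrowIfOutOfRange(this int value, int min, int max, string name)
    {
        if (value < min || value > max)
            throw new ArgumentOutOfRangeException(name, value,
                $"Expected a value between {min} and {max}.");
    }
}
// ...
// pageSize.ThrowIfOutOfRange(1, 100, "pageSize");
```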

Marc Gravell
+1  A: 

The approach I take is to check arguments and throw exceptions on publicly visible members, and by "publicly visible" I mean outside assembly boundaries (so any public, protected or protected internal method on a public class). This is because you (typically) design an assembly to function as an autonomous unit, so anything within the confines of the assembly should follow the rules of how to call anything else in there.

For any non-publicly visible members (i.e. internal or private members or classes) I use Debug.Assert to perform the checking instead. This way if any of the callers within the assembly violate the contract you find out immediately at development/testing time, but you have no performance overhead in the final deployed solution because these statements are removed in RELEASE builds.
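A quick sketch of this split (the `OrderProcessor` class is hypothetical): Debug.Assert calls are marked `[Conditional("DEBUG")]`, so the compiler removes them entirely from RELEASE builds.

```csharp
using System.Diagnostics;

internal class OrderProcessor
{
    // Internal member: Debug.Assert documents the contract for callers inside
    // the assembly and fires during development/testing, but compiles away
    // in RELEASE builds, so the deployed code pays no cost.
    internal decimal ApplyDiscount(decimal price, decimal rate)
    {
        Debug.Assert(price >= 0, "price must be non-negative");
        Debug.Assert(rate >= 0 && rate <= 1, "rate must be in [0, 1]");
        return price * (1 - rate);
    }
}
```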

Greg Beech
A: 

Well, it depends. If your code is going to pick up the null anyway and throw an exception at that point, it probably makes more sense to ensure that you have sensible clean-up code. But if the problem might not be detected otherwise, or clean-up may be long-running, or there might be an out-of-process call (e.g. to a database), you are probably better off checking up front than incorrectly changing the world and then having to change it back.

Marcin
+1  A: 

Much of my experience is with relatively constrained systems, where code bloat of this kind is undesirable. So my own instinct is to either use debug-only assertions, or to leave it out entirely. You'd hope that any possible invalid inputs will occur during testing of the caller who gives you the bad values, so provided that you test in debug mode as well as release mode you'll see the diagnostics. Otherwise, you'll debug the crash when it eventually happens.

If code size and performance don't matter (and in almost all code a simple null or range check won't affect performance anyway), then the more asserts you leave in your code in release mode, the more chance you have of diagnosing faults without needing to re-create the bug in test mode. This can be a big time saver. In particular, if your product is a library then a significant proportion of "fault" reports are due to customer errors, so no amount of pre-release testing can prevent them occurring in the wild. The sooner you can prove to the customer that their code is wrong, the sooner they can fix it and you can get back to finding your own bugs.
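In .NET, leaving checks enabled in release builds can be done with Trace.Assert, which (unlike Debug.Assert) remains active as long as the TRACE constant is defined, and Visual Studio defines TRACE in both Debug and Release configurations by default. A sketch with a hypothetical helper:

```csharp
using System.Diagnostics;

public static class Invariants
{
    // Trace.Assert survives RELEASE builds (it is conditional on TRACE, not
    // DEBUG), so violations can be diagnosed in the field without having to
    // re-create the bug under a debugger.
    public static void CheckIndex(int index, int count)
    {
        Trace.Assert(index >= 0 && index < count,
            $"index {index} out of range [0, {count})");
    }
}
```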

Still, in C/C++ I find that the specific case of checking for null pointers is only a minor help. If someone passes you a pointer, then the full validity condition isn't "must not be null". It needs to point to memory which is readable (perhaps also writeable) by the current process out to a certain size, and contains the correct type of object, possibly in a certain subset of all possible states. It needs not to have been freed, not to have been trashed by a buffer overrun elsewhere, probably not to be concurrently modified by another thread, etc. You aren't going to test all that at method entry, so you can still miss invalid parameters. Anything which leads you or other programmers to think "this pointer isn't null so it must be valid", because you've tested only one small part of the validity condition, is misleading.

If you're passing by pointer at all, then you are already in territory where you need to trust the caller not to give you garbage. Rejecting one particular instance of garbage still leaves you trusting the caller not to give you any of the myriad other kinds of garbage they can conjure that are harder to detect. If you find that null pointers are a common kind of garbage from your particular callers, then by all means test for them, since it saves time diagnosing bugs elsewhere in the system. It depends on an assessment whether finding bugs in your caller's code with the symptom "passes a null pointer to me" is worth bloating your own code (perhaps in binary size, and certainly in source cruft): if such bugs are rare then you're probably wasting time and screen real estate checking for them.

Of course in some languages you don't pass by pointer and the caller has only limited opportunities to corrupt memory, so there's less scope for garbage. But in Java, for instance, passing the wrong object is still a more common programming error than passing an erroneous null. Nulls are in any case usually fairly easy to diagnose if you leave them to the runtime to spot, and look at the stacktrace. So the value of null-checking is quite limited even there. In C++ and C# you can use pass-by-reference where nulls would be prohibited.

The same goes for any other specific invalid input you might test for, and any language. Full pre- and post-condition testing (if possible), is of course another matter, since if you can test the entire call contract then you're on much firmer ground. And if you can use weaving or whatever to assert contracts without adding to source code of the function itself, that's even better.

Steve Jessop