These days I'm used to checking every single precondition for every function since I got the habit from an OS programming course back at uni.

On the other hand, in a software engineering course we were taught that a common precondition should be checked only once: for example, if a function delegates to another function, the first function should check the preconditions, and checking them again in the second is redundant.

I do see the redundancy point, but I certainly feel it's safer to always check them, plus you don't have to keep track of where they were checked previously.

What's the best practice here?

+1  A: 

In my experience, it depends on your encapsulation. If the inner function is private, you can be pretty sure that its preconditions have already been met.

I guess it's all about the Law of Demeter: talk only to your friends, and so on.

As a basis for best practice: if the call is public, you should check your inputs.
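
For example, a minimal Java sketch (the class and method names are made up for illustration): the public method validates its inputs, while the private helper trusts that they have already been checked.

    public class Greeter {
        // Public entry point: validate inputs here, where callers are
        // outside our control.
        public String greet(String name) {
            if (name == null || name.isEmpty()) {
                throw new IllegalArgumentException("name must be non-empty");
            }
            return buildGreeting(name);
        }

        // Private helper: trusts that the public caller has checked 'name'.
        private String buildGreeting(String name) {
            return "Hello, " + name + "!";
        }
    }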

Russ C
A: 

The best practice is to always check them.

Will
*Your* best practice is to always check them. ;-)
joseph.ferris
+5  A: 

I have seen no "hard and fast" rule on how to check preconditions, but I generally treat it like method documentation. If a method is publicly scoped, I assert that its preconditions are met. The logic is that public scope means you expect consumption on a broader scale, where you have less influence over the callers.

Personally, the effort to put assertions around private methods is something I reserve for "mission critical" methods, which would basically be ones that either perform a critical task, are subject to external compliance requirements, or are non-recoverable in the event of an exception. These are largely "judgment calls".

The time saved can be reinvested in thorough unit and integration tests to flush out these issues, and in tooling that helps enforce the quality of the input assertions as they would be consumed by a client, whether that client is a class under your control or not.
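
Something like this Java sketch is what I mean (the names are invented for illustration): the public method always asserts its preconditions, and only the "mission critical" private method gets a defensive assert of its own.

    import java.util.Objects;

    public class PaymentProcessor {
        // Public scope: always assert that preconditions are met.
        public void process(String accountId, long amountCents) {
            Objects.requireNonNull(accountId, "accountId must not be null");
            if (amountCents <= 0) {
                throw new IllegalArgumentException("amountCents must be positive");
            }
            debit(accountId, amountCents);
        }

        // Private, but "mission critical": a failure here would be
        // non-recoverable, so it is worth a defensive assert anyway.
        private void debit(String accountId, long amountCents) {
            assert amountCents > 0 : "caller must validate amountCents";
            // ... perform the non-recoverable debit ...
        }
    }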

joseph.ferris
+1  A: 

I think it depends on how the team is organized: check inputs which come from outside your team.

  • Check inputs from end-users
  • Check inputs from software components written by other teams
  • Trust inputs received from within your own component / within your own team.

The reason for this is support and maintenance (i.e. bug-fixing): when a bug report comes in, you want to determine as quickly as possible which component is at fault, i.e. which team to assign the bug to.

ChrisW
I think that makes a lot of sense. It very much depends on who you are coding for.
Paddy
+2  A: 

if a function delegates to another function, the first function should check the preconditions, and checking them again in the second is redundant.

What if you change the way those functions call each other? Or introduce new validation requirements in the second function, the one the first delegates to? I'd say it's safer to always check them.

luvieere
This is exactly what I thought, and thus I asked the question. Still, I'm not so sure now after seeing the other answers. So far I think joseph.ferris's approach makes a lot of sense.
Pin
@Pin It's a question of balance. Upon refactoring, some private functions may be moved and have their dependencies changed, in which case it's better not to check their preconditions, as those would be handled by the public methods that call them. On the other hand, formerly private methods may become public during, say, an API extension phase, and then they would need their preconditions checked.
luvieere
+1  A: 

I think best practice is to do these checks only if they're going to fail some day. They will help with the following.

Debugging

There's no point in checking them when several private functions in one module, maintained by a single person, exchange data. Of course, there are exceptions, especially if your language doesn't have a static type system, or your data are "stringly typed".

However, if you expose a public API, one day someone will violate your precondition. The further the person maintaining the calling module is from you (in organizational structure and in physical location), the more likely this is to happen. And when it happens, a clear statement of the precondition failure, specifying where it occurred, can save hours of debugging. The Law of Leaky Abstractions is still true...

QA

Precondition failures help QA debug their tests. If a unit test for a module causes the module to report a precondition failure, it means the test is incorrect, not your code. (Or that your precondition check is incorrect, but that's less likely.)

If one of your means of QA is static analysis, then precondition checks will also help, provided they follow a specific notation (for example, only these checks use the assert_precondition macro). In static analysis it's very important to distinguish incorrect input from errors in the source code.
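
Java has no macros, but a hypothetical dedicated helper (the Contracts class below is made up for illustration) gives the checks the same uniform, recognizable notation that a static-analysis rule can key on:

    public final class Contracts {
        private Contracts() {}

        // The single notation for all precondition checks; a static-analysis
        // rule can match on calls to this method.
        public static void assertPrecondition(boolean condition, String message) {
            if (!condition) {
                throw new IllegalArgumentException("Precondition failed: " + message);
            }
        }
    }

    // Usage: Contracts.assertPrecondition(index >= 0, "index must be non-negative");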

Documentation

If you don't have much time to write documentation, you can make your code aid the text that accompanies it. Clear and visible precondition checks, perceived as separate from the rest of the implementation, "document" the possible inputs to some extent. (Another way to document your code like this is to write unit tests.)

Pavel Shved
A: 

I've made a habit of distinguishing between checking and asserting preconditions, depending (as people pointed out in the comments) on whether a call comes from the outside (unchecked exception, may happen) or the inside (assert, shouldn't happen).

Ideally, the production system won't pay the runtime penalty of the asserts, and you could even use a Design By Contract(TM)-like mechanism to discharge the assertions statically.
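
A minimal Java sketch of that distinction (the names are made up for illustration). Java's assert statements are skipped unless the JVM runs with -ea, so production doesn't pay for the internal checks:

    public class Parser {
        // Outside input: check it, and throw an unchecked exception (may happen).
        public int parsePort(String text) {
            if (text == null || !text.matches("\\d+")) {
                throw new IllegalArgumentException("port must be a decimal number");
            }
            int port = Integer.parseInt(text);
            if (port < 1 || port > 65535) {
                throw new IllegalArgumentException("port out of range: " + port);
            }
            return port;
        }

        // Inside call: assert (shouldn't happen), disabled unless -ea is set.
        private String formatEndpoint(String host, int port) {
            assert port >= 1 && port <= 65535 : "caller must pass a valid port";
            return host + ":" + port;
        }
    }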

ShiDoiSi
