views: 942 · answers: 16

I've often found that when a programmer, or the person assigning the task, doesn't really understand how a solution could work, they kind of randomly add stuff until it works.

Examples:

Repainting a window which for some reason isn't painted as the programmer would like it:

Invalidate();
Revalidate();
ProcessMessages(); 
Update();    
Repaint();
Repaint();
ProcessMessages(); 
Repaint();

Over-cautiousness:

procedure test1(x: Boolean);
begin
  case x of
    True: ;  // do something
    False: ; // do something else
  else
    raise Exception.Create('Invalid value.'); // just to be sure
  end;
end;

procedure test2(x: Integer);
var
  y: Integer;
begin
  y := Abs(x);
  if y >= 0 then
  begin
    // do something
  end;
end;

Although the over-cautious coding practices in particular lead to compiler warnings in most languages, I've actually seen all of the above in production code!

In most cases this kind of coding is defended by the programmer and/or the boss. The reasons always come down to responses like these:

  • Well, does it hurt if we double check? Better be safe than sorry!
  • It's defensive programming, didn't they teach that at the university?!

Unfortunately I'm out of good reasons not to do this, although I still believe this is really really bad style, which can have bad repercussions.

Are there hard facts I can present that this style has bad repercussions eventually?

EDIT: Thank you for the good suggestions on how to get rid of this style. But I'm still interested in reasons I can present to my co-workers to explain, and possibly convince them, why this is bad and why it's in their best interest not to be paranoid.

+28  A: 

Get them to write unit tests to cover each case.
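
For example, a minimal DUnit-style sketch of what such tests might look like. This assumes the question's test1 lives in a unit the tests can reach; UnitUnderTest and the test names are hypothetical:

unit Test1Tests;

interface

uses
  TestFramework; // DUnit

type
  TTest1Tests = class(TTestCase)
  published
    // A Boolean has exactly two values, so these two tests already cover
    // every reachable branch of test1 - no test can ever exercise the
    // paranoid "else raise", which is exactly the point.
    procedure TestTrue;
    procedure TestFalse;
  end;

implementation

uses
  UnitUnderTest; // hypothetical unit containing test1

procedure TTest1Tests.TestTrue;
begin
  test1(True);
  Check(True, 'test1(True) completed without raising');
end;

procedure TTest1Tests.TestFalse;
begin
  test1(False);
  Check(True, 'test1(False) completed without raising');
end;

initialization
  RegisterTest(TTest1Tests.Suite);

end.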

Preet Sangha
+21  A: 

Force 100% branch coverage for unit tests, or the build fails. He won't be able to get test1 to throw the exception, or to make the condition in test2 evaluate to false.

maciejkow
100% branch coverage, that'll teach them :)
DR
Yes, it's cruel, yet feasible - as long as you don't have such "impossible" code.
maciejkow
100% code coverage does not guarantee that your code is 100% correct: http://stackoverflow.com/questions/555807/what-type-of-errors-could-my-code-still-contain-even-if-i-have-100-code-coverage/555824#555824
Anton Gogolev
I never said it did. I just wrote that impossible code cannot be reached and will never get 100% coverage.
maciejkow
I think the idea was to teach them that they can't write test cases to cover code that cannot reasonably be reached, not to prove 100% correctness.
Ron Warholic
@Sid Farkus: exactly.
maciejkow
Doing this is just a waste of time; instead, send a mail with a link to this Stack Overflow question.
Reno
+7  A: 
Repaint();
Repaint();
ProcessMessages(); 
Repaint();

This is just terrible programming. Code reviews and training must be applied here.

Galwegian
It's what I call "Rubber Chicken" programming, à la "Well, it worked while I was shaking this rubber chicken, so I know if I shake the rubber chicken it'll work". It's also treating symptoms, not the underlying problem: the problem they solved was "How do I work around a window painting problem?", not "Why isn't the window drawing properly?" . . . really twists my melon . . .
Binary Worrier
A: 
  • If they are so worried something won't work first time round, then they should re-write that something until they're confident it WILL work.

  • If there is a reasonable chance that something won't work first time round, and there's nothing they can do to improve those chances, then the correct solution is to handle the failure explicitly with a try / catch (see the sketch below) - not just to call it twice.
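
A rough Delphi-flavoured sketch of that second point (DoSomethingFlaky and LogFailure are hypothetical placeholders, and in Delphi the construct is try..except rather than try/catch):

uses
  SysUtils; // for the Exception class

procedure AttemptOperation;
begin
  try
    DoSomethingFlaky;        // the call that might legitimately fail (hypothetical)
  except
    on E: Exception do
      LogFailure(E.Message); // handle the failure deliberately instead of calling it twice (hypothetical)
  end;
end;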

TomFromThePool
+5  A: 

Your point is correct, but as far as I have observed, such things are ALSO caused by a lack of proper technical knowledge. I mean, I have come across code that's just plain stupid. How can someone write something like this -

private bool IsValid(bool isValid)
{
    if(isValid == true) return true;
    else if(isValid == false) return false;
}

The same is the case for both of the examples you gave. The programmer MIGHT not know what each function call does (in the first case) or the basic fundamentals of a switch/case statement (in the second one). Isn't that so?
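
For instance, once you trust that a Boolean can only ever be True or False, the question's first over-cautious example collapses to something like this (DoSomething and DoSomethingElse are just placeholder names):

procedure test1(x: Boolean);
begin
  if x then
    DoSomething       // placeholder for the True branch
  else
    DoSomethingElse;  // placeholder for the False branch
end;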

Kirtan
I must admit I've not seen anything quite so extreme, but in languages where a bool can be evaluated directly, comparing it against true or false should push a coder towards a meaningful name: if(somethingMeaningful) rather than if(x == true). In fact I quite often create a temp var to make the logic clearer in if-type statements.
Preet Sangha
The language is C#; it could also have been done directly with a comparison of the `isValid` variable.
Kirtan
I always assume unfinished refactoring has led to this scenario rather than stupidity, but maybe I'm too nice ;)
mattmanser
+3  A: 
  1. Let them write unit tests
  2. Introduce code reviews (maybe you can discuss such code snippets and show them how to do it better)
  3. Introduce the DRY rule (Don't Repeat Yourself)
Alexander
@2: I don't agree with the "show them how to do it better". Code reviews should be an eye-to-eye meeting, not a master-apprentice meeting - at least if you'd like more than one code review session. It seems to me that the premise of the question is that the coders are already on the defensive; simply lecturing them on their bad coding habits won't do any good IMO. Instead, the code reviews should discuss what the consequences of their style are and whether that's what they want. If it really is what they want, the code reviews in themselves won't change their minds anyway.
Steen
Yes, do the code review in an eye-to-eye session. Then pick a few code snippets from your code reviews (you can anonymize them a bit) and discuss them in a team meeting.
Alexander
+10  A: 

Yes, this is a problem. At the very least, this type of programming makes the code hard to understand and maintain. If there is a real case that needs checking or catching, then it is often hard to see whether it is really tested. Even a simple task like stepping through the code with a debugger can become tedious.

I struggled with a junior developer who wrote code like this for a long time. I couldn't convince him with sane arguments that writing redundant checks or extra steps was not the same as defensive programming. Eventually, I found a solution:

I told the developer that performance was a priority for his part of the code. It turned out that the easiest way to quickly improve performance was to delete the extra checks and repeated reinitialisations ;-). This trick worked like a treat and he was soon trained out of his "defensive coding" habits!

Stewart
But now he optimises before he needs to :-)
Preet Sangha
@Preet, You joke. But actually that is partly true ;-) That is *much* less of a problem though!
Stewart
+2  A: 

Reasons why over-cautiousness is bad include:

  1. It may make the code unnecessarily slow.
  2. It will make the code harder to understand, which is likely to mean that more time is spent and more errors are made when someone else has to maintain it.
  3. It is evidence that they really don't know what they are doing ... or they are too lazy to spend time thinking about and rewriting their code.

Bend the ear of your team leader / manager and see if you can get them to introduce code reviews and / or pair programming.

Stephen C
+10  A: 

Programming by coincidence http://www.pragprog.com/the-pragmatic-programmer/extracts/coincidence

Suppose Fred is given a programming assignment. Fred types in some code, tries it, and it seems to work. Fred types in some more code, tries it, and it still seems to work. After several weeks of coding this way, the program suddenly stops working, and after hours of trying to fix it, he still doesn’t know why. Fred may well spend a significant amount of time chasing this piece of code around without ever being able to fix it. No matter what he does, it just doesn’t ever seem to work right.

Fred doesn't know why the code is failing because he didn't know why it worked in the first place. It seemed to work, given the limited "testing" that Fred did, but that was just a coincidence. Buoyed by false confidence, Fred charged ahead into oblivion. Now, most intelligent people may know someone like Fred, but we know better. We don't rely on coincidences - do we?

+1  A: 

Play them at their own game:

  • Declare that every heap allocation should be put in a try..catch block to check for OutOfMemoryException errors - which is logged to disk/sent to a syslog server/etc.
  • Check every variable allocation just to make sure it "takes" - allocate twice if need be.
  • For loops shouldn't be trusted, because you once saw one "miss a step" - so do all your loops using gotos.
  • Store SHA1 hashes of every string. Compare/update the hashes when changing the string values in "secure" parts of the software - to make sure the string isn't changed inadvertently.
  • Perform integer equality tests by casting to a float and comparing using an epsilon, because you once heard a story where a large value of 2 caused a major incident at [insert nearest nuclear power station here].

If they can't see that some of these tests are just a little bit crazy then there's no hope.

gridzbi
@gridzbi: Actually, when I was an undergraduate, we learned programming in Fortran on a CDC 6400. The Fortran implementation had an interesting bug: if you called a subroutine with a call-by-reference parameter that updated the parameter, and you called it with a literal (e.g. 1.0) argument, you would change the value of the literal for the entire program!!!
Stephen C
That happened in other implementations. A TRS-80 programmer (IIRC Lance Micklus), using Microsoft FORTRAN, passed "1.0" (or some such value) to a subroutine, which passed it to another, which changed it. He solved it by setting a variable to 1.0, and concluded that Murphy was right: "Constants aren't, and variables won't".
David Thornley
+2  A: 

This is a really annoying problem that I have faced several times. Trying to convince someone by presenting them with various principles, or by arguing it out, just proved frustrating and fruitless. In the end I took two approaches:

  • Undid their code when they went to lunch/home.
  • Changed tack and enthusiastically agreed with them - this often unnerved them and made them think twice.
Nosrama
+2  A: 

The first example provided is a classic case of Programming by coincidence so there's your ammo against that one.

Cases 2 and 3 are just silly in most contexts, unless they're test cases for some beta programming language in which the implementations of Abs and Boolean may have undefined behaviour.

HollyStyles
+1  A: 

I wrote a bit about defensive programming a while ago:

http://www.francisfish.com/what_defensive_programming_is_and_isnt_logging_the_right_t.htm

I think the suggestions above about forcing people to test all the code paths are pretty valid and will work, provided they're human.

Ghoti
+1  A: 

Every line of code is an opportunity for bugs. Writing lines of code that don't influence the behavior of the program increases bugs without any benefit.

I would lie in wait until a bug crops up that is directly attributable to code like this, then argue my case again. It's much easier with evidence in hand.

Joeri Sebrechts
+2  A: 

y := Abs(x);
if y >= 0 then

is perfectly sensible. Remember: MIN_INT = Abs(MIN_INT).
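
A quick way to see it, assuming a Delphi-style compiler on a two's-complement machine (with overflow checking enabled the same call raises EIntOverflow instead of wrapping):

program AbsMinInt;

{$APPTYPE CONSOLE}
{$OVERFLOWCHECKS OFF} // with {$Q+} this raises EIntOverflow instead

begin
  // -Low(Integer) does not fit in an Integer, so the negation wraps around
  // and Abs hands back a negative number.
  Writeln(Abs(Low(Integer))); // prints -2147483648
end.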

A: 

Steal half of their RAM after hours. If it takes a minute or so between each slather of nonsense code and the failure message they get after they compile and run it, they can't help but think about why the problems are occurring.

quillbreaker