
I don't know if you read the recent post by Joel, but basically he says that you don't need unit testing for every unit or piece of code in your code base.

"And that strikes me as being just a little bit too doctrinaire about something that you may not need. Like, the whole idea of agile programming is not to do things before you need them, but to page-fault them in as needed." -- Joel

And I would say that most developers don't have full coverage anyway, yet there are some blogs out there touting 100% code coverage.

"Yeah. In fact, if you're making any kind of API, a plug in API, it is very important to separate things into interfaces and be very very contractual, and tightly engineered" -- Joel

"Right. The longer I think about this, the less I care about code hygiene issues (laughs). Because the unit of measure that really matters to me is, how quickly you can respond to real business needs with your code." -- Jeff

On Kent Beck:

"It doesn't seem like you could actually get any code written if you're spending all your time writing 8,000,000 unit tests, and every single dinky little class that you need to split a URL into four parts becomes an engineering project worthy of making a bridge, where you spend six months defining 1000 little interfaces" -- Joel.

My question is this: do you agree with Joel? Do you spend a lot of time writing unit tests with full coverage?

Here is my stance. I believe more in BDD, Behavior Driven Development. For me, I prototype a piece of code, establish some setup code, and then execute a bunch of driver routines against it. If something goes wrong, an exception or unexpected behavior, that is flagged as a fail; otherwise it is a pass.
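To make that concrete, here is a minimal sketch of what I mean by a driver routine; the Adder class is just a stand-in for whatever piece of code was prototyped:

// Minimal driver-routine sketch: set up the code under test, run it,
// and report pass/fail. Adder stands in for the prototyped code.
public class AdderDriver {
    static class Adder {
        int add(int a, int b) { return a + b; }
    }

    public static void main(String[] args) {
        Adder adder = new Adder(); // setup code
        try {
            int result = adder.add(1, 2);
            // An unexpected result, or any exception, is flagged as a fail.
            System.out.println(result == 3 ? "PASS" : "FAIL: expected 3, got " + result);
        } catch (Exception e) {
            System.out.println("FAIL: " + e);
        }
    }
}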

The JUnit style of testing:

assertEquals(a, b);

Just seems so unnatural for a large majority of developer testing. Don't get me wrong, I can see where JUnit-style testing is a good idea, but for a majority of business web applications I can't imagine trying to retrofit a unit test to every piece of code in the application.

I was looking at Scheme implementations in Java. Scheme has a well-known set of standards, and a lot of implementations attempt to be Scheme-compliant, so a JUnit test for Scheme compliance is a good idea. You could do something like this (where scheme is the implementation under test):

assertEquals("3", "(+ 1 2)")

If it fails, the Scheme implementation is not compliant: a very useful unit-testing metric.
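Fleshed out as a JUnit test it might look something like this, with SchemeInterpreter and its eval method as hypothetical stand-ins for whatever implementation is being checked:

import junit.framework.TestCase;

// Hypothetical compliance test; SchemeInterpreter.eval() stands in for
// the implementation being checked against the standard.
public class TestSchemeCompliance extends TestCase {
    private final SchemeInterpreter scheme = new SchemeInterpreter();

    public void testArithmetic() {
        assertEquals("3", scheme.eval("(+ 1 2)"));
        assertEquals("6", scheme.eval("(* 2 3)"));
    }

    public void testConditionals() {
        // R5RS: (if (> 3 2) 'yes 'no) evaluates to yes
        assertEquals("yes", scheme.eval("(if (> 3 2) 'yes 'no)"));
    }
}

Any failure is immediate evidence of non-compliance, which is exactly the kind of yes/no answer assert-style tests are good at.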

For something like a web application, you might have an action method 'process' that does a bunch of stuff: connects to a database, sets up HTML form variables, on and on. How are you going to unit test that? You can, and I am sure most will, but it will be so unnatural and won't keep up with the pace of the project. I can see where backend code (whatever it is) could be tested, but for the plug-and-play MVC web apps (think Spring MVC or Struts), I just don't see full test coverage, especially the JUnit kind.
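The best I can picture is splitting the logic away from the framework glue, something like this rough sketch (all names made up):

import javax.servlet.http.HttpServletRequest;

// Rough sketch: the plain-logic class is easy to unit test; the action
// method becomes thin glue that only integration tests need to cover.
public class TransferLogic {
    public long applyFee(long amountInCents) {
        return amountInCents + 150; // flat 150-cent fee, for illustration
    }
}

class TransferAction {
    private final TransferLogic logic = new TransferLogic();

    // Thin glue: pull values from the request, delegate, push results back.
    public void process(HttpServletRequest request) {
        long amount = Long.parseLong(request.getParameter("amount"));
        request.setAttribute("total", logic.applyFee(amount));
    }
}

But that still leaves the glue itself, and the database, untested by plain unit tests.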

Edited - To Charlie:

Let me get a little more specific. This is an example from Wikipedia on unit testing, and it is a good example and use of the 'junit' approach: the goal of the adder is to properly add two integers together. This might work as a good test of an interpreter.

import junit.framework.TestCase;

// JUnit 3 style: extending TestCase lets the runner pick up every
// method whose name starts with "test".
public class TestAdder extends TestCase {
    public void testSum() {
        Adder adder = new AdderImpl();
        assertTrue(adder.add(1, 1) == 2);
        assertTrue(adder.add(1, 2) == 3);
        assertTrue(adder.add(2, 2) == 4);
        assertTrue(adder.add(0, 0) == 0);
        assertTrue(adder.add(-1, -2) == -3);
        assertTrue(adder.add(-1, 1) == 0);
        assertTrue(adder.add(1234, 988) == 2222);
    }
}

But let's say you are writing a business web application. The ultimate goal is to display a user's bank account information to them: all bank transactions, their savings account information, their credit card info, rendered to the browser.

Ideally, like the adder application, I guess you could do:

assertTrue(getPage() == displayedCorrectly())

I guess that is at a higher level of granularity. That 'getPage' might include database calls to the bank account table, or invoking the server page and sending over the form values, or invoking the web framework and making sure the right parameters are established. Etc., etc. What 'unit' do you test in that case? And I read some TDD articles saying you normally are not supposed to write tests that include code making a remote connection (like a database server), writing to a file, etc.

Well, if you can't connect to the bank database because you hard-coded the wrong database name, that seems like a very valid thing to test for?

I am not implying anything; these are just points where I am confused about how to go about unit testing for more complex apps.

+3  A: 

I suppose it depends on how long a piece of rope you want.

On the one hand, TDD is properly understood as a design or specification activity: you're defining what has to be true for the code to work. In general, it doesn't appear empirically to have a lot of cost impact -- the extra effort, such as it is, is more than compensated by the reduction in debug time. What's more, by keeping the TDD-driven unit tests around, you're constantly retesting to see if you've done anything to violate the assumptions you made weeks or months ago.

On the other hand, the program is eventually going to need to do the long-term large scale behaviors; you need the larger scale tests as well.

It's worth remembering when you read any pundit (including me) that they are, to some extent, selling something. Kent Beck can be kind of doctrinaire about his methods; Joel seems to be somewhat similar.

The real answer, I suspect, is somewhere in the middle: there is a degree of unit testing that's too much, but TDD and unit testing improve things significantly when well used. That doesn't free you from the need to specify a behavioral contract and control your interfaces.

Added

Okay. There are a couple of points here. The first one is that you want TDD tests that tell you something interesting; there's a limit to the number of those tests. In your case, you wouldn't want to write all of those tests unless you believed, for some reason, that the case of add(2,2) was going to be significantly different from the case of add(1,2). Since the simplest thing that could possibly work is to define int add(int a, int b){ return a+b; }, the return on investment on those tests rapidly diminishes.

The second point is that you do indeed run into the issue of how to test more complicated components. There are a number of approaches, and lots of tools for them. Possible approaches include creating mock objects to simulate a complex resource like a database, and testing tools that interpret the output of some component in a sophisticated way, e.g., HttpUnit or RSpec. My experience is that as you get farther and farther up the chain, the actual testing is more like "behavioral testing" than "unit testing." So, for example, if you have a method intended to render a complex page, you might have unit tests for components that, say, render one cell in a table, and use some other kind of test for the whole page, working up to, of course, the acceptance test with the customer that precedes them signing the check.
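For instance, a hand-rolled mock of a database-backed repository might look like this; every name here is invented for illustration:

import java.util.Locale;
import junit.framework.TestCase;

// The unit under test depends only on an interface, so the test can
// substitute canned data for the real database.
interface AccountRepository {
    long balanceInCentsFor(String accountId);
}

class BalanceFormatter {
    private final AccountRepository repo;
    BalanceFormatter(AccountRepository repo) { this.repo = repo; }
    String display(String accountId) {
        return String.format(Locale.US, "$%.2f", repo.balanceInCentsFor(accountId) / 100.0);
    }
}

public class BalanceFormatterTest extends TestCase {
    public void testFormatsCentsAsDollars() {
        AccountRepository mock = new AccountRepository() {
            public long balanceInCentsFor(String accountId) { return 4200; } // canned value
        };
        assertEquals("$42.00", new BalanceFormatter(mock).display("any-id"));
    }
}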

I think the biggest point here is that you don't want to be consumed by the methodology. Hamming says in the frontispiece of his numerical analysis text that "the purpose of computing is insight, not numbers."

Similarly, the purpose of testing is to increase your confidence in the implementation, not simply to have large numbers of tests, or even necessarily to prove you have 100 percent test coverage (unless that's for contractual reasons).

In your example, the add(2,1) case gives you some confidence that the basic operation works; an add(0,-1) case might give you confidence that a particular potential edge case works correctly; going on from there with add(2,2), add(2,3), add(3,2), and so on doesn't increase your confidence particularly.

So the answer to your question may be "what tests and testing approaches give you the most confidence at the least incremental cost?" That's not TDD or BDD, that's thinking about the goals involved.

Charlie Martin
Thanks. And I want to say that some believe 'junit'/assert testing IS unit testing and TDD. For example, when I first encountered TDD a while ago, junit was the first thing people brought out. As I get more experienced, I see that junit (and its variants) can't be applied across the board to TDD.
Berlin Brown
I know it is strange, but some of us were a little confused about what TDD really means. I am not entirely picking on junit, but I am sure it is the first thing people think of when you start talking TDD (junit, nunit).
Berlin Brown
+1  A: 

The definition of BDD is much closer to that of TDD than it is to your description: "I prototype a piece of code, establish some setup code and then execute a bunch of these driver routines".

BDD is more or less what you call "JUnit style testing", wrapped in a somewhat different terminology, syntax and set of concerns. It still comes down to writing assertions that compare expected results to actual.

Morendil
+2  A: 

To answer your second question: I prefer to use TDD for any project exceeding about 20 lines of code.

There is no "retrofit" required at any point, because you write tests before writing the corresponding code.

In TDD there is no goal of any percentage of code coverage; rather the process, in competent hands, happens to yield a) high coverage, b) clean design, c) code that is robust against defects and regressions.

Morendil
+3  A: 

Uncle Bob Martin had this to say about that podcast.

JUnit is Gamma's and Beck's best idea that day of how to write a testing framework for Java. I don't see how this conflicts with BDD.

100% coverage? I don't know anyone who aims for that. I'd say 70% or higher where it makes sense is more sensible.

I'm not religious about TDD - I don't write the tests before the classes.

I prefer JUnit to drivers per class because I can ask Ant or IntelliJ to run ALL the tests at once. If I do something that causes a previous test to break, I find out right away.
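In JUnit 3 that's just an aggregate suite with one line per test class; here I'm using the question's TestAdder as the only known member:

import junit.framework.Test;
import junit.framework.TestSuite;

// One entry point that runs everything, whether launched from Ant's
// <junit> task or from the IDE.
public class AllTests {
    public static Test suite() {
        TestSuite suite = new TestSuite("All tests");
        suite.addTestSuite(TestAdder.class);
        // suite.addTestSuite(SomeOtherTest.class); // one line per test class
        return suite;
    }
}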

duffymo
If you're not writing the tests first, you're not doing TDD, you're using JUnit as a test framework. This isn't *bad* but it's not TDD.
Charlie Martin
So be it. Thanks, Charlie.
duffymo
A: 

I think the real issue that Joel raises (and it's been in the back of my mind for a while) is that many unit tests have to be written because our mainstream languages/compilers are not rich enough to enforce the developer's intentions.

Pre/post conditions (design by contract) could instead be checked by the compiler or by static code analysis.
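The closest plain-Java approximation today is runtime asserts (enabled with java -ea), which document the contract but give none of the compile-time checking I'm after; a sketch:

// Pre/post conditions as runtime asserts: they state the contract,
// but nothing verifies it at compile time, which is exactly the gap.
public class Accounts {
    public static long withdraw(long balanceInCents, long amountInCents) {
        assert amountInCents > 0 : "precondition: amount must be positive";
        assert amountInCents <= balanceInCents : "precondition: no overdraft";
        long result = balanceInCents - amountInCents;
        assert result >= 0 : "postcondition: balance stays non-negative";
        return result;
    }
}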

I'm anxious to see how the Code Contracts / PEX / etc. projects can help us get to a place with compile-time checking and high code coverage, without a significant amount of hand-managed test code.

Giorgio Galante
+2  A: 
Charlie Martin
A: 

Charlie's answers are so complete that I really only have a few small things to add.

You say that you find Assert.AreEqual-type tests unnatural, but I don't understand why. I use AreEqual in particular all the time, and it fits very neatly with a contractual style of programming. Don't take this the wrong way, but I wonder whether you might actually be suffering from a bit of a code-factoring problem. With an inappropriate code architecture, you may well find it quite difficult to use JUnit/NUnit-style testing.

A few other points:

  1. I don't really seek 100% coverage although, in theory, I'd usually like more coverage rather than less.

  2. The idea that you should not unit test external connections sounds bizarre to me. This is one of the primary things I use unit testing for! Yes, you have to set up a harness (web services) or a database with a known state (database), but this is the only way you'll know that your underlying data source is being transformed correctly by your business logic. How else would you test the specific methods or modules supporting that logic? (A sketch of what I mean follows this list.)
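In outline, a known-state database test looks like this; the schema and the OrderTotals class are hypothetical, and HSQLDB (an in-memory database, which must be on the classpath) stands in for whatever your real data source is:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import junit.framework.TestCase;

// Known-state database test: load a fixture, run the business logic,
// assert on the transformed result. OrderTotals is hypothetical.
public class OrderTotalsTest extends TestCase {
    private Connection conn;

    protected void setUp() throws Exception {
        conn = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "sa", "");
        Statement s = conn.createStatement();
        s.execute("CREATE TABLE orders (id INT, amount INT)");
        s.execute("INSERT INTO orders VALUES (1, 100)");
        s.execute("INSERT INTO orders VALUES (1, 250)");
    }

    protected void tearDown() throws Exception {
        conn.createStatement().execute("DROP TABLE orders");
        conn.close();
    }

    public void testSumsAllOrderLines() throws Exception {
        assertEquals(350, new OrderTotals(conn).totalFor(1));
    }
}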

Mark Brittingham
A: 

I don't quite get your definitions of "JUnit style unit testing" and BDD. You can do BDD with JUnit, even though JUnit's vocabulary and ways of organizing tests are somewhat limited compared to testing frameworks designed for BDD (I use JDave for Java).

Behaviour Driven Development (BDD) is about specifying the behaviour of the system by writing executable specifications in the form of tests. When there is something the code does not yet do but should, you write a test/spec for that thing, which in turn helps you design code that passes the specification. For example, to specify a ball, you might have specs (test methods) such as "the ball is round", "when nobody touches the ball then it stays in place", "when somebody hits the ball then it moves in the opposite direction", "when the ball hits the floor then it bounces", etc.
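Two of those ball specs, sketched in plain JUnit (JDave reads more naturally, but the idea is identical; the Ball class is invented for the example):

import junit.framework.TestCase;

// Behaviour-named tests: each method name is a sentence from the spec.
// Ball is a made-up class so the sketch is self-contained.
public class BallSpec extends TestCase {
    static class Ball {
        int position = 0;
        void hit(int force) { position -= force; } // moves opposite the hit
    }

    public void testWhenNobodyTouchesTheBallThenItStaysInPlace() {
        assertEquals(0, new Ball().position);
    }

    public void testWhenSomebodyHitsTheBallThenItMovesInTheOppositeDirection() {
        Ball ball = new Ball();
        ball.hit(5);
        assertEquals(-5, ball.position);
    }
}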

You can read my longer description about BDD and links to more information here: http://stackoverflow.com/questions/507000/writing-standards-for-unit-testing/510996#510996

Regarding the code coverage issue: when I use TDD/BDD, I typically end up with 90-95% code coverage without any extra effort. The remaining 5-10% of lines are mostly things such as catch blocks for checked exceptions which should never happen under normal circumstances, and private constructors of utility classes which should never be instantiated. I don't see any value in writing tests for those.

The point of TDD is to drive the design of the system, so if writing a test does not produce new code or verify a possible corner case, then writing that test does not produce value. The purpose is not to make sure that the system is 100% correct (formal verification is for that), although a lower bug count is a nice side benefit of using TDD.

(Also, you should not put too much weight on what Joel and Jeff say about TDD and unit testing in that podcast, since they obviously don't have personal experience with TDD. See Uncle Bob's response to it.)

Esko Luontola