views:

120

answers:

6

I am looking for a study, if one exists, comparing the time taken by regular coding vs coding + unit tests (not strict TDD just yet). I know the whole "saves you time in the long run" angle, but from a project planning perspective, for a team that has never done it before, I need to be able to roughly estimate how much additional time to allocate.

Does such a study exist? Can anyone comment from experience?

+2  A: 

I know that you're not interested in going for full TDD just yet, but I think the best performance and planning markers are going to be found in Test Driven Development case studies, such as those conducted by both Microsoft and IBM.

Case studies were conducted with three development teams at Microsoft and one at IBM that have adopted TDD. The results of the case studies indicate that the pre-release defect density of the four products decreased between 40% and 90% relative to similar projects that did not use the TDD practice. Subjectively, the teams experienced a 15–35% increase in initial development time after adopting TDD. Source.

That is part of a preface to a full comparison of normal development vs development using unit testing as a core principle. The upfront development planning addition that we use is 20%, which includes both unit and integration tests. In all honesty, testing is vital to any successful piece of software; unit testing just removes some of the hard yards and manual effort. Being able to run a few hundred unit and integration tests upon finishing a piece of functionality, and check the system within a few seconds, is invaluable.

There are a number of different 'magic' numbers that people add to their estimates to incorporate the additional time to write unit tests, but it really just comes down to a simple trade-off. Essentially you're balancing the increase in development time against the increase in bug fixing/error checking time, also taking into account the likelihood of downtime and the system's criticality (whether it is a primary revenue system or a non-essential one).
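One way to sanity-check that trade-off is to plug your own figures into a quick break-even calculation. This is only a sketch; every number in it (hours, defect count, cost per defect) is a made-up placeholder you would swap for your team's own history:

    // Back-of-the-envelope trade-off: extra time writing tests vs. time saved on bug fixing.
    // All figures below are hypothetical placeholders - substitute your own project history.
    using System;

    class TestingTradeOff
    {
        static void Main()
        {
            double devHours = 400;           // estimated coding hours without tests
            double testOverhead = 0.20;      // e.g. the 20% planning addition mentioned above
            double extraHours = devHours * testOverhead;

            double expectedDefects = 50;     // defects you'd normally ship without unit tests
            double defectReduction = 0.50;   // somewhere inside the 40-90% range from the study
            double hoursPerDefect = 4;       // average cost to reproduce, fix and re-test a bug

            double savedHours = expectedDefects * defectReduction * hoursPerDefect;

            Console.WriteLine($"Extra up-front effort: {extraHours} h");
            Console.WriteLine($"Expected bug-fixing time saved: {savedHours} h");
            Console.WriteLine(savedHours >= extraHours
                ? "Testing pays for itself on these numbers."
                : "On these numbers the overhead is not recovered before release.");
        }
    }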

If you're interested in doing a little more reading, here is the complete Microsoft study. It's pretty short but gives some interesting findings. And if you're really keen, here is a unit testing showcase, as it were, explaining the concepts and benefits in reasonable detail.

JonVD
*There are a number of different 'magic' numbers that people add to their estimates* - yep, that was exactly the point I was trying to make, but you've summed it up nicely.
slugster
"Subjectively". You should have emphasized that. The quality is better and the most anyone could claim is a "subjective" impression that TDD had a cost. Of course it appears to have a cost. But that "subjective" impression should be questioned.
S.Lott
You do run the risk of having it seem like the project takes "longer" with TDD. So, it may be helpful to approach this from the other direction. Can you get numbers from previous projects the team has done regarding how much time was spent testing / fixing errors after development was "done". Especially time spent dealing with issues found by customers. That's time that probably wasn't part of the original estimate but should have been. TDD should save _total_ time spent - that's a key point to emphasize.
AngerClown
A: 

I wouldn't rely on a study to tell you that; you can work it out yourself.

Normally a developer will break down major functionality into smaller tasks and then estimate the time each task will take. As part of this breakdown they can also evaluate which tasks (or pieces of functionality) are unit testable, and at that point log tasks and estimates for writing those particular tests. The estimates for the unit tests should carry the same margin of error as the code they are testing - the more complex the tested code, the more complex the test (and therefore the more likely the estimate for writing the test is to go over schedule).

Of course none of this is a perfect science; if it were, there would be no need for project managers :) If you are just looking for high-level estimates, then any figure you give for the unit testing portion is going to be as imprecise as the figure you give for the actual code. You won't have any certainty or accuracy until you break the work down into tasks and then apply the relevant scales to it (for example, you need to factor in a developer's true coding velocity, which might only be 60-70%).
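To make that breakdown concrete, the arithmetic might look something like the sketch below. The task names, hours and velocity are all invented for illustration:

    // Hypothetical task breakdown: each task carries its own unit-test estimate,
    // and the total is then scaled by the developer's real coding velocity.
    using System;
    using System.Linq;

    class EstimateSketch
    {
        static void Main()
        {
            // (task, coding hours, unit-test hours) - all figures invented for illustration
            var tasks = new[]
            {
                (Name: "Login form",       Code: 8.0,  Tests: 3.0),
                (Name: "Order workflow",   Code: 24.0, Tests: 10.0),
                (Name: "Reporting export", Code: 12.0, Tests: 4.0),
            };

            double rawTotal = tasks.Sum(t => t.Code + t.Tests);
            double velocity = 0.65;                  // the 60-70% true coding velocity mentioned above
            double scheduled = rawTotal / velocity;  // calendar effort once interruptions are factored in

            foreach (var t in tasks)
                Console.WriteLine($"{t.Name}: {t.Code}h code + {t.Tests}h tests");
            Console.WriteLine($"Raw estimate: {rawTotal}h, scheduled at {velocity:P0} velocity: {scheduled:F0}h");
        }
    }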

slugster
+1  A: 

I can't comment on studies for this topic.

From experience I'd say your magic number range is 30-40%, since your team is new to this. Your team will need to learn how to create mocks and fakes and get used to writing tests, in addition to setting up infrastructure - lots of up-front costs until everyone gets up to speed. If your main language is C++, it takes more effort to write mocks and fakes than it does in C# (in my experience). If your project is all brand-new code, it will take less effort than working with existing code. If you can get your team up to speed on testing quickly, then TDD will prove less effort than writing tests after the fact. With enough experience, the time spent on tests is probably around 20% - yet another magic number. Pardon my lack of exact numbers; I don't have precise metrics from my experience.
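To give a feel for what that learning curve covers on the C# side, a hand-rolled fake is usually the first test double a team writes before adopting a mocking framework. The `IClock`/`FakeClock` names here are invented purely for illustration:

    // A hand-rolled fake: the simplest kind of test double a team new to unit testing will write.
    // IClock and FakeClock are hypothetical names used only for illustration.
    using System;

    public interface IClock
    {
        DateTime Now { get; }
    }

    // Production code depends on the interface rather than DateTime.Now directly,
    // so a test can substitute a fixed, predictable time.
    public class FakeClock : IClock
    {
        public DateTime Now { get; set; } = new DateTime(2010, 1, 1);
    }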

Chris O
+2  A: 

In every state of affairs, on every team, this should hold:

TimeOf(Coding+Testing) < TimeOf(CodingWithoutTesting)

It shouldn't take additional time at all, or it becomes useless.

Chaotic_one
Unless you will use the code once, then throw it away while writing the "newer, better" code. This is unfortunately quite common...
bukzor
@bukzor: Even then, the cost of TDD is lower than the cost of no TDD. Software that you test actually works, and doesn't cause further operational problems. Software that you didn't test never works quite right and causes no end of problems. When you throw it away, that's a cost, and an indication that the first version wasn't very good.
S.Lott
@bukzor: If you're trying to describe programs that are used without being tested, then I have no clear concept of what you're talking about. You can't have a "project" to create code without testing. Testing is a first-class part of coding. If you're trying to describe a "project" that produces "untested" code, you're not really talking about software development, or costs or schedules or anything related to that. You're talking about randomly creating code. Any veneer of project management -- with no testing -- is some kind of elaborate lie.
S.Lott
@bukzor: I'm a little lost. I don't get the CS focus. My point is simply this. One-time throw-away code with no testing is not really the kind of thing this question is asking about. I have a hard time understanding how it can be even considered against software that will be tested. You mentioned that one-time code is somehow important. If we include that, why not include code which never went to production, code which was abandoned before completion and code which was never properly started and things like that?
S.Lott
+1  A: 

As someone who is currently working on his first project using unit tests (not full-blown TDD, but close), I would say that you should double the time it would usually take for your team to do their initial implementation.

Roy Osherove's book, The Art of Unit Testing, has an informal study that shows this, but also shows that when QA cycles are included, the overall release time was slightly lower when using unit tests, and there were far fewer defects in the code developed with unit tests.

I would make these suggestions:

  • Have your programmers read everything they can get on unit tests and TDD, especially anything they can find on how to design code that is test-friendly (use of dependency injection, interfaces, etc.). Osherove's book would be a great start.

  • Start evaluating unit testing frameworks (NUnit, xUnit, etc.) and mocking frameworks (Rhino Mocks, Moq, etc.), and have them choose ones to standardize on. The most popular are probably NUnit and Rhino Mocks, but I chose xUnit and Moq (there's a small sketch of what a test with these looks like after this list).

  • Don't let your programmers give up. It can be tough to change your mindset and overcome the natural resistance to change, but they need to work through it. Initially it may feel like unit tests just get in the way, but the first time they refactor a section of code on the fly and use unit tests to know they didn't break anything, as opposed to hoping they didn't, will be a revelation.

  • Finally, if possible, don't introduce unit testing on a large, high-pressure project with a tight deadline; that would likely lead to them never overcoming their initial difficulties and ditching unit tests in favor of just getting the code done.
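As a rough sketch of the first two points above, this is roughly what a small xUnit test with a Moq-faked dependency looks like. The `IEmailSender` and `Notifier` types are made up for the example and are not from any of the books or frameworks mentioned:

    // Sketch of a test-friendly design: the dependency is injected through an interface,
    // so the test can substitute a Moq fake. IEmailSender and Notifier are hypothetical types.
    using Moq;
    using Xunit;

    public interface IEmailSender
    {
        void Send(string to, string body);
    }

    public class Notifier
    {
        private readonly IEmailSender _sender;
        public Notifier(IEmailSender sender) => _sender = sender;   // constructor injection

        public void NotifyOverdue(string customer) =>
            _sender.Send(customer, "Your account is overdue.");
    }

    public class NotifierTests
    {
        [Fact]
        public void NotifyOverdue_SendsExactlyOneEmail()
        {
            var sender = new Mock<IEmailSender>();
            var notifier = new Notifier(sender.Object);

            notifier.NotifyOverdue("customer@example.com");

            // Verify the fake was called once with the expected recipient.
            sender.Verify(s => s.Send("customer@example.com", It.IsAny<string>()), Times.Once());
        }
    }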

Hope this helps!

adrift
A: 

I need to be able to roughly estimate how much additional time to allocate.

That's silly. There's no additional time.

Here's what you do. Take your existing testing budget: the 40% of development effort at the end of the project.

Spread most of this testing effort through the life of the project as unit testing. Call it 30% of the total effort, allocated everywhere.

Leave some of the testing effort at the end for "integration" and "performance" testing. Call it 10% of the total effort, allocated just at the end and just for integration and performance testing. Or User Acceptance Testing or whatever is left over that you didn't unit test.

There's no "additional". You have to do the testing anyway. You can either do it as first-class software, during the project, or you can scramble around at the end of the project doing a bad job.
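To put a made-up number on that reallocation, here is the same split on a hypothetical 1,000-hour project; the total is invented purely to show the arithmetic:

    // The same 40% testing budget, just reallocated - hypothetical 1000-hour project for illustration.
    using System;

    class BudgetSplit
    {
        static void Main()
        {
            double totalEffort = 1000;                        // hours, invented for the example
            double endOfProjectTesting = 0.40 * totalEffort;  // the traditional back-loaded test phase

            double unitTesting  = 0.30 * totalEffort;         // spread through the whole project
            double finalTesting = 0.10 * totalEffort;         // integration, performance, UAT at the end

            Console.WriteLine($"Old plan: {endOfProjectTesting}h of testing at the end");
            Console.WriteLine($"New plan: {unitTesting}h unit testing throughout + {finalTesting}h at the end");
            Console.WriteLine($"Additional time: {(unitTesting + finalTesting) - endOfProjectTesting}h");
        }
    }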


The "cost" of TDD is -- at best -- a subjective impression. Read the excellent study summary carefully. "Subjectively, the teams experienced a 15–35% increase in initial development time after adopting TDD". [Emphasis added.]

Actually there is zero cost to TDD. TDD simply saves time.

It's much less expensive to do TDD than it is to create software other ways.

Why?

  1. You have to test anyway. Since you have to test, you may as well plan for testing by driving all of your development around the testing.

  2. When you have a broken test case, you have a focused set of tasks. When you are interrupted by phone calls, meetings, product demos, operational calls to support previous releases, it's easy to get distracted. When you have tests that fail, you get focus back immediately.

  3. You have to test anyway. It's either Code then Test or it's Test then Code. You can't escape the cost of testing. Since you can't escape, do it first.

  4. The idea that TDD has "additional" or "incremental" cost is crazy. Even a well-documented study like http://www.springerlink.com/content/q91566748q234325/ can't -- actually -- compare the same project done two ways. The idea of "additional development time" cannot actually be measured. That's why it's a "subjective impression". And they're simply wrong about it.

When you ask programmers -- who are new to TDD -- if TDD slowed them down, they lie about it.

Yes. They Lie.

One. You made them change. Change is bad. Everyone knows that. They can't say it was easier, because it was "different".

Two. It calls management into question. We can't say bad things about our managers, that's bad. Everyone knows that. They can't say it was easier because previous things our managers demanded are now obviously wrong.

Three. Most programmers (and managers) think that there's a distinction between "real" code and "test" code. TDD takes longer to get to the "real" code because you spend your up-front time doing "test" code.

This "real" code vs. "test" code is a false distinction. Anyone who says this doesn't get how important testing is. Since testing is central to demonstrating that an application works, test code should be first-class. Making this distinction is wrong. Test code is real code.

The time spent writing test code is -- effectively -- time taken away from design of the real code. Rather than create "paper" designs, you are creating a working, living design in the form of test cases.

TDD saves time.

Folks who say otherwise are resisting change and/or trying to protect management from appearing to be wrong and/or making a false distinction between real code and test code. All of which is simply wrong.

S.Lott