This is probably a stupid question, but my googling isn't finding a satisfactory answer. I'm starting a small project in C# with just a business layer and a data access layer - strangely, the UI will come later, and I have very little (read: no) concept of, or control over, what it will look like.

I would like to try TDD for this project. I'm using Visual Studio 2008 (soon to be 2010), and I have ReSharper 5 and NUnit.

Again, I want to do Test-Driven Development, but not necessarily the entire XP system. My question is - when and where do I write the first unit test?

Do I only test logic before I write it, or do I test everything? It seems counter-productive to test things that have no reason to fail (auto-properties, empty constructors)...but it seems like the "No new code without a failing test" maxim requires this.

Links or references are fine (but preferably to online resources, not books - I would like to get started ASAP).

Thanks in advance for any guidance!

+2  A: 

It seems counter-productive to test things that have no reason to fail (auto-properties, empty constructors)...

It is. There's no logic in an empty constructor or auto-property to test.

when do I write the first unit test?

Before you write your first line of testable code. Think about the behavior you want your method to perform, and write your test based on that desired behavior. That test (and all the others that follow) embodies your program's specification.

where do I write the first unit test?

In the first Test Class you create.

This is probably the best online resource:

Introduction to Test Driven Development (TDD)
http://www.agiledata.org/essays/tdd.html
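
For illustration, a first test of this kind might look something like the following NUnit sketch; the InvoiceCalculator class and its members are hypothetical names standing in for whatever behavior you choose to specify first:

    using NUnit.Framework;

    [TestFixture]
    public class InvoiceCalculatorTests
    {
        // Written before InvoiceCalculator exists: the test states the
        // desired behavior, and the class is implemented to satisfy it.
        [Test]
        public void Total_SumsLineItemAmounts()
        {
            var calculator = new InvoiceCalculator();
            calculator.AddLineItem(10.00m);
            calculator.AddLineItem(5.50m);

            Assert.AreEqual(15.50m, calculator.Total());
        }
    }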

Robert Harvey
Okay. So it's not writing a test to check the code I've already planned to write...it's writing a test, sort of, AS the plan for the code I'm about to write?
Joel
Yes, essentially. After you write the test, you write the code to pass it; the test then remains as a *regression test* to prove that the method still meets the requirement in case you refactor. Also, *there can be more than one test* that specifies the overall requirements for the method.
Robert Harvey
+2  A: 

There is a shift in mindset that you have to dive into with TDD. If you're not doing TDD, you usually write some code, then write a unit test to make sure the code does what you expect and handles a few corner cases. With TDD, you write the test first, and it typically uses classes and methods that you haven't even written yet.

Once you write your test, and you're satisfied that it is a good example of how your code should be used, you start to write the actual classes and methods to make the test pass.

It's kind of hard at first, because you won't have IntelliSense to help you, and your code won't build until you actually implement the production code. But by writing the test first, you are forced to think about how your code will be used before you even write it.
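
To make that concrete, here is a minimal sketch of the cycle; the OrderValidator name and its rule are invented purely for illustration. The test comes first and won't compile until the class below it is written:

    using NUnit.Framework;

    [TestFixture]
    public class OrderValidatorTests
    {
        // Written first: OrderValidator doesn't exist yet, so the
        // project won't even build until the class is created.
        [Test]
        public void Validate_RejectsNegativeQuantity()
        {
            var validator = new OrderValidator();
            Assert.IsFalse(validator.Validate(-1));
        }
    }

    // Written second: just enough production code to pass the test.
    public class OrderValidator
    {
        public bool Validate(int quantity)
        {
            return quantity >= 0;
        }
    }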

Andy White
+1  A: 

It seems counter-productive to test things that have no reason to fail (auto-properties, empty constructors)...

It may seem counter-productive, but if you have code that depends upon the default state of your newly constructed objects, then it's worthwhile testing. Someone can come in and change the default values for fields, and your code breaks.
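
As a rough sketch of what such a test might look like (the Customer class and its defaults are hypothetical):

    using NUnit.Framework;

    [TestFixture]
    public class CustomerDefaultsTests
    {
        // Pins down the constructed defaults that other code relies on,
        // so a silently changed field initializer fails here first.
        [Test]
        public void NewCustomer_StartsActiveWithNoOrders()
        {
            var customer = new Customer();

            Assert.IsTrue(customer.IsActive);
            Assert.AreEqual(0, customer.Orders.Count);
        }
    }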

It can be helpful to remember that your tests are not just for finding things that fail, but for verifying that your code lives up to its advertised contract. You could argue that you don't need to worry - if the contract breaks, then the code depending on it will also break. This is true, but it creates tests that fail from "remote" problems. Ideally, a problem with a class should cause a failure in its own unit tests first, rather than in the unit tests of its clients.

Doing TDD without any kind of requirements or design to work from is hard. With traditional coding, where you sit down and bang out something that does what you want, the requirements evolve as the code evolves - the requirements almost come from the code, as you dig deeper and discover in more detail what you need. But with TDD, the tests are embodiments of the requirements, so you have to have them up front, crystal clear in your mind. If you're starting from an empty sheet, then you're doing requirements analysis, test design, and code design all at once.

mdma
+1  A: 

One way to bootstrap TDD is to write an integration test first -- that is, before any unit tests. This integration test is oriented toward proving that your application works as expected in an end-to-end sense.

Obviously, the application is not written yet, so your initial integration test would not check very many things. For example, suppose you were writing a data-crunching program that is supposed to analyze a flat file and produce a summary report. A simple end-to-end test would invoke the program and then confirm that a file was produced in the expected location. That test will fail because your app doesn't do anything yet.
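
A rough sketch of such an end-to-end test, with ReportApp and the file paths as hypothetical stand-ins for your own entry point:

    using System.IO;
    using NUnit.Framework;

    [TestFixture]
    public class ReportGenerationIntegrationTests
    {
        // End-to-end: run the whole app against a sample input file
        // and confirm a report appears where we expect it.
        [Test]
        public void Run_ProducesSummaryReportFile()
        {
            const string outputPath = @"output\summary.txt";
            if (File.Exists(outputPath))
                File.Delete(outputPath);

            ReportApp.Run(@"input\data.csv", outputPath);

            Assert.IsTrue(File.Exists(outputPath));
        }
    }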

Then you write a minimal version of your app to satisfy the simple integration test. For example, the app will run and write a hard-coded title of the summary report to the file. In its current form, your application is sometimes referred to as a walking skeleton -- the thinnest possible slice of somewhat realistic functionality.
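
Under the same hypothetical names, the walking-skeleton version could be as thin as this:

    using System.IO;

    // Walking skeleton: ignores the input entirely and writes only a
    // hard-coded report title - just enough to pass the test above.
    public static class ReportApp
    {
        public static void Run(string inputPath, string outputPath)
        {
            // Assumes outputPath includes a directory, per the test.
            Directory.CreateDirectory(Path.GetDirectoryName(outputPath));
            File.WriteAllText(outputPath, "Summary Report");
        }
    }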

From that point, you add meat to the skeleton -- of course, writing a test before each new bit of functionality. Once you start doing this, the "what to test first" question becomes a bit more tractable, and many of your new tests will be unit-oriented (rather than integration-oriented).

FM