views: 692
answers: 8

I've read a number of books and websites on the subject of TDD, and they all make a lot of sense, especially Kent Beck's book. However, when I try to do TDD myself, I find myself staring at the keyboard, wondering how to begin. Is there a process you use? What is your thought process? How do you identify your first tests?

The majority of the books on the subject do a great job of describing what TDD is, but not how to practice it in real-world, non-trivial applications. How do you do TDD?

+3  A: 

I start by thinking about the requirements.

foreach UseCase

  1. analyze the use case
  2. think of the future classes
  3. write down test cases
  4. write the tests
  5. test and implement the classes (sometimes adding new tests if I missed something at step 4)

That's it. It's pretty simple, but I think it's time-consuming. I like it, though, and I stick to it. :)

If I have more time I also sketch some sequence diagrams in Enterprise Architect.
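For illustration, here is one minimal way steps 3 to 5 might look, using plain asserts; the Account class and its API are made up for the example:

// Sketch of steps 3-5 for a hypothetical "withdraw cash" use case.
// The class name and interface are illustrative, not from the answer above.
#include <cassert>
#include <stdexcept>

class Account {               // class imagined in step 2
public:
    explicit Account(int balance) : balance_(balance) {}
    void withdraw(int amount) {
        if (amount > balance_) throw std::domain_error("insufficient funds");
        balance_ -= amount;
    }
    int balance() const { return balance_; }
private:
    int balance_;
};

int main() {
    // Steps 3-4: the test cases are written down first, then coded as tests.
    Account a(100);
    a.withdraw(30);
    assert(a.balance() == 70);            // happy path

    bool threw = false;
    try { Account(10).withdraw(50); } catch (const std::domain_error&) { threw = true; }
    assert(threw);                        // overdraft is rejected

    return 0;
}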

rafek
A: 

I don't think you should really begin with TDD. Seriously, where are your specs? Have you agreed on a general, rough overall design for your system yet, one that is appropriate for your application? I know TDD and agile discourage Big Design Up Front, but that doesn't mean you shouldn't do some design up front before TDDing your way through implementing it.

Jon Limjap
I didn't say "begin with TDD"; I asked how you do TDD. I have specs. I have conceptual designs. I have use cases. But when the coding comes around, where do you start? Sure, I could write a bunch of random tests for random objects, but that doesn't really bring any cohesion.
Its me
+3  A: 

I used to have the same problem. I used to begin most development by opening a window designer to create the UI for the first feature I wanted to implement. As the UI is one of the hardest things to test, this way of working doesn't translate very well to TDD.

I found the Atomic Object papers on Presenter First very helpful. I still start by envisioning the user actions I want to implement (if you've got use cases, that's a great way to start), and using an MVP- or MVC-ish model I begin by writing a test for the presenter of the first screen. By mocking the view until the presenter works, I can get started really fast this way. There's more information on working this way at http://www.atomicobject.com/pages/Presenter+First.
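As a rough sketch of the idea (the view interface and presenter below are made up for the example, not Atomic Object's actual pattern code), the presenter can be driven entirely through a hand-rolled mock view, so no real UI is involved:

#include <cassert>
#include <string>

struct IView {                          // abstract view the presenter talks to
    virtual void showGreeting(const std::string& text) = 0;
    virtual ~IView() = default;
};

class GreetingPresenter {               // presenter under test
public:
    explicit GreetingPresenter(IView& view) : view_(view) {}
    void onUserEntered(const std::string& name) { view_.showGreeting("Hello, " + name); }
private:
    IView& view_;
};

struct MockView : IView {               // records what the presenter asked for
    std::string lastGreeting;
    void showGreeting(const std::string& text) override { lastGreeting = text; }
};

int main() {
    MockView view;
    GreetingPresenter presenter(view);
    presenter.onUserEntered("Ada");
    assert(view.lastGreeting == "Hello, Ada");
    return 0;
}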

If you're starting a project in a language or framework that's unknown to you, or that has many unknowns, you can start out by doing a spike first. I often write unit tests for my spikes too, but only to run the code I'm spiking. Doing the spike can give you some input on how to start your real project. Don't forget to throw away your spike when you start on the real project.

Mendelt
Could you please explain what "doing a spike" means? I haven't come across the terminology before. Presumably you mean a prototype?
Ben Aston
"Spike" comes from XP (Extreme Programming). A spike is smaller than a prototype (my spikes usually take half an hour to an hour): it means writing throwaway code to try out a small piece of technology. Because you're not writing production code, you can do spikes without testing and refactoring.
Mendelt
+4  A: 

It's easier than you think, actually. You just use TDD on each individual class. Every public method in the class should be tested for all possible outcomes. So the "proof of concept" TDD examples you see scale up to a relatively large application with many hundreds of classes.

Another TDD strategy you could use is simulating runs of the application itself by encapsulating the main app's behavior. For example, I have written a framework (in C++, but this should apply to any OO language) which represents an application. There are abstract classes for initialization, the main run loop, and shutting down. So my main() method looks something like this:

int main(int argc, char *argv[]) {
  int result = 0;

  myApp &mw = getApp(); // Singleton method to return the main app instance
  if(mw.initialize(argc, argv) == kErrorNone) {
    result = mw.run();  // Main run loop; its return value becomes the exit code
  }

  mw.shutdown();
  return result;
}

The advantage of doing this is twofold. First, all of the main application functionality can be compiled into a static library, which is then linked against both the test suite and this main.cpp stub file. Second, it means that I can simulate entire "runs" of the main application by building an argv[] array (and matching argc) by hand and then doing exactly what main() would do. We use this process to test a lot of real-world functionality, making sure the application produces exactly what it's supposed to given a certain real-world corpus of input data and command-line arguments.
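A sketch of such a simulated run is below. The tiny myApp stand-in exists only to keep the example self-contained and compilable; in the setup described above, the real class would come from the application's static library, and the flag handling here is invented for illustration:

#include <cassert>
#include <cstring>

enum { kErrorNone = 0 };

class myApp {                                   // stand-in for the real app class
public:
    int initialize(int argc, char* argv[]) {
        verbose_ = (argc > 1 && std::strcmp(argv[1], "--verbose") == 0);
        return kErrorNone;
    }
    int run() { return verbose_ ? 1 : 0; }      // pretend result of the run
    void shutdown() {}
private:
    bool verbose_ = false;
};

static myApp& getApp() { static myApp instance; return instance; }

// The test drives the same lifecycle as main(), but with a hand-built command line.
int main() {
    char prog[] = "myapp";
    char flag[] = "--verbose";
    char* argv[] = { prog, flag, nullptr };
    int argc = 2;

    myApp& mw = getApp();
    int result = 0;
    if (mw.initialize(argc, argv) == kErrorNone) {
        result = mw.run();
    }
    mw.shutdown();

    assert(result == 1);                        // the run saw the --verbose flag
    return 0;
}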

Now, you're probably wondering how this would change for an application that has a real GUI, a web-based interface, or whatever. To that, I would simply say to use mock-ups to test those aspects of the program.

But in short, my advice boils down to this: break your test cases down to the smallest level, then start looking upwards. Eventually the test suite will pull them all together, and you'll end up with a reasonable level of automated test coverage.

Nik Reiman
+1  A: 

I agree that it is especially hard to bootstrap the process.

I usually try to think of the first set of tests like a movie script, and maybe only the first scene of the movie.

Actor1 tells Actor2 that the world is in trouble, Actor2 hands back a package, Actor1 unpacks the package, etc.

That is obviously a strange example, but I often find visualizing the interactions a nice way to get over that initial hump. There are other analogous techniques (user stories, CRC cards, etc.) that work well for larger groups, but it sounds like you are by yourself and may not need the extra overhead.

Also, I am sure the last thing you want to do is read another book, but the guys at MockObjects.com have a book in the early draft stages, currently titled Growing Object-Oriented Software, Guided by Tests. The chapters that are currently up for review may give you some further insight into how to start TDD and continue it throughout.

jkl
Thanks for the recommendation. Bootstrapping is a good term.
Its me
+1  A: 

The problem is that you are looking at your keyboard wondering what tests you need to write.

Instead, think of the code that you want to write, then find the first small part of that code, then try to think of the test that would force you to write that small bit of code.

In the beginning it helps to work in very small pieces. Even over the course of a single day you'll find yourself working in larger chunks. But any time you get stuck, just think of the smallest piece of code that you want to write next, then write the test for it.
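For example, a minimal sketch of that loop with a made-up chomp() helper: the test is imagined first, then the smallest code that makes it pass is written:

#include <cassert>
#include <string>

// The smallest piece of code I wanted next: strip one trailing newline.
std::string chomp(const std::string& s) {
    if (!s.empty() && s.back() == '\n') return s.substr(0, s.size() - 1);
    return s;
}

int main() {
    // This test was written first; chomp() was then written to make it pass.
    assert(chomp("hello\n") == "hello");
    // The next small test, forcing the no-newline branch.
    assert(chomp("hello") == "hello");
    return 0;
}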

Jeffrey Fredrick
A: 

Sometimes you don't know how to do TDD because your code isn't "test friendly" (easily testable).

With a few good practices, your classes can become easier to test in isolation, which is what true unit testing requires.

I recently came across a blog by a Google employee, which describes how you can design your classes and methods so that they are easier to test.

Here is one of his recent talks, which I recommend.

He insists that you have to separate business logic from object creation code (i.e., avoid mixing logic with the 'new' operator) by using the Dependency Injection pattern. He also explains why the Law of Demeter is important for testable code. He's mainly focused on Java code (and Guice), but his principles should apply to any language, really.
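A minimal sketch of that dependency-injection point, with names made up for the example: the business logic never constructs its collaborator itself, so a test can hand it a fake instead.

#include <cassert>

struct Clock {                         // dependency expressed as an interface
    virtual int hour() const = 0;
    virtual ~Clock() = default;
};

class Greeter {                        // business logic: no 'new', no concrete clock
public:
    explicit Greeter(const Clock& clock) : clock_(clock) {}
    bool isMorning() const { return clock_.hour() < 12; }
private:
    const Clock& clock_;
};

struct FakeClock : Clock {             // test double injected by the test
    int fixedHour;
    explicit FakeClock(int h) : fixedHour(h) {}
    int hour() const override { return fixedHour; }
};

int main() {
    assert(Greeter(FakeClock(9)).isMorning());
    assert(!Greeter(FakeClock(15)).isMorning());
    return 0;
}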

Franck
Miško Hevery's talks are great; good recommendation.
Nazgob
A: 

The easiest way is to start with a class that has no dependencies: a class that is used by other classes but does not use any other class. Then pick a test, asking yourself, "How would I know if this class (this method) is implemented correctly?"

Then you could write a first test that interrogates your object when it's not initialized; it could return NULL or throw an exception. Then you can initialize your object (perhaps only partially) and test that it returns something meaningful. Then you can add a test with another initialization value: it should behave the same way. At that point, I usually test an error condition, such as trying to initialize the object with an invalid value.
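For illustration, one possible shape for those first few tests, using a made-up Temperature class with no dependencies:

#include <cassert>
#include <stdexcept>

class Temperature {
public:
    void set(double celsius) {
        if (celsius < -273.15) throw std::invalid_argument("below absolute zero");
        value_ = celsius;
        initialized_ = true;
    }
    double celsius() const {
        if (!initialized_) throw std::logic_error("not initialized");
        return value_;
    }
private:
    bool initialized_ = false;
    double value_ = 0.0;
};

int main() {
    Temperature t;

    // 1. Not initialized yet: asking for the value throws.
    bool threw = false;
    try { t.celsius(); } catch (const std::logic_error&) { threw = true; }
    assert(threw);

    // 2. Initialized: the value comes back.
    t.set(20.0);
    assert(t.celsius() == 20.0);

    // 3. Another initialization value behaves the same way.
    t.set(-5.0);
    assert(t.celsius() == -5.0);

    // 4. Error condition: an invalid value is rejected.
    threw = false;
    try { t.set(-300.0); } catch (const std::invalid_argument&) { threw = true; }
    assert(threw);

    return 0;
}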

When you're done with the method, move on to another method of the same class, until you're done with the whole class.

Then you could pick another class: either another independent class, or a class that uses the first class you've implemented.

If you go with a class that relies on your first class, I think it is acceptable to have your test environment, or your second class, instantiate the first class, as it has been fully tested. When a test of the second class fails, you should be able to determine in which class the problem lies.

Should you discover a problem in the first class, or wonder whether it will behave correctly under some particular conditions, then write a new test for it.

If, climbing up the dependencies, you find that the tests you're writing span too many classes to qualify as unit tests, you can use a mock object to isolate a class from the rest of the system.


If you already have your design, as you indicated in a comment on Jon Limjap's answer, then you're not doing pure TDD, since TDD is about using unit tests to let your design emerge.

That being said, not all shops allow strict TDD, and you have a design at hand, so use it and do TDD (though it would be more accurate to call it test-first programming, but that's not the point). That's also how I started with TDD.

philippe