I've been trying to jump on the TDD bandwagon for some time now, and it's been going well except for one crucial thing: what I normally end up doing is Test After Development.
I need a mental shift, and am wondering how you forced yourself to write tests first?
The mental shift for me was realizing that TDD is about design, not testing. TDD allows you to reason critically about the API of the thing you're constructing. Write the tests first, and it's often very clear what API is most convenient and appropriate. Then write your implementation.
Of course you should be writing tests after too (regression tests, integration tests, etc.). TDD often produces good design, but not necessarily good testing code.
It helps if you have a generic test framework.
Have a library of generic functions applicable to various sorts of tests you run. Then re-use those as building blocks to build tests for the project you're on.
To get there, note the common things you do in the tests you write after. Abstract them away into generalized library one by one.
Doing so will enable you to write many simple tests very quickly and easily, since you are no longer re-doing the boring and time-consuming test driver code and can concentrate on the actual test cases.
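As a sketch of that idea in Python with the standard `unittest`-style asserts (the helper and the function under test are both invented for illustration, not from any particular framework): factor the repetitive driver loop into a reusable building block, and each new test shrinks to just the interesting data.

```python
# Hypothetical reusable building block: many tests boil down to
# "feed these inputs to a function, expect these outputs".
def check_cases(func, cases):
    """Assert func(*args) == expected for every (args, expected) pair."""
    for args, expected in cases:
        actual = func(*args)
        assert actual == expected, (
            f"{func.__name__}{args} -> {actual!r}, expected {expected!r}"
        )

def slugify(title):
    """Example function under test: lower-case and hyphenate a title."""
    return "-".join(title.lower().split())

def test_slugify():
    # The boring driver loop lives in the library; the test is just data.
    check_cases(slugify, [
        (("Hello World",), "hello-world"),
        (("TDD Is Fun",), "tdd-is-fun"),
        (("x",), "x"),
    ])
```

Once a handful of helpers like `check_cases` exist, writing the next test is a one-liner, which lowers the barrier enough that writing it first stops feeling like a chore.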
Do "test as documentation" approach. Don't add/change any wording in documentation not backed up by appropriate tests.
This saves time - you don't have to re-parse documentation/requirements another time just to build the tests later - as well as helps with the mental shift you asked about.
Do a gradual phase-in: add tests for new features/changes as you start working on them.
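One concrete way to get "test as documentation" in Python is the standard library's `doctest` module, which makes the documentation itself executable: every example in the docstring is checked, so the docs cannot drift from the behaviour (the `parse_version` function here is my own illustration).

```python
import doctest

def parse_version(s):
    """Split a dotted version string into a tuple of ints.

    Every example below is run by doctest, so this documentation
    is backed by tests by construction:

    >>> parse_version("1.2.3")
    (1, 2, 3)
    >>> parse_version("10.0")
    (10, 0)
    """
    return tuple(int(part) for part in s.split("."))

# doctest.testmod() executes every docstring example in the module
# and reports any whose output does not match.
```

Run it with `python -m doctest yourmodule.py -v` or call `doctest.testmod()` from the module itself.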
Nobody likes to change their ways cold turkey - human nature. Let the good habit slip in and eventually it becomes second nature.
Right away, budget the time for writing tests at the beginning of your development schedule on a project.
This will both force you into proper habits (assuming you tend to follow your project plan) and protect you from running over due dates due to "extra" time spent building tests.
Of course, the "extra" time for TDD ends up being a net time saver, but that is not always apparent at the very beginning of a project, which puts negative pressure on the TDD practice ("Where are the prototype screenshots??? What do you mean you're still writing tests???").
Also, try to follow the usual recommended practices of small one-purpose classes and functions. This - among all the other benefits - allows much easier unit test writing. Couple that with #2 (by writing unit tests as part of the API documentation, when designing the API), and your API designs "magically" improve, since you start noticing weak points immediately by writing the tests based on them. As other people noted, using some sort of Dependency Injection pattern/framework helps simplify building the tests.
Once I started leveraging dependency injection, my classes became smaller and more specialized, which allowed me to write simple unit tests to confirm they worked. Given the limited number of tests I knew my class had to pass to work, the goal of my TDD effort became clearer. It was also easier to identify which classes required integration tests due to dependencies on external resources, and which classes required unit tests that injected mock/stub/fake objects into the SUT to narrow the focus of the test.
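A minimal sketch of that pattern in Python (the `ReportService` and its clock/mailer dependencies are invented for illustration): inject the external resources through the constructor, and the unit test can hand in fakes instead of a real clock or SMTP server.

```python
# Hypothetical service with its dependencies injected rather than hard-coded.
class ReportService:
    def __init__(self, clock, mailer):
        self.clock = clock    # anything with a today() method
        self.mailer = mailer  # anything with a send(to, body) method

    def send_daily_report(self, recipient):
        body = f"Report for {self.clock.today()}"
        self.mailer.send(recipient, body)
        return body

# Fakes stand in for the real clock and mailer, so the unit test is
# fast, deterministic, and needs no network access.
class FakeClock:
    def today(self):
        return "2024-01-01"

class FakeMailer:
    def __init__(self):
        self.sent = []

    def send(self, to, body):
        self.sent.append((to, body))

def test_send_daily_report():
    mailer = FakeMailer()
    service = ReportService(FakeClock(), mailer)
    body = service.send_daily_report("boss@example.com")
    assert body == "Report for 2024-01-01"
    assert mailer.sent == [("boss@example.com", body)]
```

The class that wraps the *real* clock and mailer is then covered by a small number of integration tests, exactly the split described above.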
What helped me instill habitual discipline was, before making any change, asking myself, "What test do I write to demonstrate that the change worked?". While not strictly TDD (since the focus is quite small), it brought testing to the forefront of my thinking whenever the system was changed.
Start small, with a low barrier to entry, practice daily, and the habit becomes second nature. After a while your scope for thinking about tests naturally widens to include design, as well as integration and system testing.
I found that "start small" worked well on legacy projects that had little unit testing in place, and where the inertia to bring it up to scratch was so large that no-one bothered. Small changes, bugfixes, etc. could often be easily unit tested even when the test landscape for the whole project was pretty barren.
For me it was all about realizing the benefits. Fixing bugs after the fact is so much harder than never writing them in the first place.
The easiest way to get started, IMO, is to start with it on a new component. TDD, and effective unit testing in general, requires that you architect your code in a way that allows testing without dependencies (meaning you need interfaces so you can substitute mock object implementations, etc.). In any complex piece of software this has real implications for the structure of your code.
A big moment came for me with TDD when I read a quote somewhere (I can't recall where) that the moment of triumph for a test is the moment it changes from red to green.
This is probably all stuff you've read before, but when I start with a failing test and it becomes a passing test, that's when I reap the huge psychological benefits. It feels good to change from red to green. And if you are consistent in delivering that moment to yourself, it becomes addictive, and then much easier to make yourself do.
But the trick for me was to isolate that moment and revel in it.
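The red-to-green cycle described above can be sketched in Python with pytest-style asserts (the fizzbuzz example is mine, chosen only because it is small): write the assertion first, run it and watch it fail, then write just enough code to turn it green.

```python
# Step 1 (red): this test is written before fizzbuzz exists, so the first
# run fails with a NameError. That failure is the starting point, not a
# problem -- it proves the test can actually fail.
def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(7) == "7"

# Step 2 (green): the simplest implementation that makes the test pass.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)
```

Running the test once before and once after writing `fizzbuzz` is the whole trick: the second run is the red-to-green moment.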
Pair Programming
I realize that this may not be an option for everyone, and that many devs don't like this idea. But I've found that if I pair program with someone who's also committed to TDD, we tend to "keep each other honest" and stay with TDD much more than I could programming alone by sheer will.
As a solo developer, one thing that helped me make the shift to TDD was setting a code coverage threshold for myself.
In my build script I used a code coverage tool (NCover) to determine the percentage of code covered by tests, and initially set the threshold at 80%. If I stopped writing my tests first, the coverage percentage would fall below the 80% threshold and my build script would fail the build. I would then immediately slap myself on the wrist and write the missing tests.
I gradually increased the code coverage threshold and eventually became a full TDD convert.
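The same gate can be sketched with Python tooling (coverage.py standing in for NCover; the 80% threshold mirrors the answer, the rest is an assumption about your build script). This is a build-script fragment, not a complete runnable example:

```shell
# Run the test suite under the coverage tool...
coverage run -m pytest

# ...then fail the build if line coverage is below the threshold.
# "coverage report --fail-under=80" exits non-zero when coverage < 80%,
# which breaks the build just like the NCover check described above.
coverage report --fail-under=80
```

Ratcheting the `--fail-under` value upward over time gives exactly the gradual tightening described in this answer.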
TDD drives the development, as the name says. It is best learned from someone who is already extreme/disciplined about it. If applying TDD immediately on your work project would hurt your velocity, what is stopping you from growing your TDD muscles outside of work, on a side project?
Here's a repost of how I became a BDD / TDD convert:
A year ago, I had little idea how to do TDD (but really wanted to (how frustrating)) and had never heard of BDD... now I do both compulsively. I have been in a .Net development environment, not Java, but I even replaced the "F5 - Run" button with a macro to either run Cucumber (BDD) or MBUnit (TDD) depending if it is a Feature/Scenario or Specification. No debugger if at all possible. $1 in the jar if you use the debugger (JOKING (sort of)).
The process is very awesome. We additionally use a framework called MavenThought, written by "The Oracle", someone I've been blessed to come across and absorb information from.
Everything starts with BDD. Our BDD is straight up cucumber ontop of iron ruby.
Feature:
Scenario: ....
Given I do blah...
When I do something else...
Then wonderful things happen...
Scenario: ...
And that's not unit testing itself, but it drives the feature, scenario by scenario, and in turn the unit (test) specifications. So you start on a scenario, and each step you need to complete in the scenario drives your TDD.
And the TDD we have been using is kind of BDD in a way, because we look at the behaviours the SUT (System Under Test) requires, and one behaviour is specified per specification (a class "test" file).
Example:
Here is the Specification for one behaviour: When the System Under Test is created.
There is one more specification (a C# When_blah_happens class file) for another behaviour, when a property changes, but that is separated out into another file.
using MavenThought.Commons.Testing;
using SharpTestsEx;

namespace Price.Displacement.Module.Designer.Tests.Model.Observers
{
    /// <summary>
    /// Specification when diffuser observer is created
    /// </summary>
    [ConstructorSpecification]
    public class When_diffuser_observer_is_created
        : DiffuserObserverSpecification
    {
        /// <summary>
        /// Checks the diffuser injection
        /// </summary>
        [It]
        public void Should_return_the_injected_diffuser()
        {
            Sut.Diffuser.Should().Be.SameInstanceAs(this.ConcreteDiffuser);
        }
    }
}
This is probably the simplest behaviour for a SUT, because in this case, when it is created, the Diffuser property should be the same as the injected diffuser. I had to use a concrete Diffuser instead of a mock, because in this case the Diffuser is a Core/Domain object and has no property notification on the interface. 95% of the time we refer to all our dependencies like Dep(), instead of injecting the real thing.
Often we have more than one [It] Should_do_xyz(), and sometimes a fair bit of setup, perhaps up to 10 lines of stubbing. This is just a very simple example, with no GivenThat() or AndGivenThatAfterCreated() in the specification.
For setup of each specification we generally only ever need to override a couple of methods of the specification:
GivenThat() => this happens before the SUT is created.
CreateSut() => we auto-mock creation of the SUT with StructureMap and 90% of the time never need to override this, but if you are constructor-injecting a concrete, you have to override this.
AndGivenThatAfterCreated() => this happens after the SUT is created.
WhenIRun() => unless it is a [ConstructorSpecification], we use this to run ONE line of code: the behaviour we are specifying for the SUT.
Also, if there is common behaviour across two or more specifications of the same SUT, we move it into the base specification.
All I've got to do to run the specification is highlight its name, e.g. "When_diffuser_observer_is_created", and press F5, because remember, for me F5 runs a Rake task: test:feature[tag] for Cucumber, or test:class[SUT]. Makes sense to me, because every time you run the debugger it's a throwaway, no code is created (oh, and it costs $1 (joking)).
This is a very, very clean way of specifying behaviour with TDD, and it keeps the SUTs and the specifications really, really simple. If you try to be a cowboy coder and write the SUT crappily, with hard dependencies, etc., you will feel the pain of trying to do TDD and either get fed up / give up, OR bite the bullet and do it right.
And here's the actual SUT. We got a little fancy and use PostSharp to add property-changed notification on the Diffuser, hence the Post.Cast<>. And again, that's why I injected a concrete rather than a mock. Anyway, as you can see, the other behaviour, defined in another specification, is when anything changes on the Diffuser.
using System.ComponentModel;
using MavenThought.Commons.Events;
using PostSharp;
using Price.Displacement.Core.Products;
using Price.Displacement.Domain;

namespace Price.Displacement.Desktop.Module.Designer.Model.Observers
{
    /// <summary>
    /// Implementation of current observer for the selected product
    /// </summary>
    public class DiffuserObserver : AbstractNotifyPropertyChanged, IDiffuserObserver
    {
        /// <summary>
        /// Gets the diffuser
        /// </summary>
        public IDiffuser Diffuser { get; private set; }

        /// <summary>
        /// Initialize with a diffuser
        /// </summary>
        /// <param name="diffuser">The diffuser to observe</param>
        public void Initialize(IDiffuser diffuser)
        {
            this.Diffuser = diffuser;
            this.NotifyInterface().PropertyChanged += (x, e) => this.OnPropertyChanged(e.PropertyName);
        }

        /// <summary>
        /// Gets the notify interface to use
        /// </summary>
        /// <returns>The instance of notify property changed interface</returns>
        protected INotifyPropertyChanged NotifyInterface()
        {
            return Post.Cast<Diffuser, INotifyPropertyChanged>((Diffuser)Diffuser);
        }
    }
}
In conclusion, this BDD / TDD style of development rocks. It took a year, but I am a total convert, as a way of life. I would not have learned this on my own. I picked up everything from The Oracle: http://orthocoders.com/.
Red or Blue pill, the choice is yours.