Hi,

I've been working on an ASP.NET MVC project for about 8 months now. For the most part I've been using TDD, though some aspects were only covered by unit tests after I had written the actual code. In total, the project has pretty good test coverage.

I'm quite pleased with the results so far. Refactoring really is much easier, and my tests have helped me uncover quite a few bugs even before I ran my software for the first time. I have also developed more sophisticated fakes and helpers to help me minimize the testing code.

However, what I don't really like is that I frequently find myself having to update existing unit tests to account for refactorings I made to the software. Refactoring the software is now quick and painless, but refactoring my unit tests is boring and tedious. In fact, the cost of maintaining my unit tests is higher than the cost of writing them in the first place.

I am wondering whether I might be doing something wrong, or whether this ratio of test development cost to test maintenance cost is normal. I've already tried to write as many tests as possible that cover my user stories instead of systematically covering my objects' interfaces, as suggested in this blog article.

Also, do you have any further tips on how to write TDD tests so that refactoring breaks as few tests as possible?

Edit: As Henning and tvanfosson correctly remarked, it's usually the setup part that is most expensive to write and maintain. Broken tests are (in my experience) usually the result of a refactoring of the domain model that is not compatible with the setup part of those tests.

+1  A: 

What I think he means is that it's the setup part that is quite tedious to maintain. We're having the exact same problem, especially when we introduce new dependencies, split dependencies, or otherwise change how the code is supposed to be used.

For the most part, when I write and maintain unit tests, I spend my time writing the setup/arrange code. Many of our tests have the exact same setup code, and we've sometimes used private helper methods to do the actual setup, but with different values.
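Roughly, such a helper might look like this (a minimal sketch; the Order/Customer/DiscountCalculator names are invented and NUnit is assumed):

// hypothetical domain types, for illustration only
private static Order CreateOrder(string customerName, decimal total)
{
    // the tedious arrange code lives in one place...
    var customer = new Customer { Name = customerName };
    return new Order { Customer = customer, Total = total };
}

[Test]
public void Discount_is_applied_to_large_orders()
{
    // ...but every test still has to supply its own values
    var order = CreateOrder("Alice", 1000m);

    var discount = new DiscountCalculator().CalculateFor(order);

    Assert.That(discount, Is.GreaterThan(0m));
}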

However, that isn't really a good solution, because we still have to supply all those values in every test. So we are now looking into writing our tests in a more specification/BDD style, which should help reduce the setup code and therefore the time spent maintaining the tests. A couple of resources you can check out are http://elegantcode.com/2009/12/22/specifications/ and, for the BDD style of testing with MSpec, http://elegantcode.com/2009/07/05/mspec-take-2/
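To give a feel for that style, here is a small MSpec sketch (the OrderService/OrderResult names are made up; the point is the Establish/Because/It structure, where context is established once per specification):

using Machine.Specifications;

[Subject(typeof(OrderService))]
public class when_placing_a_valid_order
{
    static OrderService service;
    static OrderResult result;

    // shared context, established once for the whole specification
    Establish context = () =>
        service = new OrderService(new FakeOrderRepository());

    // the single action being specified
    Because of = () =>
        result = service.Place(new Order());

    // one observation per outcome
    It should_accept_the_order = () =>
        result.Accepted.ShouldBeTrue();
}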

Henning
A: 

You might be writing your unit tests too close to your classes. What you should do is test public APIs. And by public APIs, I don't mean the public methods on all your classes; I mean your public controllers.

By having your tests mimic how a user would interact with your controllers, without ever touching your model classes or helper functions directly, you allow yourself to refactor your code without having to refactor your tests. Of course, sometimes even your public API changes, and then you'll still have to change your tests, but that will happen far less often.
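As a rough illustration (the controller, repository, and model names here are all invented; NUnit assumed), such a test only goes through the controller's public surface:

[Test]
public void Index_lists_the_available_products()
{
    // a fake dependency injected into the controller under test
    var controller = new ProductsController(new FakeProductRepository());

    // exercise the action the way a request would
    var result = (ViewResult)controller.Index();
    var model = (IEnumerable<Product>)result.ViewData.Model;

    // assert only on the publicly visible outcome
    Assert.IsTrue(model.Any());
}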

The downside of this approach is that you'll often have to go through complex controller setup just to test a tiny new helper function you want to introduce, but I think it's worth it in the end. Moreover, you'll end up organizing your test code in a smarter way, which makes that setup code easier to write.

Virgil Dupras
Your answer is correct, but it summarizes the blog article I already quoted...
Adrian Grigore
Then you must be doing something wrong, because no, you're not supposed to constantly be refactoring your tests when you refactor your code.
Virgil Dupras
I didn't say constantly. But I did find that the cost of maintenance is exceeded by the cost of creation of TDD tests. Is that different in your experience?
Adrian Grigore
(Did you mean "cost of creation is exceeded by cost of maintenance"? You seem to have mixed them up.) In my experience, it's not the case. I seldom have to refactor my tests because the way my public controllers work together seldom changes. When I add a new feature, it usually doesn't affect the rest of the code. When I have to refactor stuff, it's usually under the blanket of my public controllers' API.
Virgil Dupras
+1  A: 

This article helped me a lot: http://msdn.microsoft.com/en-us/magazine/cc163665.aspx

On the other hand, there's no miracle method to avoid refactoring unit tests.

Everything comes with a price, and that's especially true if you want to do unit testing.

Gerrie Schenck
Thanks for the article, I'll have a look. I am of course aware that everything comes at a price. What puzzles me is that in those discussions on whether TDD really makes sense, people complain about unit tests being too tedious to write, whereas I spend much more time maintaining them than writing them.
Adrian Grigore
+1  A: 

Most of the time I see such refactorings affect the setup of the unit tests, frequently by adding dependencies or changing expectations on those dependencies. These dependencies may be introduced by later features but affect earlier tests. In these cases I've found it very useful to refactor the setup code so that it is shared by multiple tests (and parameterized so that it can be flexibly configured). Then, when a new feature forces a change to the setup, I only need to refactor the tests in a single place.
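A minimal sketch of the idea (all names are invented; it uses C# 4 optional parameters, with older compilers you'd use overloads instead):

// one shared, parameterized factory: when a refactoring introduces a new
// dependency, only this method (and the tests that care) must change
private static OrdersController CreateController(
    IOrderRepository repository = null,
    IClock clock = null)
{
    return new OrdersController(
        repository ?? new FakeOrderRepository(),
        clock ?? new FakeClock());
}

[Test]
public void Overdue_orders_are_flagged()
{
    // only the dependency this test actually cares about is specified
    var controller = CreateController(clock: new FakeClock(new DateTime(2010, 1, 1)));
    // ...
}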

tvanfosson
I've also done something similar by adding fake object factories. I also have a base test class that takes care of dependency injection, and sometimes a setup method shared by multiple tests. I suppose that's what you mean by refactoring the setup code?
Adrian Grigore
+4  A: 

This is a well-known problem that can be addressed by writing tests according to best practices. These practices are described in the excellent xUnit Test Patterns. The book describes the test smells that lead to unmaintainable tests, and provides guidance on how to write maintainable unit tests.

After having followed those patterns for a long time, I wrote AutoFixture, an open source library that encapsulates a lot of those core patterns.

It works as a Test Data Builder, but can also be wired up to work as an Auto-Mocking container and do many other strange and wonderful things.

It helps a lot with regard to maintenance because it raises the abstraction level of writing a test considerably. Tests become a lot more declarative because you state that you want an instance of a certain type instead of explicitly writing how it is created.

Imagine that you have a class with this constructor signature:

public MyClass(Foo foo, Bar bar, Sgryt sgryt)

As long as AutoFixture can resolve all the constructor arguments, you can simply create a new instance like this:

var fixture = new Fixture();
var sut = fixture.CreateAnonymous<MyClass>();

The major benefit is that if you decide to refactor the MyClass constructor, no tests break because AutoFixture will figure it out for you.

That's just a glimpse of what AutoFixture can do. It's a stand-alone library, so it will work with your unit testing framework of choice.

Mark Seemann
Thanks a lot, I'll have a look!
Adrian Grigore
A: 

Two areas I focus on when I start to feel refactoring pain around setup are making my unit tests more specific and making my methods/classes smaller. Essentially, the pain means I am getting away from SOLID/SRP, or I have tests that are trying to do too much.

It is worth noting that I try to stay away from BDD/context-spec style the further from the UI I get. Testing a behavior is great, but it always leads me (perhaps I am not doing it right?) to bigger, messier tests, with more context specification than I like.

Another way I have seen this happen is as code debt creeping into methods that grow their business logic over time. Of course there will always be big methods and classes with multiple dependencies, but the fewer I have, the less 'test rewrite' I have.

MarcLawrence
A: 

If you find yourself creating complicated test scaffolding involving deep object graphs, like Russian dolls, consider refactoring your code so that the class under test gets exactly what it needs in its constructor/arguments, rather than having it walk the graph.

Instead of:

public class A {

   public void foo(B b) {
      // foo() only needs someField, yet every test must build
      // a B that contains a C that contains a D
      String someField = b.getC().getD().getSomeField();
      // ...
   }
}

Change it to:

public class A {

   public void foo(String someField) {
      // the caller (or the test) supplies the value directly
      // ...
   }
}

Then your test setup becomes trivial.
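For instance, in C# terms (mirroring the Java-style sketch above; A and the argument value are just placeholders), the arrange step collapses to a single direct call:

[Test]
public void Foo_uses_the_value_it_is_given()
{
    // no object graph to build: pass the method exactly what it needs
    var a = new A();
    a.Foo("some value");
    // ...assert on the observable outcome
}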

Caffeine Coma