views: 326
answers: 7

I have a number of projects in a solution file that have unit tests written for them, and I want to set those tests up to be run by our continuous integration server. However, because many of the tests have been written poorly and have not been run regularly, many of them are failing.

I don't have the time at the moment to fix all of the tests, but I do believe there is value in having the existing tests run. What is the best way to deal with the failing unit tests?

What I am currently doing is marking each failing test as Explicit and leaving a TODO comment.

[Test, Explicit] //TODO: Rewrite this test because it fails
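
In full, inside a test fixture, that looks something like this (the fixture and method names here are only placeholders, not from my real project):

    using NUnit.Framework;

    [TestFixture]
    public class AccountTests
    {
        [Test, Explicit] //TODO: Rewrite this test because it fails
        public void Transfer_UpdatesBothAccountBalances()
        {
            // ...the original failing assertions stay in place...
        }
    }

With [Explicit], NUnit skips the test in a normal run and only executes it when it is explicitly selected in the runner, so the build stays green while the test code is preserved.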

Is there a better way of doing this? Or should I fix all the tests before including them in the suite that is run by the CI server?

+6  A: 

Well, in NUnit you have the option to ignore tests using the [Ignore] attribute:

[Test, Ignore("Test needs rewrite")]
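
Dropped into a test method like the one in the question, it reads (the method name is again just a placeholder):

    [Test, Ignore("Test needs rewrite")]
    public void Transfer_UpdatesBothAccountBalances()
    {
        // ...original assertions left in place...
    }

Ignored tests still show up in the test results together with the reason string, so they stay visible in the CI output instead of silently disappearing.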

Personally though, there are two things that I do with such tests:

  • Delete them if I don't understand the test, or if the test is out of date/out of sync with current specs
  • Refactor them to the correct spec if the fix is trivial

From what you've written, I suspect that many of those failing tests are out of date and may not be relevant in the first place, so I think it would be fine to delete them.

There's no point in keeping a test that nobody understands anyway.

UPDATE: Oren Eini has a blog post that outlines much of how I feel about resurrecting old, failing tests:

The tests has no value by themselves: My most successful project didn't have any tests

To quote:

Tests are a tool, and its usage should be evaluated against the usual metrics before applying it in a project. There are many reasons not to use tests, but most of them boil down to: "They add friction to the process".

If retrofitting old, failing tests adds friction to the process, maybe it isn't worth updating them at all.

Jon Limjap
You can also include a comment in [IgnoreAttribute], which makes them more useful still.
Roger Lipscombe
Roger, thanks for reminding me about that... edited the answer to use the ignore comment :)
Jon Limjap
I'm accepting this one as the answer because it helped me the most. Having the comment in the ignore attribute really helps. I also liked krosenvold's and Jon Skeet's answers. At the moment I'll just flag the tests and notify those higher up about them before I remove or replace them.
mezoid
+8  A: 

Since you have a running automated build (with test failure notification!), this sounds like it's time for 5-a-day (an idea borrowed freely from the Ubuntu community):

In each failing test method, insert the following guard at the top:

if (DateTime.Now < new DateTime(2008, 12, 24, 11, 00, 00)) return;

For every five times you insert this statement, advance the date by one working day. Set the time of day to a time when you're likely to have time to fix the test.

When the working day arrives, you fix it or delete it.
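
A rough sketch of what that guard looks like inside a real NUnit test (the fixture name, method name, and date are placeholders):

    using System;
    using NUnit.Framework;

    [TestFixture]
    public class LegacyCalculationTests
    {
        [Test]
        public void Interest_IsCompoundedMonthly()
        {
            // Snooze this known-failing test until the agreed fix-by date;
            // push the date out one working day for every five guards you add.
            if (DateTime.Now < new DateTime(2008, 12, 24, 11, 00, 00)) return;

            // ...the original failing assertions stay below the guard...
        }
    }

Unlike [Ignore], the test re-enables itself automatically on the chosen date, so it can't be forgotten; the trade-off is that until then it exits quietly with a pass.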

krosenvold
+1  A: 

You are doing a good job setting up a continuous integration server that runs all your tests.

But what are disabled tests good for? They are like commented-out code: dead tests. As Jon said, make them run or delete them. If they are as poorly written as you say, it is often better to write new ones.

But when will you have time to fix them? The tests are the only safety net a software developer has when moving forward. You need to take the time now, or you will pay for it later. But maybe it would take less time to write new tests...

Arne Burmeister
+1  A: 

What would you do with some other code that has accumulated technical debt?

If you are doing TDD (test-first), unit tests do two things for you. One is to help design your objects with low coupling and high cohesion; these tests are no longer doing jack shit for you. The second is to let you refactor your code without changing behavior.

It sounds like your failing tests are now an opportunity cost. In other words, they are no longer adding value to your project; they are just costing you money and time. Look at the time you have already spent wondering what to do with them. The tests are no longer valid.

IMHO, I would delete the tests. They are no longer covering the code, so if you refactor the code the tests do not protect its behavior. It's like having comments describing code that has since changed, where the comments were never updated.

If you do delete the tests, you will need to treat the code that was supposedly covered by them as "legacy" code (Feathers' definition: code without tests).

Gutzofter
+3  A: 

I disagree with the idea of just deleting the test. If it looks like it should work but it doesn't, that's important information. If a test is mostly okay, but there's something environmental which is causing it to fail (e.g. reading a local file which is now in a different place) then it can easily provide value again when you find time to fix it. However:

  • Add a comment explaining why the test fails. If it is due to a bug elsewhere, a bug ID etc. is useful. Make sure you provide enough information to come back to it in a year's time.
  • Have some tool (it can probably be really simple - grep!) generate a report on a weekly basis so you don't forget about the test (a rough sketch follows after this list).
  • If possible, find some way of automatically running the ignored tests periodically, just to check whether they still fail. A test which was failing and magically starts to work can give very important information (in conjunction with source history, of course).
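
For that weekly report, a minimal sketch of such a tool (a hypothetical stand-alone C# program; a one-line grep over the test sources would do just as well) might look like this:

    using System;
    using System.IO;
    using System.Linq;

    // Scans a source tree for tests parked with [Ignore(...)] or [Explicit]
    // and prints file:line so the list can be mailed around once a week.
    class IgnoredTestReport
    {
        static void Main(string[] args)
        {
            string root = args.Length > 0 ? args[0] : ".";

            var parked = Directory.GetFiles(root, "*.cs", SearchOption.AllDirectories)
                .SelectMany(file => File.ReadAllLines(file)
                    .Select((text, index) => new { file, text, line = index + 1 }))
                .Where(x => x.text.Contains("[Ignore(") || x.text.Contains("Explicit]"));

            foreach (var hit in parked)
                Console.WriteLine("{0}:{1}: {2}", hit.file, hit.line, hit.text.Trim());
        }
    }

Hooking something like this into the nightly build keeps the parked tests from being forgotten.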
Jon Skeet
+2  A: 

I don't have the time at the moment to fix all of the tests

I think you have something backward here...

If you really consider the tests to have value, then with respect I'd suggest you don't have time not to fix them. Right now, they're telling you that the software either doesn't do what it's supposed to do or that the tests are checking for something that no longer applies. Either way it's an indication that the process is broken somewhere.

So, particularly given the time of year and unless you have month- or year-end issues, I'd be making time to clean up either my tests or my code or both.

Seriously, what's the point of having tests if you don't listen to what they're telling you? And why bother running continuous integration if you can't trust what it does?

Mike Woodhouse
A: 

And how can you be sure the tests aren't failing because something really is wrong?

HLGEM