views: 51
answers: 2
Hi,

I was looking for some kind of solution for software development teams which spend too much time handling unit test regression problems (about 30% of the time in my case!), i.e., dealing with unit tests which fail on a day-to-day basis.

The following is one solution I'm familiar with; it analyzes which of the latest code changes caused a certain unit test to fail:

Unit Test Regression Analysis Tool

I wanted to know if anyone knows of similar tools so I can benchmark them. Also, can anyone recommend another approach to handle this annoying problem?

Thanks in advance

+1  A: 

Test often, commit often.

If you don't do that already, I suggest using a Continuous Integration tool, and asking/requiring the developers to run the automated tests before committing - at least a subset of the tests. If running all tests takes too long, then use a CI tool that spawns a build (which includes running all automated tests) for each commit, so you can easily see which commit broke the build.
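As a rough illustration of the "subset of the tests" idea, here is a minimal sketch assuming JUnit 5 (the class and tag names are invented for the example): tests tagged "fast" can be run before a commit via a tag filter in the build tool, while the full suite runs on the CI server for every commit.

    import org.junit.jupiter.api.Tag;
    import org.junit.jupiter.api.Test;

    import static org.junit.jupiter.api.Assertions.assertEquals;

    // Hypothetical test class; the point is the @Tag annotation, which lets
    // the build tool run only the "fast" subset before a commit while the
    // CI build runs everything for each commit.
    class OrderTotalTest {

        @Test
        @Tag("fast")              // cheap, isolated unit test: safe to run pre-commit
        void addsLineItemPrices() {
            Order order = new Order();
            order.addItem(2, 5);  // quantity 2, unit price 5
            assertEquals(10, order.total());
        }

        @Test
        @Tag("slow")              // e.g. hits a database; left to the CI build
        void persistsOrderToDatabase() {
            // ...integration-style test, excluded from the pre-commit subset
        }

        // Minimal stand-in class so the sketch is self-contained.
        static class Order {
            private int total;
            void addItem(int quantity, int unitPrice) { total += quantity * unitPrice; }
            int total() { return total; }
        }
    }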

If the automated tests are too fragile, maybe they don't test the functionality, but the implementation details? Sometimes testing the implementation details is a good idea, but it can be problematic.

Thomas Mueller
Hi, sorry for the late response - I've been out of office. Thanks for your replies. 3. Yes, the tests are pretty unique. There is, of course, infrastructure code that runs in most of the tests, but basically we don't duplicate tests. 4. We're testing often, but it doesn't always help. First of all, there are builds in which large change sets are committed into source control, and second, there are times when the build is not stable and compilation fails for several days, so I get the notification that a test failed a week after it actually failed.
SpeeDev
5. Regarding running the automated tests before each commit - unrealistic in my (and I believe in most) situations, since running all the integration tests takes a very long time, and even if we did run them, if a test fails on my machine I still need to figure out which of the code changes caused it to fail - but you're right that it should be easier.
SpeeDev
6. Regarding running a subset of the tests most likely to fail - since the failures are usually caused by other team members' changes (at least in my case), I would need to ask others to run my tests, which might be 'politically problematic' in some development environments ;). Any other suggestions will be appreciated. Thanks a lot
SpeeDev
In my view, the whole point of Continuous Integration is to find out the build is broken *right after* (or even before) committing. If you get a notification that a test failed a week after it really failed, then you definitely don't have Continuous Integration. Running the automated tests before commit should be the norm, and it is the norm where I work. Unfortunately it's not always possible to run all tests because that would take too long, but then you need to split the tests into quick ones and long-running ones (which are only run during the night). It is also the norm to run other people's tests.
Thomas Mueller
+2  A: 

You have our sympathy. It sounds like you have brittle test syndrome. Ideally, a single code change should break only a single test - and that failure should indicate a real problem. Like I said, "ideally". But this type of behavior is common and treatable.

I would recommend spending some time with the team doing some root cause analysis of why all these tests are breaking. Yep, there are some fancy tools that keep track of which tests fail most often, and which ones fail together. Some continuous integration servers have this built in. That's great. But I suspect if you just ask each other, you'll know. I've been through this, and the team always just knows from experience.

Anywho, a few other things I've seen that cause this:

  • Unit tests generally shouldn't depend on more than the class and method they are testing. Look for dependencies that have crept in. Make sure you're using dependency injection to make testing easier (see the sketch after this list).
  • Are these truly unique tests? Or are they testing the same thing over and over? If they are always going to fail together, why not just remove all but one?
  • Many people favor integration over unit tests, since they get more coverage for their buck. But with these, a single change can break lots of tests. Maybe you're writing integration tests?
  • Perhaps lots of tests run through some common set-up code, causing them to break in unison. Maybe this set-up can be mocked out to isolate behaviors.
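To make the dependency injection point concrete, here is a minimal sketch (plain JUnit 5; all class names are invented for the example). The class under test receives its collaborator through the constructor, so the test can pass in a trivial fake instead of the real service, and changes elsewhere in the code base cannot break this test.

    import org.junit.jupiter.api.Test;

    import static org.junit.jupiter.api.Assertions.assertEquals;

    class PriceCalculatorTest {

        // The collaborator is an interface, injected via the constructor,
        // so the unit test depends only on the class under test.
        interface TaxRateProvider {
            double rateFor(String region);
        }

        static class PriceCalculator {
            private final TaxRateProvider taxRates;

            PriceCalculator(TaxRateProvider taxRates) {   // constructor injection
                this.taxRates = taxRates;
            }

            double grossPrice(double netPrice, String region) {
                return netPrice * (1 + taxRates.rateFor(region));
            }
        }

        @Test
        void addsTaxFromInjectedProvider() {
            // A tiny hand-written fake replaces the real provider (which might
            // read a database or shared configuration and make the test brittle).
            TaxRateProvider fixedRate = region -> 0.20;

            PriceCalculator calculator = new PriceCalculator(fixedRate);

            assertEquals(120.0, calculator.grossPrice(100.0, "EU"), 0.0001);
        }
    }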
ndp
Yup. There's no easy answer. Write better tests.
Jaco Pretorius
Hi, sorry for the late response - I've been out of office. 1. I wrote "unit tests" but meant both unit tests and integration tests. Indeed, the integration tests are the ones which usually fail. 2. Regarding "fancy tools that keep track of which tests fail most often" - the tool I posted doesn't track which tests fail most often, but finds the code changes which caused the unit test to fail (watch the movie in the link above if you want to understand better how it works). It's the only one I found on the web, but of course, I'm open to other kinds of solutions and methodologies.
SpeeDev
Any other suggestions will be appreciated. Thanks a lot
SpeeDev
If you are not using mocking (stubbing, spies, etc.), incorporating these judiciously will help make your tests less brittle.
ndp
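As a hedged illustration of that mocking suggestion, here is a minimal sketch assuming Mockito and JUnit 5 are available (the repository and mail classes are invented for the example): the dependencies are stubbed, so the test exercises only the class under test and does not rely on shared infrastructure or set-up code.

    import org.junit.jupiter.api.Test;

    import static org.junit.jupiter.api.Assertions.assertTrue;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.verify;
    import static org.mockito.Mockito.when;

    class ReportSenderTest {

        // Invented collaborators, just to show the stub/verify pattern.
        interface ReportRepository { String latestReport(); }
        interface MailGateway { boolean send(String body); }

        static class ReportSender {
            private final ReportRepository repository;
            private final MailGateway mail;

            ReportSender(ReportRepository repository, MailGateway mail) {
                this.repository = repository;
                this.mail = mail;
            }

            boolean sendLatest() {
                return mail.send(repository.latestReport());
            }
        }

        @Test
        void sendsTheLatestReport() {
            // Stub the dependencies so no database or real mail server is needed.
            ReportRepository repository = mock(ReportRepository.class);
            MailGateway mail = mock(MailGateway.class);
            when(repository.latestReport()).thenReturn("weekly summary");
            when(mail.send("weekly summary")).thenReturn(true);

            assertTrue(new ReportSender(repository, mail).sendLatest());

            // Verify the interaction instead of inspecting real infrastructure.
            verify(mail).send("weekly summary");
        }
    }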