views: 358

answers: 11

How can you make sure that all developers on your team are unit testing their code? Code coverage metrics are the only way I can think of to objectively measure this. Is there another way?

(Of course if you're really following TDD then this shouldn't be an issue. But let's just suppose you've got some developers that don't quite "get" TDD yet.)

+6  A: 

Run test coverage reports automatically during your build process. We do this with CruiseControl. Otherwise, you have to actually inspect what is getting tested in your test results reports.
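The automated coverage gate can be sketched in a few lines. This is a hypothetical example, assuming the coverage tool prints a summary line such as `TOTAL    120    20    83%`; the exact format varies by tool, so adjust the pattern to your own report.

```python
# Sketch: a build gate that fails when total coverage drops below a
# threshold. Assumes the coverage tool prints a summary line such as
# "TOTAL    120    20    83%" -- adapt the regex to your tool's output.
import re

THRESHOLD = 80  # minimum acceptable line coverage, in percent (assumed)

def coverage_percent(report_text):
    """Extract the total coverage percentage from a textual report."""
    match = re.search(r"TOTAL\s.*?(\d+)%", report_text)
    if match is None:
        raise ValueError("no TOTAL line found in coverage report")
    return int(match.group(1))

def build_passes(report_text, threshold=THRESHOLD):
    """Return True when coverage meets the threshold."""
    return coverage_percent(report_text) >= threshold
```

In CruiseControl (or any CI server) you would run a check like this after the coverage step and fail the build when it returns False.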

BC
Do you still have to inspect the tests to ensure the tests are actually valid?
Kibbee
A: 

Have them submit a report or a screenshot of their unit test results on a regular basis. They can either fake it (which would likely take more time than actually running the tests) or actually do them.

In the end, you are going to know who is not doing the tests: they will be the ones with the bugs that the unit tests would have easily caught.

Ryan Guill
Dumb da dumb dumb
Seventh Element
+1  A: 

Here we just have a test folder, with a package structure mirroring the actual code. To check in a class, policy states it must have an accompanying test class, with guidelines about which methods need to be tested and how. (For example, we don't require pure getters and setters to be tested.)

A quick glance at the testing folder shows when a class is missing, and the offending person can be beaten with a rubber hose (or whatever depending on your policy).

Stephen Pape
So I can (1) write some buggy code, (2) try various tests until I find one that doesn't catch any of my bugs, and check that test in. Plausible deniability.
finnw
Sure, but then that rubber hose comes back when someone else reviews your code.
Stephen Pape
+1  A: 

Go in and change a random variable or pass a null somewhere and you should expect to see a bunch of red. =D

Mark Renouf
This isn't such a crazy idea. By planting known defects, you can estimate how well your QA process is working by comparing the number of known defects found to the number actually planted.
Scott Weinstein
There are fault and bug injection tools that could be used to automate this process if you wanted to.
Steve Rowe
A: 

The issue is as much social as it is technical. If you have developers who "don't quite 'get' TDD yet" then helping them understand the benefits of TDD may be a better long-term solution than technical measures that "make" them write tests because it's required. Reluctant developers can easily write tests that meet code coverage criteria and yet aren't valuable tests.

Abie
+10  A: 

This is probably a social problem rather than a technological one. First, do you want unit tests that result in 100% code coverage, or can you settle for less and trust your developers to put in unit tests where they really matter and where they make sense? You could probably get some kind of system in place that runs code coverage checks to ensure unit tests cover a certain percentage of the code. But then there would still be ways to game the system, and it still wouldn't result in code that was bug-free. Due to things like the halting problem, it's impossible to cover every path through the code.

Kibbee
+1 I agree it doesn't make sense to force them with some rigid system, they would just game that system. You have to *explain* to them how important it is. That is harder to do than installing some enforcing mechanism, but certainly more effective.
+1  A: 

Code coverage tools are almost certainly superior to any ad hoc method you could come up with. That's why they exist.

Even developers who get TDD are far from immune to gaps in coverage. Often, code that fixes a bug breaks a related test, or creates a branch that the original developer did not anticipate and the maintenance developer didn't realize was new.

Jekke
+2  A: 

A good way to get tests written is to increase accountability. If a developer has to explain to someone else exactly why they didn't write unit tests, they're more likely to do so. Most companies I've worked at have required that any proposed commit to a trunk be reviewed by another developer before the commit, and that the name of the reviewer be included in the commit comments. In this environment, you can tell your team that they should not allow code to "pass" peer code review unless unit tests are in place.

Now you have a chain of responsibility. If a developer commits code without naming the reviewer, you can ask them who reviewed the code (and, as I learned the hard way, having to say "nobody" to your boss when asked this question is no fun!). If you do become aware of code being committed without unit tests, you can ask both the developer and the code reviewer why unit tests were not included. The possibility of being asked this question encourages code reviewers to insist on unit tests.

One more step you can take is to install a commit hook in your version control system that e-mails the entire team when a commit is made, along with the files and even code that made up the commit. This provides a high level of transparency, and further encourages developers to "follow the rules." Of course, this only works if it scales to the number of commits your team does per day.
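The notification hook above can be sketched briefly. Everything specific here (the team address, the revision format, how the hook is wired into your VCS) is a placeholder assumption:

```python
# Sketch of a post-commit notification: build a mail summarizing one
# commit for the whole team. The addresses and fields are placeholders;
# wire this into your VCS's post-commit hook and your own SMTP server.
from email.message import EmailMessage

TEAM = "dev-team@example.com"  # assumed team mailing list

def commit_mail(author, revision, files, diff):
    """Build the notification message for a single commit."""
    msg = EmailMessage()
    msg["From"] = author
    msg["To"] = TEAM
    msg["Subject"] = "Commit r%s by %s" % (revision, author)
    body = "Changed files:\n" + "\n".join("  " + f for f in files)
    body += "\n\n" + diff
    msg.set_content(body)
    return msg
```

In the real hook you would hand the message to `smtplib.SMTP(...).send_message(msg)`.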

This is more of a psychological solution than a technical solution, but it's worked well for me when managing software teams. It's also a bit gentler than the rubber hose suggested in another answer. :-)

MattK
+1  A: 

One way to do it would be to write a script that searches all check-ins for the term 'test' or 'TestFixture' (obviously depending on your environment). If a commit log or an email detailing the changes is sent to the manager, then it would be trivial, with your favorite text-processing language, to scan the code files for signs of unit tests (the Assert keyword would probably be the best bet).
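The keyword scan might look like this. The marker list is an assumption for a .NET/Java-style codebase, and matching keywords is only a smoke test, not proof that meaningful tests exist:

```python
# Sketch: flag check-ins whose added lines show no sign of tests.
# The marker list is an assumption; tune it to your environment.
import re

TEST_MARKERS = re.compile(r"\b(assert|Assert|TestFixture|@Test)\b")

def looks_tested(diff_text):
    """True if the added lines of a unified diff mention a test marker."""
    added = [line[1:] for line in diff_text.splitlines()
             if line.startswith("+") and not line.startswith("+++")]
    return any(TEST_MARKERS.search(line) for line in added)
```

Commits where this returns False go on the list of things to raise at the next code review.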

If there aren't unit tests, then during your next code review, take an example of a recent check-in, and spend ten minutes talking about the possible ways it could 'go wrong', and how unit tests would have found the error faster.

George Stocker
wtf? why would this get voted down?
Jeffrey Fredrick
I'm not really sure. I guess they're mad I didn't provide them with the Outlook plugin or Perl script to do just this (there are myriad ways of searching for the data; I provided two).
George Stocker
A: 

One thing that should be mentioned here is that you need a system for regularly running the unit tests. They should be part of your check-in gauntlet or nightly build system. Merely making sure the unit tests are written doesn't ensure you are getting value out of them.

Steve Rowe
A: 

Sit down with them and observe what they do. If they don't unit test, remind them gently.

Morendil