views: 70
answers: 4
Does anyone know of a tool that can help determine which unit tests should be run based on the diffs from a commit?

For example, suppose a developer commits a change that touches only one line of code. Now assume that I have 1000 unit tests, with code coverage data for each unit test (or perhaps just for each test suite). It is unlikely that all 1000 tests need to run for that one-line change; more likely, only a few of them actually exercise the changed line. Is there a tool out there that can determine which test cases are relevant to a developer's code changes?
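Conceptually, I imagine something like the following (just a sketch; assume each test's coverage is saved as a list of source files, one per line, under a hypothetical coverage/ directory):

    # files changed by the latest commit
    changed=$(git diff --name-only HEAD~1 HEAD)
    [ -n "$changed" ] || exit 0

    # coverage/<test>.files lists the source files that <test> touches
    for covfile in coverage/*.files; do
        test_name=$(basename "$covfile" .files)
        # pick the test if any changed file appears in its coverage list
        if printf '%s\n' "$changed" | grep -qxFf - "$covfile"; then
            echo "run: $test_name"
        fi
    done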

Thanks!

A: 

You could probably use make or a similar tool to do this: generate a results file for each test, and make each results file depend on the source files that the test exercises (as well as on the unit test code itself).
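A minimal sketch of the idea as a Makefile (all file names here are illustrative, and recipe lines must start with a tab):

    # One ".ok" stamp file per test, dependent on the sources that the
    # test exercises; "make check" re-runs only tests whose inputs are
    # newer than their stamp.
    check: results/test_parser.ok results/test_lexer.ok

    results/test_parser.ok: tests/test_parser.c src/parser.c src/parser.h | results
    	cc -o results/test_parser tests/test_parser.c src/parser.c
    	results/test_parser && touch $@

    results/test_lexer.ok: tests/test_lexer.c src/lexer.c src/lexer.h | results
    	cc -o results/test_lexer tests/test_lexer.c src/lexer.c
    	results/test_lexer && touch $@

    results:
    	mkdir -p results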

nategoose
A: 

Our family of Test Coverage tools can tell you which tests exercise which parts of the code, which is the basis for what you are asking.

They can also tell you which tests need to be re-run when you re-instrument the code base. In effect, the tools compute a diff on source files they have already instrumented, rather than using commit diffs, but that achieves the effect you are looking for, IMHO.

Ira Baxter
Would I be required to use the tools for obtaining code coverage? Or could we still use our own tools for obtaining code coverage? What would need to be done to integrate the tools with the build process?
DuneBug
You'd have to use these tools, since the machinery that does the differential computation is built into them. They have both UI and command-line capability, so integrating them into batch build scripts should be straightforward. Further discussion should probably happen offline; see my bio for contact information.
Ira Baxter
A: 

You might try running them with 'prove', which has a 'fresh' option (--state=fresh) based on file modification times. Check the prove manpage for details.
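For example, something like this should re-run only the test scripts that changed since the last saved run (per the prove manpage; I haven't verified it myself):

    # run only tests modified since the last run, then save state again
    prove --state=fresh,save t/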

Disclaimer: I'm new to C unit testing and haven't used prove myself, but I read about this option in my research.

Ozten
A: 

As far as I understand, the key purpose of unit testing is to cover the entire code base. When you make a small change to one file, all tests have to be executed to make sure your micro-change doesn't break the product. If you break this principle, there is little point in unit testing at all.

PS: I would suggest splitting the project into independent modules/services and creating new "integration unit tests" that validate the interfaces between them. But within one module/service, all unit tests should be executed as "all or nothing".

Vincenzo
We have over 1000 developers committing code into the same branch. They may commit code for services that are completely unrelated (i.e., service A will never use code from service B), but all of these services are bundled into a single image. With thousands of unit tests, it is not practical for us to run every unit test for each commit (e.g., when a developer working on service A commits code, there is no need to run the tests for service B). The entire suite can be run every few days instead, but running the tests relevant to each commit would be highly beneficial.
DuneBug
@DuneBug How do you know that some services "are completely unrelated"? Where do you keep this knowledge? What if one day the situation changes and service A starts to use service B?
Vincenzo