views:

103

answers:

2

In order to avoid unnecessary testing, I would like to provide the Quality Assurance (QA) team with hints on which features have to be regression tested after a development iteration. Do you know of tools that could do that in a C++ and Subversion (and Visual Studio) dev environment?

Details about the use case:

  1. Features would be defined by the development team in terms of entry points, typically classes or class methods. Say, feature "excel file import" is defined by method ImportExcelFile(...) of class FileImporter.
  2. During the development iteration, the development team commits changes to some methods of some classes. Say, one of these classes is indirectly used by method ImportExcelFile().
  3. At the end of the iteration, all commits are analysed by the tool and a report is produced and delivered to the QA team. In our example, the QA team is informed that feature "excel file import" must be tested, and that other features X, Y and Z are unchanged.

Very probably, such a tool would use static code analysis and consume the Subversion APIs. But does it exist?
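To make the use case concrete, here is a minimal sketch of the kind of report I have in mind, assuming a hand-maintained map from features to the source files behind their entry points. The file names, working-copy path and revision number are made up, and a real tool would derive the file sets via static call-graph analysis of the entry points rather than a hard-coded table:

    import subprocess

    # Hypothetical map from feature (as QA knows it) to the source files
    # reachable from its entry point.
    FEATURE_FILES = {
        "excel file import": {"src/FileImporter.cpp", "src/ExcelReader.cpp"},
        "csv export":        {"src/CsvExporter.cpp"},
    }

    def changed_paths(working_copy, last_tested_rev):
        """Files touched between the last tested revision and HEAD, via 'svn diff --summarize'."""
        out = subprocess.run(
            ["svn", "diff", "--summarize", "-r", f"{last_tested_rev}:HEAD"],
            cwd=working_copy, check=True, capture_output=True, text=True).stdout
        # Each output line looks like "M       src/FileImporter.cpp"
        return {line.split()[-1] for line in out.splitlines() if line.strip()}

    if __name__ == "__main__":
        changed = changed_paths("/path/to/working/copy", 1200)  # hypothetical path and revision
        for feature, files in sorted(FEATURE_FILES.items()):
            status = "RE-TEST" if changed & files else "unchanged"
            print(f"{feature}: {status}")

The "RE-TEST"/"unchanged" lines are essentially the report of step 3 above.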

+1  A: 

G'day,

What you are describing isn't really regression testing. You're just testing new features.

Regression testing is where you specifically run your complete test suite to see if the code supporting your new feature has broken previously working code.

I'd highly recommend reading Martin Fowler's excellent paper "Continuous Integration" which covers some of the aspects you are talking about.

It may also provide you with a better way of working, specifically the CI aspects Martin talks about in his paper.

Edit: Especially because CI has some hidden little traps that are only obvious in hindsight, such as stopping testers from trying to test a version that doesn't yet have all the files implementing a new feature committed. (You verify that there have been no commits in the last five minutes.)
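A rough sketch of that check, assuming you parse the "Last Changed Date" line printed by "svn info" (the repository URL and the five-minute window are just examples):

    import datetime
    import subprocess

    def last_commit_time(repo_url):
        """Parse the 'Last Changed Date' line printed by 'svn info'."""
        out = subprocess.run(["svn", "info", repo_url],
                             check=True, capture_output=True, text=True).stdout
        for line in out.splitlines():
            if line.startswith("Last Changed Date:"):
                # e.g. "Last Changed Date: 2009-12-16 10:23:45 +0100 (Wed, 16 Dec 2009)"
                stamp = line.split(":", 1)[1].strip().split(" (")[0]
                return datetime.datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S %z")
        raise RuntimeError("no 'Last Changed Date' found")

    def quiet_enough_to_test(repo_url, quiet_minutes=5):
        """True if nothing has been committed in the last few minutes."""
        age = datetime.datetime.now(datetime.timezone.utc) - last_commit_time(repo_url)
        return age >= datetime.timedelta(minutes=quiet_minutes)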

Another big point is the loss of time if you have a broken build and aren't aware that it is broken until someone checks out the code and then tries to build it so that they can test it.

If it's broken, you now have:

  • a tester sitting around unable to do the scheduled tests,
  • a developer interrupting their current work to go back to a previous piece of work to sort out what's causing the broken build (more probably several developers, because the problem is often an interaction between two separate pieces, each of which worked on its own),
  • time loss due to the developer(s) having to get back into the mindset for that previous piece of work, and
  • time loss for the developer to get back into the mindset of the new piece of work that they were working on before the interruption to investigate.

The basic idea of CI is to do several builds of the complete product during the day so that you trap a broken build as early as possible. You may even select a few tests to check that the basic functionality of your product is still working; once again, the aim is to be notified as soon as possible that there is a problem with the current state of your build.

Edit: As for your question, what about tagging the repository when you've done your testing, e.g. TESTS_COMPLETE_2009_12_16? Then, when you're ready to work out what the next set of tests should cover, do an "svn diff" between that latest tests-complete tag and HEAD?
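A minimal sketch of that workflow, assuming a conventional trunk/tags layout (the repository URLs and tag name are made up): record the tested state as a cheap Subversion tag, then summarize what has changed on trunk since that tag.

    import subprocess

    REPO  = "http://svn.example.com/project"          # hypothetical repository root
    TRUNK = REPO + "/trunk"
    TAG   = REPO + "/tags/TESTS_COMPLETE_2009_12_16"  # created when the last QA pass finished

    def tag_tested_state():
        """Record the just-tested state as a tag (a cheap copy in Subversion)."""
        subprocess.run(["svn", "copy", TRUNK, TAG, "-m", "QA pass complete"], check=True)

    def changed_since_last_test_pass():
        """List the files that differ between the tested tag and the current trunk HEAD."""
        out = subprocess.run(["svn", "diff", "--summarize", TAG, TRUNK],
                             check=True, capture_output=True, text=True)
        return out.stdout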

HTH

BTW I'll update this answer with some further suggestions as I think of them.

cheers,

Rob Wells
Rob, thank you for your answer. Actually I'm aware of --and a supporter of-- Martin Fowler's publications, and we are using continuous integration, including automated unit testing. The point here is that we also have a separate QA team that focuses on testing features -- "stories" in XP terms. We would like to be able to guide them on which story(ies) should be re-tested after a number of commits, especially in order to prevent "over-testing" of stories that couldn't possibly have regressed.
Denis Dollfus
@Denis, cheers. Could your devs maybe tag the commits for a single user story? Making a single commit when the story is complete is probably both dangerous (as in potential loss of work because of the local copy getting lost) and inflexible. I'd suggest maybe tagging the repository when a user story is finished and committed. BTW I wish I had a dollar for each time someone has said "couldn't possibly have regressed" to me when it plainly has! (-:
Rob Wells
A: 

Split your project up into separate executables and build them.

Make will rebuild any executable if its dependencies change.

Add the output files of any chained tests to the dependencies of the next test - for example, the save-file test's output as a dependency of the read-file test.

Anything which has been built after this point needs unit testing.

If any libraries use common exhaustible resources (heap memory, disk, global mutexes, etc.), add them as dependencies too, as exhaustion due to a leak in one library is often a regression failure in another.

Anything which has been built after a certain point needs regression testing.

Unless you are working in an environment which guarantees the absence of resource exhaustion (e.g. TinyC), you will end up regression testing everything. Regression testing is not unit testing.

Pete Kirkham