I am trying to determine what level of confidence my development team has in our automated test suite (unit, integration, webtest). Ideally, I would like sensible answers to questions like:

  • Which refactorings do you expect to be covered/not covered by tests?
  • Would you trust a green build to auto-deploy to the acceptance test environment?
  • Would you trust a green build to auto-deploy directly to production?

I was hoping there were some pre-existing metrics, and possibly questionnaires, I could use to explore this area. My goal is to increase the level of automated deployment, but I believe we can only automate things the developers trust and believe in.

Does anyone know any techniques for exploring this?

+1  A: 

Have you thought of calling a meeting and asking them?

Discussing it might bring out possible pitfalls that numbers and metrics couldn't express.

Ben S
krosenvold
+2  A: 

I'm not aware of any existing work in this area. Your questions are good; putting them on paper with a 1-5 rating for each should give you an idea. Other factors to take into account:

  • How often are bugs found after testing that the tests ought to have caught? You can track this with a special field in the bug database.
  • How often do developers run the tests? Here, confidence correlates with frequency: if they believe in the tests, they'll run them often; if they don't believe in them, they won't run them and may even see them as a pain.
  • How often does the build fail on your CI server? Frequent failures usually indicate that the tests are brittle and pass only on developer machines, or that developers don't run the tests before they commit.
Aaron Digulla
We have fairly decent metrics for the bugs found, although we tend to focus measurement on regressions that slip into acceptance testing. Our test batteries are fairly heavy-duty (= time-consuming), so we actually rely on CI to catch some of these issues. I think that focusing on developer perceptions may be just as valuable as "hard" metrics, since they probably reflect the *value* you're getting from the tests.
krosenvold
+1  A: 

Just ask the team these questions directly.

Especially important is the one about auto-deploy to acceptance: find out whether there would be great hesitation and, if so, how you can address it.

Here's another idea: consider a trial period of 3 days, or 1 week, for auto-deploy to acceptance. Hold a second discussion afterwards and assess how it went. Work up to automated production deployments.

There are obviously metrics around:

  • code coverage
  • regressions
  • bug discovery vs. fixing
  • bugs found by integration server

Collecting these may be interesting fodder for the team, but in the end it's, as you say, a question of confidence, and therefore somewhat emotional. What numbers would give them more confidence?
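
If you do poll the team with 1-5 ratings per question (as suggested in another answer), tallying the answers turns the emotional side into numbers too. A minimal sketch; the question labels and ratings below are invented for illustration:

```python
from statistics import mean

def summarize_survey(responses):
    """responses: {question: [rating, ...]} on a 1-5 scale.

    Returns (question, average) pairs sorted weakest-first, so the
    lowest-confidence areas surface at the top.
    """
    return sorted(((q, mean(r)) for q, r in responses.items()),
                  key=lambda item: item[1])

summary = summarize_survey({
    "Refactorings covered by tests": [4, 3, 4, 5],
    "Green build -> auto-deploy to acceptance": [3, 2, 4, 3],
    "Green build -> auto-deploy to production": [1, 2, 1, 2],
})
for question, avg in summary:
    print(f"{avg:.1f}  {question}")
```

Re-running the same survey after the trial period shows whether confidence actually moved.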

ndp