views: 152 · answers: 3

Upper management wants each group to show year-over-year improvement (i.e. demonstrate gains with data, not just state an opinion). How have you shown improvement in QA? What metrics have you used?

This isn't about rating one tester against another. It is about showing a department's growth and giving individual testers the ability to highlight personal improvement.

+3  A: 

It's important to be clear about what it is that your QA department does. This'll vary somewhat from company to company, but ultimately, QA is a data-gathering operation. Number of bugs filed per person/project is easy to measure, but has little to do with how much work the QA team is doing, or how effective they are.

Better to look at the percentage of serious bugs found by customers after release versus those found by QA. As the testing improves, this number should go down. Also, measure the number of test cases executed against each release. As the QA process matures, you should see testers becoming more productive (through familiarity, or via automation).
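As a rough sketch of that first metric, the "escape rate" per release could be computed like this (the release names and bug counts below are hypothetical examples, not real data):

```python
# Defect escape rate: the fraction of serious bugs that reached customers
# instead of being caught by QA before release. Lower is better over time.
releases = {
    "1.0": {"found_by_qa": 40, "found_in_field": 10},  # hypothetical counts
    "1.1": {"found_by_qa": 55, "found_in_field": 6},
}

def escape_rate(counts):
    """Fraction of all serious bugs for a release that were found in the field."""
    total = counts["found_by_qa"] + counts["found_in_field"]
    return counts["found_in_field"] / total if total else 0.0

for version in sorted(releases):
    print(f"{version}: escape rate {escape_rate(releases[version]):.0%}")
```

The point of normalizing by the total (rather than reporting raw field-bug counts) is that a busy release naturally produces more bugs everywhere; the ratio tracks how well QA filtered them.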

Mark Bessey
+3  A: 

There are a number of misguided QA metrics, including bugs found. It's a nice easy one, but if the software hasn't changed much, the number of bugs found over time will trend to zero.

Measuring individual testers by how many bugs they raise IS a way of providing incentive among the competitive types, but it can also lead to a lot of small issues being raised (which can be a good or a bad thing).

Some possible useful metrics include:

  • number of new bugs found in the field (i.e. ones QA missed) - should decrease
  • time to retest and close fixed issues
  • number of bugs sent back for clarification - should decrease
  • number of bug reports closed for invalid test assertions - shows understanding, should decrease

If your goals are also specified - e.g. a move to an automated testing system - that itself can be measured. So if you have 10,000 test cases, your metric could be the number of test cases automated, and how many of them are passing/failing.
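That automation-progress metric is easy to track as two ratios, sketched below (the counts are made-up examples; you'd pull real numbers from your test management tool):

```python
# Automation progress: of a fixed suite of test cases, how many have been
# automated, and of the automated ones, how many currently pass.
# All counts here are hypothetical.
total_cases = 10_000
automated = 3_500
passing = 3_200

automation_pct = automated / total_cases            # suite coverage by automation
pass_rate = passing / automated if automated else 0.0  # health of the automated set

print(f"Automated: {automation_pct:.0%} of suite, pass rate {pass_rate:.1%}")
```

Reporting both numbers matters: automation coverage going up while the pass rate collapses would suggest the team is converting cases faster than it can maintain them.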

There's a really good article discussing this at: http://www.claudefenner.com/content/detail/QAMetricsPage.htm

Mark Mayo
+1  A: 

How sophisticated the bugs being found are could be one interesting metric: is it as simple as loading a web page and watching it crash, or does reproducing the error take a number of steps? It is somewhat dependent on how well the developers build the software in the first place, though.

How often bugs are sent back for clarification could also be useful: if developers are spending many hours paired with QA just to understand a bug, that isn't the most productive use of anyone's time.

Lastly, it may be worth having someone write a QA 101 manual, so that practices and knowledge can be captured and revised over time, showing growth in understanding various testing practices and in applying the useful ones to the situation at hand. Those are my suggestions.

JB King