What is the best way to measure QA in scrum?

We have members who typically test and they are measured against how many bugs they find. If they don't find any bugs then they are considered to be doing a bad job.

However, it is my understanding that the developers and quality people are considered one and the same. I would think that they should be judged against the same metrics... not different metrics than the developers, who may also be doing testing work...

What is the best way to handle metrics for QA and should QA people have separate metrics from developers in scrum?

Are there any documents or links someone can point me to regarding this?

A: 

This is coming from the game development side of things, so take that into account:

I, as a lead dev, judge my testers not on the quantity of the bugs they find, but on the quality and completeness of their work. The worst thing they can do is report a bunch of weak, conjecture-type bugs in order to make some bug quota. I'd rather see well-documented bugs that explain exactly what the problem is and how they reproduced it. If only a couple of bugs of this quality are found, so be it, and no one should be in trouble.

As such, yes, testers should have separate metrics from developers. They are not doing the same thing at all. A developer who writes code that gets dinged with many, many bugs should be reprimanded, just as a tester who can't find and tag easily reproducible bugs should be. A developer who writes clean, easily read and managed code should be encouraged, just as a tester who finds obscure but well-documented bugs should be. Given this, how could they have the same metric, scrum or no?

Michael Dorgan
I thought developers and testers were working together as a team to deliver working software. I suggest checking [Measure UP](http://www.poppendieck.com/measureup.htm) for a different opinion.
Pascal Thivent
A: 

We have a form of measurement that is used for both development and QA. Its benefit is that it is based on actual activity rather than guessing at the "quality" of bugs found.

It is called Cost of Quality.

Basically, everyone (developers and testers) records their time spent on a project into one of several buckets. (Time can be recorded daily or weekly.)

The buckets are similar to this:

  • Sprinting (time spent developing and testing in the dev environment)
  • Testing (time spent testing in the test environment)
  • Pre-Release Bugs (time spent on bugs before they are released to production)
  • Post-Release Bugs (time spent on bugs after they are released to production)

(We have several other buckets, like support for issues that don't involve a failure, requirements for design time during sprint planning, and others as needed.)

The idea here is to get ratios of time spent in creation to time spent in fixing bugs.

The way we do it, our QA team tests in dev during the sprint. That time, and issues found then, count toward creation (Sprinting) for both developers and QA. Once the product is sent to our test environment, all QA time is logged under Testing (appraisal). Any issues found, and the time spent fixing and retesting them, are logged under Pre-Release Bugs (internal failure).

After the product releases to production, any time spent on bugs gets logged under Post-Release Bugs (external failure).

The idea is to find out how much time is being spent on internal (or, even worse, external) failures. This lets you know how well QA is really performing.
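As a rough illustration, here is a minimal Python sketch of rolling bucketed time entries up into these ratios. The entry format, bucket keys, and numbers are assumptions for illustration, not the answer's actual tooling:

```python
from collections import defaultdict

# Hypothetical time entries: (person, bucket, hours). The bucket keys follow
# the answer's scheme; the data format and numbers are assumptions.
entries = [
    ("dev1", "sprinting", 30.0),
    ("dev1", "pre_release_bugs", 6.0),
    ("qa1",  "testing", 25.0),
    ("qa1",  "post_release_bugs", 4.0),
]

def cost_of_quality(entries):
    totals = defaultdict(float)
    for _person, bucket, hours in entries:
        totals[bucket] += hours
    creation = totals["sprinting"] + totals["testing"]
    failure = totals["pre_release_bugs"] + totals["post_release_bugs"]
    total = creation + failure
    return {
        "creation_hours": creation,
        "internal_failure_hours": totals["pre_release_bugs"],
        "external_failure_hours": totals["post_release_bugs"],
        "failure_ratio": failure / total if total else 0.0,
    }

print(cost_of_quality(entries))
# -> creation 55h, internal failure 6h, external failure 4h,
#    failure ratio of roughly 0.15
```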

We find that these numbers reflect reality much more than an artificial "bug count" or some such measurement.

Just like scrum, this takes a while before everyone records it right. But once you get it going, it provides some really good metrics.

Vaccano
+7  A: 

You'll always get what you're rewarding, so rewarding people for finding more bugs will give you more bugs.

If, at the same time, you start rewarding the devs for creating fewer bugs, you get some really interesting team behaviour. Great for psych experiments, but not for delivering software.

Lunivore
+5  A: 

> What is the best way to measure QA in scrum?

Working software. Happy PO. Happy customers.

> We have members who typically test and they are measured against how many bugs they find. If they don't find any bugs then they are considered to be doing a bad job.

Scrum is a team sport. We don't measure individuals.

> However, it is my understanding that the developers and quality people are considered one and the same. I would think that they should be judged against the same metrics... not different metrics than the developers who may also be doing testing work...

You have a misunderstanding. QA and dev are part of the same team but have very distinctly different jobs. Developers build stuff and testers figure out how to break it. It is a totally different mindset and a separate skill set. Both dev and QA are committed to the same sprint goals. They are indeed judged against the same metric, though: working software.

DancesWithBamboo
A: 

We have QA acceptance criteria that are checked by testers after each Sprint. If the criteria are not met, the Sprint either fails or needs some improvement before it is okay to be integrated into the release codeline.

The most important criteria are:

  • Complete and sensible test scripts. The latest test script passes for all applicable cases, or appropriate bugs have been filed.
  • All bugs filed correctly and with a good explanation as to why it was okay not to fix them during the sprint.
  • Automated tests run, are complete, can be understood by non-developers, and the code coverage is okay. (This is for integration tests only; unit tests don't concern QA.)

This ensures that everyone involved can work to make QA happy. The criteria are not so technical that they cannot be checked by a non-developer (the testers have some technical background), and the Scrum teams know what they need to do to pass the acceptance criteria. It also means that there's no arbitrary quality-metric check that is easy for smart people to work around or game to their advantage. A good test script is a good test script and can't be faked to just look like one.

Anne Schuessler
+1  A: 

Rather than a "number of bugs" metric for QA, use metrics like these (a small computation sketch follows the lists):

Metrics for QA personnel

  • Percentage of bugs (issues) that came from customer/beta/pre-release/internal users for features assigned to the QE, compared with the total bugs logged by that QE.
  • Percentage of bugs logged by QA that were withdrawn or invalid (marked NotABug/AsDesigned/NotReproducible).

QE automation metrics:

  • Percentage of total documented test cases that have been automated; aim for high automation coverage.
  • Percentage of code coverage, through unit testing/white-box testing/automation.
  • Percentage of bugs found through automation versus manually.
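A minimal sketch, assuming bug records with hypothetical source and resolution fields, of how the two QA percentages above could be computed:

```python
# Hypothetical bug records; field names and values are assumptions.
bugs = [
    {"logged_by": "qe1", "source": "internal", "resolution": "Fixed"},
    {"logged_by": "qe1", "source": "customer", "resolution": "Fixed"},
    {"logged_by": "qe1", "source": "internal", "resolution": "NotABug"},
]

def pct(part, whole):
    return 100.0 * part / whole if whole else 0.0

total = len(bugs)
# Bugs that reached customers/beta/pre-release users instead of being
# caught by the QE's own testing.
escaped = sum(b["source"] in ("customer", "beta", "prerelease") for b in bugs)
# Bugs later withdrawn as invalid.
invalid = sum(b["resolution"] in ("NotABug", "AsDesigned", "NotReproducible")
              for b in bugs)

print(f"escaped: {pct(escaped, total):.0f}%, invalid: {pct(invalid, total):.0f}%")
# -> escaped: 33%, invalid: 33%
```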

Delivering working software is the responsibility of both QA and dev. For dev there might be metrics like:

  • Delivery of features to QA within the estimated time; variance of the delay is one metric.
  • Bugs found in peer code review (before releasing to QA); e.g., a criterion could be no more than 5 bugs per 1K LOC (see the sketch after this list).
  • How much code is written for unit tests; percentage of test cases covered in unit testing.
  • Bugs found per 1K LOC.
  • How flexible and reusable the code is, so that future enhancements/bug fixes can be made without major changes (so they don't require major QE work and hence don't impact planned estimates).
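For the per-1K-LOC criterion, the arithmetic is just the defect count divided by thousands of lines of code. A trivial sketch (the threshold of 5 comes from the list above; the sample numbers are illustrative):

```python
def bugs_per_kloc(bug_count, lines_of_code):
    """Defect density: bugs per 1,000 lines of code."""
    return bug_count / (lines_of_code / 1000.0)

# 12 review findings in a 4,500-line change -> ~2.7 bugs per 1K LOC,
# under the suggested threshold of 5 (sample numbers are illustrative).
print(bugs_per_kloc(12, 4500))
```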

Our aim is to avoid bugs through clear requirements, strong communication, high-quality code, code review, thorough unit testing, and detailed test planning. Relying only on "bug count" will lead a project in the wrong direction.

aberry
Props for mentioning code coverage - it's easy to find zero bugs when no code was tested :o)
JBRWilkinson
A: 

My favourite metric is the number of escaped defects. In an agile project, an escaped defect can be defined as

> a defect that was not identified during the sprint/iteration

Due to regular releases, we often forget that functionality implemented in a sprint should still be properly tested. Tracking this number helps you plan more or less functionality into one sprint/iteration.
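As a rough sketch of tracking this, assuming defect records that carry both the sprint in which the functionality was implemented and the sprint in which the defect was found (the field names are hypothetical):

```python
# Hypothetical defect records; field names and values are assumptions.
defects = [
    {"id": 1, "built_in_sprint": 7, "found_in_sprint": 7},
    {"id": 2, "built_in_sprint": 7, "found_in_sprint": 9},  # escaped
    {"id": 3, "built_in_sprint": 8, "found_in_sprint": 8},
]

def escaped_defects(defects):
    """Defects not identified during the sprint that implemented them."""
    return [d for d in defects if d["found_in_sprint"] > d["built_in_sprint"]]

print(len(escaped_defects(defects)))  # -> 1
```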

Mark Kofman