We have a form of measurement that is used for both development and QA. Its benefit is that it is based on actual activity rather than guessing at the "quality" of bugs found.
It is called Cost of Quality.
Basically, everyone (developers and testers) records their time spent on a project in to one of several buckets. (Time can be recorded daily or weekly.)
The buckets are similar to this:
- Sprinting (Time spent developing and testing in the dev env)
- Testing (Time spent testing in the test env)
- Pre-Release Bugs (Time spent on bugs before they are released to production)
- Post-Release Bugs (Time spent on bugs after they are released to production)
(We have several other buckets, like support (for issues that don't involve a failure), requirements (for design time during sprint planning), and others as needed.)
The idea here is to get ratios of time spent in creation to time spent in fixing bugs.
The way we do it, our QA team tests in dev during the sprint. That time, and any issues found, count toward creation (the Sprinting bucket) for both developers and QA. Once the product is sent to our test environment, all QA time is logged under appraisal (the Testing bucket). Any issues found there, and the time spent fixing and retesting them, are logged under internal failure (Pre-Release Bugs).
After the product releases to production, any time spent on bugs gets logged under external failure (Post-Release Bugs).
The idea is to find out how much time is being spent on internal failures (or, even worse, external failures). That tells you how well QA is really performing.
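For illustration, here is a minimal sketch (the bucket names, time log, and hours are all made up) of how the logged time could be rolled up into these ratios:

```python
from collections import defaultdict

# Hypothetical weekly time log: (person, bucket, hours)
time_log = [
    ("dev1", "sprinting", 30), ("dev1", "pre_release_bugs", 6),
    ("qa1", "sprinting", 10), ("qa1", "testing", 20),
    ("qa1", "pre_release_bugs", 4), ("dev2", "post_release_bugs", 8),
]

# Sum hours per bucket.
totals = defaultdict(float)
for _, bucket, hours in time_log:
    totals[bucket] += hours

creation = totals["sprinting"]                 # building + testing in the dev env
appraisal = totals["testing"]                  # QA time in the test env
internal_failure = totals["pre_release_bugs"]  # bugs caught before release
external_failure = totals["post_release_bugs"] # bugs that escaped to production
all_hours = creation + appraisal + internal_failure + external_failure

print(f"Internal failure: {internal_failure / all_hours:.0%} of total time")
print(f"External failure: {external_failure / all_hours:.0%} of total time")
print(f"Failure-to-creation ratio: {(internal_failure + external_failure) / creation:.2f}")
```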
We find that these numbers reflect reality much more than an artificial "bug count" or some such measurement.
Just like Scrum, it takes a while before everyone records their time correctly. But once you get it going, it provides some really good metrics.