There are a number of misguided QA metrics, "bugs found" being a common one. It's nice and easy to track, but if the software hasn't changed much, the number of bugs found over time will trend toward zero.
Measuring individual testers by how many bugs they raise IS a way of providing incentive among the competitive types, but it can also encourage a flood of small issues being raised (which can be a good or a bad thing).
Some possible useful metrics include:
- number of new bugs found in the field (i.e. ones you missed) - this should go down
- time to retest and close fixed issues
- number of bugs sent back for clarification - this should go down
- number of bug reports closed as invalid test assertions - shows understanding of the product, should go down
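As a rough sketch of how the first metric above might be tracked, here is a small function computing a "defect escape rate" (field-found bugs as a fraction of all bugs found). The function name and the sample numbers are my own for illustration, not from any particular tool:

```python
# Illustrative sketch: "escape rate" = bugs the team missed (found in the
# field) divided by total bugs found anywhere. Lower is better over time.

def escape_rate(found_internally: int, found_in_field: int) -> float:
    total = found_internally + found_in_field
    return found_in_field / total if total else 0.0

# Example period: 45 bugs caught by QA, 5 reported from the field.
print(escape_rate(45, 5))  # 0.1
```

Plotting this per release (rather than per month) avoids penalizing quiet periods where little code changed.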
If your goals are also specified - e.g. moving to an automated testing system - then progress toward the goal can itself be measured. So if you have 10,000 test cases, your metric could be the number of test cases automated so far, and how many of those are passing/failing.
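A minimal sketch of that automation-progress metric might look like the following. The function, field names, and test-case IDs are hypothetical, assuming automated results are available as a simple pass/fail mapping:

```python
# Hypothetical sketch: measure progress of a test-automation effort.
# "results" maps an automated test case ID to its latest "pass"/"fail" outcome.

def automation_metrics(total_cases: int, results: dict) -> dict:
    automated = len(results)
    passed = sum(1 for outcome in results.values() if outcome == "pass")
    return {
        # fraction of the whole suite that has been automated so far
        "automation_coverage": automated / total_cases if total_cases else 0.0,
        # fraction of automated cases currently passing
        "pass_rate": passed / automated if automated else 0.0,
    }

# Example: a 10,000-case suite with three cases automated so far.
m = automation_metrics(10_000, {"TC-1": "pass", "TC-2": "fail", "TC-3": "pass"})
print(m["automation_coverage"])  # 0.0003
```

Tracking coverage and pass rate separately matters: coverage should climb steadily, while the pass rate mostly reflects product quality rather than the automation effort itself.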
There's a really good article discussing this at:
http://www.claudefenner.com/content/detail/QAMetricsPage.htm