We are currently reporting defect totals back to the development teams on a daily basis. We also track the usual defect metrics, namely:

  • time to detect a defect, and
  • time to fix a defect.

Unfortunately, the teams don't seem to care whether they have the highest defect rates, or possibly about their defect rates at all.

I know a daily report, expressed as a table of numbers, is a bit dry and, in all likelihood, fodder for an automatic email-delete rule.

But I've worked in companies where people would be horrified to know that they'd introduced a defect. Or, even worse, broken the build. Those who did break the build had a large stuffed monkey put on their desk for a bit of fun, as the automatic build system was known as The Build Monkey.

Have you seen any useful techniques of defect reporting?

Or is it a deeper question of culture, specifically a lack of QA culture, that I'm seeing here?

+1  A: 

You're seeing a lack of management, not of QA culture. Fire one person from the team with the worst defect rate every quarter and they will care :-) Alternatively, pay a premium each quarter based on each team's defect rate.

So, if management doesn't care and doesn't act, no one else will care either.

BarsMonster
I'm beginning to wonder if this really is the case. )-:
Rob Wells
+1  A: 

Engineers are competitive.

Put each team in one of four houses. Set up a board with a progress bar and an icon for each team. Each team moves closer to the goal based on how many fewer defects have been found in their code than in the next-best team's. The prize doesn't really matter that much; they will try to win. Just make sure that it's not always Gryffindor that wins...

EKI
+2  A: 

Attaching rewards or punishments to bug counts is a dangerous business and is unlikely to give you the result you want. There are too many ways to reduce bug counts without actually improving quality. I could just do fewer releases (and say I am 'doing more testing' when really I'm just stalling for time). I could make my code really hard to test. I could distract the test team with 'training'. None of which would help improve quality.

The fact that bug counts are different for different teams is meaningless unless they are all working on the exact same code and features, and it is being tested by the same people. If one team has a high bug count, it could be that the code they are working on is complex. It could be that the test team is better at testing those types of things. It could mean the specific testers on those features are more experienced and worked a bit harder to find those bugs. Or it could mean the test team did less testing for the teams that have lower bug counts. Wouldn't it be more interesting to know whether any of these things are contributing to high (or low) bug counts?

The developers may not be very interested in the bug count numbers because they are not very meaningful, or at least comparisons between teams are not very meaningful to them. Perhaps you should stop reporting numbers and start providing something more useful.

I'm not saying bug counts are not useful. How you use them is the point. Use them to indicate where you need to ask questions. Increasing bug count trend? That indicates something interesting. You should investigate what's going on, and report what you find, not the numbers, but the story behind those numbers.
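As a purely illustrative sketch (made-up numbers, arbitrary three-week window), even something this crude can turn a raw count into a question worth asking:

    # Hypothetical weekly bug counts for one team -- illustrative only.
    weekly_counts = [4, 6, 5, 9, 11, 14]

    def rising_trend(counts, window=3):
        """Flag a rising trend when the average of the last `window` weeks
        exceeds the average of the `window` weeks before that."""
        if len(counts) < 2 * window:
            return False
        recent = sum(counts[-window:]) / window
        earlier = sum(counts[-2 * window:-window]) / window
        return recent > earlier

    if rising_trend(weekly_counts):
        print("Bug counts are trending up -- go find out why and report that story.")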

That's a lot harder to do than just counting bugs. But it is much more likely people (developers, managers, stakeholders) will take notice of your reports.

Mark Irvine
A: 
  1. Don't do daily reporting. Sending out reports too often discounts the value of information in these reports. Do it once a week or once per iteration so the numbers are more meaningful.
  2. Who else gets this defect report in addition to developers? If it's only sent to developers then they don't feel that somebody else cares about product quality.
  3. It doesn't actually matter how many defects you find during development, as long as all of them are fixed and you keep delivering new functionality at a good speed. What counts is how many of those bugs your customers hit once the product is released. So you should probably concentrate on metrics like escaped defects or defect detection percentage (see the sketch after this list).
  4. You should have quality criteria for your product when it is shipped to the market. Let's say the criterion is that there should be no P1 bugs in the product released to the market. Then focus your reporting on how well you meet those quality criteria, instead of just communicating the defect total.
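A rough sketch of points 3 and 4; the function names, numbers, and thresholds here are illustrative assumptions, not any standard tool's API:

    def defect_detection_percentage(found_internally, escaped_to_customer):
        """Share of all known defects that were caught before release."""
        total = found_internally + escaped_to_customer
        return 100.0 * found_internally / total if total else 100.0

    def meets_release_criteria(open_by_priority, limits):
        """Check open defect counts against an agreed quality bar, e.g. {"P1": 0}."""
        return all(open_by_priority.get(p, 0) <= cap for p, cap in limits.items())

    print(defect_detection_percentage(found_internally=180, escaped_to_customer=20))  # 90.0
    print(meets_release_criteria({"P1": 0, "P2": 3}, limits={"P1": 0}))               # True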

Hope it helps.

Mark Kofman