Categorization of defects could be used to quickly analyze my product's readiness to ship, as well as to identify where opportunities exist in my development process. I have been looking at methods such as Orthogonal Defect Classification. Creating criteria to determine which part of a process needs to be revisited seems difficult, and this categorization scheme is the best literature on the subject I have found. Are organizations investing heavily in this area? What is your experience with various defect tracking systems for identifying weaknesses in your software engineering process? Should I invest a large amount of effort in defect categorization?

Update: Consider a system with 1,000 or more active programs.

+1  A: 

The only categorization I found useful so far is assigning priority. Nothing else matters.

Of course, it would be nice to have "tags" and be able to see all similar things together when you want to fix something (since you might also want to fix something else in the process). But most of the time, when you think about the next release cycle, it comes down to the stuff that matters and the stuff that is less important for the end product.
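
To illustrate the priority-only approach, here is a minimal sketch (Python; the defects and the 1-is-most-urgent scale are made up for illustration, not taken from any real tracker):

    from dataclasses import dataclass, field
    import heapq

    @dataclass(order=True)
    class Defect:
        priority: int                   # 1 = ship-blocker; larger numbers = less urgent
        summary: str = field(compare=False)

    # Hypothetical backlog; the entries are made up for illustration.
    backlog = [
        Defect(3, "Typo in settings dialog"),
        Defect(1, "Crash on startup with empty config"),
        Defect(2, "Report generation is slow"),
    ]

    heapq.heapify(backlog)
    while backlog:                      # the next release is whatever pops first
        d = heapq.heappop(backlog)
        print(d.priority, d.summary)

Everything else about a defect stays as free text; the only structured field is the one that drives the release decision.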

Just my $0.02. I guess this question could be marked "subjective"?

Milan Babuškov
What makes it subjective? So your answer is you don't take defect information back to your development process to look for opportunities?
ojblass
I think it's subjective because the same thing does not work for all people. Some people are able to learn from the mistakes of others, and some are not. OK, you're right: it isn't really subjective, it's more case-by-case. Sometimes it might work; elsewhere it won't give you any benefit and you'd only spend resources.
Milan Babuškov
+4  A: 

How big is the project?

The problem with most defect characterization schemes is that making an effective characterization is difficult, so people tend to characterize either too quickly, or in a way that covers themselves with glory (or shifts the blame to someone else).

In a rigorous development environment, such as aerospace or trusted systems, there are characterization processes that are followed rigorously and audited. Without that kind of process, it probably doesn't pay off.

Update

Okay, from the comments, the scale is really big. So in that case, yes, it's entirely possible that you can use defect characterization effectively, and get results from it.

Then it's like most methodology issues:

  1. You need to define what you're doing and what you're measuring.
  2. You need a repeatable method for characterization: a "diagnostic algorithm" or process that people can apply independently and arrive at similar characterizations (a minimal sketch follows this list).
  3. You then need to actually apply it uniformly. (This may seem obvious, but it's surprisingly often the failure point.)
  4. You need to audit to make sure it continues to be used correctly.
  5. You need to feed the information back in a usable form.
  6. You need to ensure that people are rewarded for obtaining and using the information.
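
To make steps 1–3 concrete, here is a minimal Python sketch of a fixed characterization vocabulary. The category names are hypothetical, loosely echoing the attribute style of Orthogonal Defect Classification, and are not a prescription; substitute whatever you decide to measure:

    from dataclasses import dataclass
    from enum import Enum

    # Hypothetical vocabularies; replace with the attributes your team agrees on.
    class DefectType(Enum):
        FUNCTION = "function"
        INTERFACE = "interface"
        TIMING = "timing"
        DOCUMENTATION = "documentation"

    class FoundDuring(Enum):
        CODE_REVIEW = "code review"
        UNIT_TEST = "unit test"
        SYSTEM_TEST = "system test"
        FIELD_USE = "field use"

    @dataclass(frozen=True)
    class Characterization:
        defect_type: DefectType
        found_during: FoundDuring
        component: str

    def characterize(defect_type: str, found_during: str, component: str) -> Characterization:
        # Enum lookup by value raises ValueError for anything outside the agreed
        # vocabulary, so nobody can silently invent a new, unauditable bucket.
        return Characterization(DefectType(defect_type), FoundDuring(found_during), component)

    record = characterize("interface", "system test", "billing")
    print(record)

Rejecting free-text categories at the single entry point is what keeps the scheme uniform (step 3) and makes the audit in step 4 tractable.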
Charlie Martin
Golly. Okay, in that case, add that info, don't delete this. That scale makes it interesting.
Charlie Martin
The dressing-up of defects for self-serving purposes is something that has troubled me for a long time.
ojblass
Well, we reward people for it, so they do it. I was once on an IBM gig where the number of defects found was measured and rewarded. The defect rates went sky-high; the actual improvement in quality was less obvious.
Charlie Martin
A: 

In my experience, the most intuitive categorization is the best. For user-facing servers, for instance, there can be defects in the deployment process that are totally unrelated to the code base. Or for locally installed software, categories like configuration are relevant.

It seems that the purpose of any categorization beyond the intuitive is to try to minimize the number of defects that are silly, like bad coding practice or miscommunication between teams. But in reality, that stuff happens. And one usually already knows where the defects are (missed test cases, inconsistent design, etc.), so I have never found any real value in adding more overhead to defect tracking.

Kai
