What techniques do people recommend for tracking the quality level of a new program? Are there ways to take a poorly defined term like "quality level", quantify it, and then make predictions? Currently I use bug rates and S curves, but I am looking for other ways to evaluate, estimate, and predict quality levels.

A: 

This depends a lot on what kind of comparisons you're trying to make. If you're looking at a single project over time and the team doesn't change, then bug rates can be meaningful. However, if you're comparing different projects with different teams, there's no meaningful way to compare bug rates, because what you're really comparing is the rate of known bugs. One team may be much better at identifying bugs than the other, making its bug rate look higher, even though it's the one with better software quality.

Chris Upchurch
+1  A: 

Are you looking for this measure?

More seriously, for me code quality is about maintainability:

  • how easy it is to fix a bug, or to add, remove, or modify a feature (a rough way to put a number on this is sketched below),
  • how easy it is to refactor: is a regression test suite available?
  • how much technical debt has accumulated?

Remember that you write code once, but you read it several times.
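For illustration, here is a rough sketch of one way to put a number on "how easy is it to modify": a crude branch-count score per function, computed with the standard-library ast module. The scoring rule is an assumption for illustration only; tools such as radon compute proper cyclomatic complexity.

    import ast

    def branchiness(source):
        """Return a crude complexity score per function found in the source."""
        tree = ast.parse(source)
        scores = {}
        for node in ast.walk(tree):
            if isinstance(node, ast.FunctionDef):
                # Count branching constructs inside the function body.
                branches = sum(isinstance(n, (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp))
                               for n in ast.walk(node))
                scores[node.name] = 1 + branches
        return scores

    example = '''
    def ship_order(order, stock):
        if not order.items:
            return None
        for item in order.items:
            if stock.get(item) == 0:
                raise ValueError(item)
        return True
    '''
    print(branchiness(example))  # {'ship_order': 4}

Functions whose score keeps creeping up over time are usually the ones that are getting harder to change safely.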

mouviciel
+1  A: 

I'm not sure what counting your bugs is going to tell you without something to compare it against. What if the software you built was very hard and had lots of edge cases? You need some kind of baseline for comparison...

Even though one of the other answers was clearly a joke, code reviews are also probably a good idea. If you have too many bugs, hire better engineers or have them write less code.

Edit: Added after considering comments...

Every bug is like a unique sucky snowflake (a suckflake?). Bugs have different levels of impact on your customers and developers, and I would at least take this into account. Adding severity (a measure of customer pressure for a fix) and engineering hours spent fixing each bug might help improve accuracy. My concern is that this is still an oversimplification of "quality" when it comes to developing software.
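As a rough sketch of what that could look like (the field names and weights are made-up assumptions, not an industry standard):

    from dataclasses import dataclass

    @dataclass
    class Bug:
        severity: int      # e.g. 1 (cosmetic) .. 4 (blocker), a proxy for customer push
        fix_hours: float   # engineering hours spent on the fix

    SEVERITY_WEIGHT = {1: 1, 2: 3, 3: 7, 4: 15}  # illustrative weights only

    def weighted_defect_score(bugs):
        """Sum severity weights plus a small penalty for fix effort."""
        return sum(SEVERITY_WEIGHT[b.severity] + 0.5 * b.fix_hours for b in bugs)

    bugs = [Bug(severity=4, fix_hours=12.0), Bug(severity=1, fix_hours=0.5)]
    print(weighted_defect_score(bugs))  # 22.25, versus a raw bug count of 2

Tracking a weighted score like this over time says more than a raw bug count, but the weights are still a judgment call.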

Sadly, software quality != product quality. The recently released game Fallout 3 won tons of awards and (I assume) made lots of money, but it was also a buggy hunk of junk, on PC at least.

Just make sure you are tracking and optimizing the correct thing. Tracking the number of bugs versus time is just that: tracking the number of bugs versus time. Reading any more into it requires some level of assumption, however correct or incorrect.

What are your goals? Bugs are only one part of software quality. If you want to keep supporting your software, maintainability is key. Bug fixes done in a rush can make the code less maintainable, which in turn makes future fixes and features harder, and those fixes can introduce new bugs.

fuzzy-waffle
A couple of people responded that they did not understand how bug rates are meaningful. On the Y axis, chart the cumulative number of bugs found; on the X axis, chart time. Over the course of development and test, the graph forms an S. As the curve flattens, you can predict when you will be (effectively) bug free.
A measure of quality is not the bug rate itself but its evolution, i.e., its derivative, which shows whether the code is getting better (or worse).
mouviciel
As fuzzy said, it's only meaningful if the concept of a bug is rigidly defined, which is both difficult to do and asinine.
Benson
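For what it's worth, the S-curve idea from the comments above can be sketched numerically: fit a logistic curve to cumulative bug counts, read off the projected asymptote, and check how flat the curve (its derivative) has become. The data points are made up, and numpy/scipy are assumed to be available.

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, total, rate, midpoint):
        # Cumulative bugs found by time t, assuming an S-shaped discovery curve.
        return total / (1.0 + np.exp(-rate * (t - midpoint)))

    weeks = np.arange(1, 13)
    cumulative_bugs = np.array([3, 8, 18, 35, 60, 88, 110, 126, 136, 142, 145, 147])

    (total, rate, midpoint), _ = curve_fit(logistic, weeks, cumulative_bugs, p0=[150, 1.0, 6.0])
    print(f"projected total bugs: {total:.0f}")

    # The derivative of the fitted curve is the bug-discovery rate; as it
    # approaches zero, this level of testing is finding little that is new.
    t = 13
    expected_new = rate * total * np.exp(-rate * (t - midpoint)) / (1.0 + np.exp(-rate * (t - midpoint))) ** 2
    print(f"expected new bugs in week {t}: {expected_new:.1f}")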
A: 

Do you do unit testing? Code coverage on unit tests can be a decent measure of quality.
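For example, a minimal test case like the one below can be run under coverage.py ("coverage run -m unittest" followed by "coverage report") to see which statements the tests actually exercise. The function under test here is just a stand-in; in a real project it would live in your production code, not in the test file.

    import unittest

    def apply_discount(price, rate):
        """Stand-in for real production code."""
        if rate < 0 or rate > 1:
            raise ValueError("rate must be between 0 and 1")
        return price * (1 - rate)

    class TestApplyDiscount(unittest.TestCase):
        def test_ten_percent_off(self):
            self.assertAlmostEqual(apply_discount(100.0, 0.10), 90.0)

        def test_rejects_bad_rate(self):
            with self.assertRaises(ValueError):
                apply_discount(100.0, 1.5)

    if __name__ == "__main__":
        unittest.main()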

Matt Brunell
Yes, that is a good point about code coverage at the unit-test level. Thanks. I believe I have heard about using Monte Carlo simulations as a predictor. Has anyone actually used this technique? How does it work in real life?
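I have not used it in anger, but here is one hedged sketch of how a Monte Carlo predictor could work: draw the unknown true defect count and the per-week detection probability from assumed distributions, simulate many releases, and report how often the product would be "clean enough" at the planned ship date. Every distribution and threshold below is an assumption for illustration only.

    import random

    def prob_ship_ready(weeks_until_ship, runs=10_000, max_residual=5):
        """Fraction of simulated runs with at most max_residual undiscovered bugs."""
        ready = 0
        for _ in range(runs):
            true_defects = max(0, int(random.gauss(150, 30)))  # assumed prior on total defects
            detect_prob = random.uniform(0.10, 0.25)           # assumed weekly chance of finding each bug
            remaining = true_defects
            for _ in range(weeks_until_ship):
                found = sum(1 for _ in range(remaining) if random.random() < detect_prob)
                remaining -= found
            if remaining <= max_residual:
                ready += 1
        return ready / runs

    print(f"chance of shipping with <= 5 latent bugs: {prob_ship_ready(weeks_until_ship=12):.1%}")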