What techniques do people recommend to track the quality level of a new program? Are there ways to take a poorly defined term like "quality level", quantify it, and then make predictions? Currently I use bug rates and S curves, but I am looking for other ways to evaluate, estimate, and predict quality levels.
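For context, here is roughly how I use S curves today (a minimal sketch in Python; the weekly counts below are invented for illustration):

```python
# Minimal sketch of the S-curve idea: fit a logistic curve to cumulative
# defect counts, then read off the projected asymptote and extrapolate.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Classic S curve: K = projected total defects, r = discovery rate,
    t0 = inflection point (week of peak discovery)."""
    return K / (1.0 + np.exp(-r * (t - t0)))

weeks = np.arange(1, 13)                          # 12 weeks of test data
found = np.array([3, 8, 15, 27, 44, 60, 74, 84,   # cumulative bugs found
                  91, 95, 97, 98])

(K, r, t0), _ = curve_fit(logistic, weeks, found, p0=[100, 0.5, 6])
print(f"Projected total defects: {K:.0f}")
print(f"Found so far: {found[-1]} ({found[-1] / K:.0%} of projection)")
print(f"Projected found by week 16: {logistic(16, K, r, t0):.0f}")
```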
This depends a lot on what kind of comparisons you're trying to make. If you're looking at a single project over time and the team doesn't change, then bug rates might be meaningful. However, if you're comparing different projects with different teams, there's no meaningful way to compare bug rates, because you are really comparing the rate of known bugs. One team may be much better at identifying bugs than the other, which makes their bug rate look higher even though they're the ones with better software quality.
Are you looking for this measure?
More seriously, for me code quality is about maintainability:
- how easy it is to fix a bug, to add/remove/modify a feature,
- how easy it is to refactor: is a regression test suite available? (see the sketch below)
- how much technical debt is there?
Remember that you write code once, but you read it several times.
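On the regression suite point: even a tiny automated suite counts. A minimal sketch with stdlib unittest (parse_price is a hypothetical function standing in for your own code):

```python
# Minimal regression test sketch using stdlib unittest; parse_price is a
# hypothetical function standing in for whatever your code actually does.
import unittest

def parse_price(text):
    """Toy function under test: parse '$1,234.50' into cents."""
    return int(round(float(text.strip().lstrip("$").replace(",", "")) * 100))

class PriceRegressionTests(unittest.TestCase):
    def test_known_good(self):
        self.assertEqual(parse_price("$1,234.50"), 123450)

    def test_leading_space(self):
        # Pin the fix for a previously shipped bug so it can't regress.
        self.assertEqual(parse_price(" $7.00"), 700)

if __name__ == "__main__":
    unittest.main()
```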
I am not sure what counting your bugs is going to do without something to compare it with. What if the software you made tackled a very hard problem and had lots of edge cases? You need some kind of comparison...
Even though one of the other answers was clearly a joke, code reviews are also probably a good idea. If you have too many bugs, hire better engineers or have them write less code.
Edit: Added after considering comments...
Every bug is like a unique sucky snowflake (a suckflake?). They have different levels of impact on your customers and developers. I would at least take this into account. Adding severity (a measure of customer push for a fix) and engineering hours spent fixing it might help improve accuracy. My concern, I guess, is that this is still an oversimplification of "quality" when it comes to developing software.
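Something like this rough sketch is what I have in mind (the severity weights and example bugs are arbitrary assumptions):

```python
# Rough sketch: weight each bug by severity and by engineering cost,
# instead of treating every bug as equal. Weights are arbitrary assumptions.
SEVERITY_WEIGHT = {"critical": 8, "major": 4, "minor": 2, "cosmetic": 1}

def weighted_bug_score(bugs):
    """bugs: iterable of (severity, hours_to_fix) tuples."""
    return sum(SEVERITY_WEIGHT[sev] * hours for sev, hours in bugs)

this_release = [("critical", 16), ("major", 6), ("minor", 2), ("minor", 1)]
last_release = [("major", 12), ("minor", 3), ("cosmetic", 1)]

# Compare releases by weighted impact rather than raw bug count.
print(weighted_bug_score(this_release))   # 158
print(weighted_bug_score(last_release))   # 55
```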
Sadly, software quality != product quality. A recently released game called Fallout 3 won tons of awards and made lots of money (at least I assume), but it was also a buggy hunk of junk, on PC at least.
Just make sure you are tracking and optimizing the correct thing. Tracking # of bugs vs. time is just that: tracking # of bugs vs. time. Reading any more into it requires some level of assumption, however correct or incorrect.
What are your goals? Bugs are but one part of software quality. If you want to continue to support your software, maintainability is key. A lot of times bug fixes done in a rush make the code less maintainable, which in turn makes future fixes and features harder, and the fixes themselves can add new bugs.
Do you do unit testing? Code coverage on unit tests can be a decent measure of quality.
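If you want to make that measure concrete, here is a sketch using coverage.py's Python API (assuming coverage.py is installed and a tests/ directory that unittest can discover; the 80% threshold is an arbitrary choice):

```python
# Sketch: measure statement coverage of a test run with coverage.py and
# fail the build below a threshold. The 80% bar is an arbitrary assumption.
import coverage
import unittest

THRESHOLD = 80.0

cov = coverage.Coverage()
cov.start()

# Run whatever test suite you have; unittest discovery shown here.
suite = unittest.defaultTestLoader.discover("tests")
unittest.TextTestRunner().run(suite)

cov.stop()
cov.save()

percent = cov.report()   # prints a per-file table, returns the total %
if percent < THRESHOLD:
    raise SystemExit(f"Coverage {percent:.1f}% is below {THRESHOLD}%")
```

Just keep in mind that coverage tells you which lines ran, not whether the assertions were any good, so treat it as one signal among several.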