views:

398

answers:

5

What are the best processes for Quality Assurance (aside from testing)? Do you use code reviews or code profilers? Do you use a QA standards document, or just eyeball the code? Also, how do you provide feedback to the developers? What do YOU do for QA?

+1  A: 

Having a QA environment where software is certified prior to delivery is a good practice. Providing clear quality metrics in daily reports (system uptime, number of critical bugs, number of problems due to poor process) keeps a clear focus on quality issues if they exist. On problematic projects, involve the development teams in post-mortems to determine how initial and ongoing quality can be improved. Any and all communication should be in a positive voice and focus on finding the root causes of poor quality. Sometimes just asking development teams what they are lacking to improve quality solves many issues.

ojblass
+4  A: 
  • Unless you have a very low issue rate, use a database to track issues.
  • Make the QA process interesting by getting developers to do TDD. It's good for them, and it gets rid of the stupid bugs.
  • Automate tests.
  • Get QA involved in the product from the beginning, not just the last 2 weeks.
  • Give them power. QA gets to decide when the product is ready to ship.
  • Give QA a real career path.
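The TDD and test-automation points above can be sketched with a small example. This is a minimal illustration, not anything from the answer itself: the `slugify` function and its tests are hypothetical, and in TDD the tests would be written first, with the implementation added only to make them pass.

```python
import unittest

def slugify(title):
    # Hypothetical function under test: lowercase the title and
    # join the words with hyphens. Written only to satisfy the
    # tests below, in TDD fashion.
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    # Automated tests like these run on every build, catching the
    # "stupid bugs" before they ever reach a QA team.
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_extra_whitespace(self):
        self.assertEqual(slugify("  Quality   Assurance "), "quality-assurance")

if __name__ == "__main__":
    unittest.main()
```

Running this file executes the test suite; a red bar (failure) before the implementation exists and a green bar after is the core TDD rhythm the bullet points describe.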
Jay Bazuzi
I think the point about giving QA a real career path is a great one. We don't have a QA department or anything like that where I work (way too small for that), but I have seen other people feel like they are absolutely stuck, with no room for real advancement, when they are doing QA.
TheTXI
Easier said than done. But I totally agree.
Yuval A
The problem is that the developers are QA as well. That's an important unspoken point: have a separate QA team.
C. Ross
A: 

Coding Horror has a good recent post on "Exception-Driven Development", which is somewhat related to this topic.

leander
+1  A: 

I think all QA is multi-pronged. Developers must test their own code to ensure it does what they think it should do and won't break the build. Testers should test against the specifications to see whether the developer interpreted them correctly (developer testing almost never finds these problems). Peers should do code reviews to ensure that standards are followed and to promote learning. Users must do testing as well, since they will try to do things that nobody expected or defined in the requirements. Often these are stupid things that make us shake our heads and go, "why would anyone ever think to do that?" But many others are genuine requirements that were not in the spec because nobody bothered to ask the users what they need. Client acceptance testing should also be done, especially if you do custom development for different clients. It is far easier to show that a new request for work is new development rather than a bug if the client has signed off on the work before it went to production. This can save tons of contractual battles over who is to pay for something.

Additionally, having someone else fix your bad code is the best way to ensure that bad code will continue to be produced, which is why I hate the organizational structure some companies have of developers who do new things and support people who do fixes. Further, a manager who fixes bad code to save time, without sending it back to the developer to fix, is causing problems for the organization, not fixing them.

Another big part of QA in my mind is taking the time up front to actually define the requirements and standards. (Yes, they will change through any large project, but they are still needed.) Without requirements, testing is random at best. Without standards, maintenance can become a nightmare and far more costly than it need be.

The last part of QA is learning from our mistakes. Unfortunately, in many organizations you can't honestly have a post-project discussion of what went wrong and how to prevent it next time without getting into a finger-pointing blame session; this causes people to be quiet rather than get bad marks on their performance appraisal. In fact, performance appraisals in general are harmful to the goal of improving quality, for this reason among many others. (Look up Deming and Total Quality Management to see the quality guru's thoughts on the harm caused by performance appraisals.)

In theory there should be quality metrics you can use to measure improved quality. In practice, though, as long as your organization has performance appraisals, these numbers will often be "massaged" to make things look better, or will measure the wrong thing (fewer lines of code doesn't mean better code in all cases, and more bug reports might mean we are doing a better job of finding, or at least reporting, the bugs, not that the code is worse than in a past project), and are thus useless.

HLGEM
+1  A: 
hlovdal