I would like your opinion (from experience) on implementing stricter processes with the aim of improving the quality of shipped software. Assume a fairly large system with lots of processes (89) plus databases, messaging, IPC, sockets, web servers and the full works (enterprisey, written in Java). Some parts are fairly messy (1000-line functions and the like), but there's also a fairly large collection of work done in handling itsy-bitsy corner cases.

Would implementing a fairly strict process, like code reviews for every bug fix and static analysis runs for all code changes, be enough to turn around the product? What policies have you encountered that really made code reviews happen, rather than a cursory glance and a perfunctory pass over the code being reviewed?

What other steps/metrics (that you have seen implemented successfully) would you suggest to increase quality?

+Edit: The rescue program is being brought in by management for the said product and is not a single-developer-led venture. Wondering if this is all it takes: a commitment to quality from the upper layers and some process to create quality software.

+Edit: The software is already shipping to a lot of customers and has started to bring in dollars of late, which has also suddenly caused hundreds of new defects to be found and reported by the users themselves.

+Edit: The development team is distributed in small chunks under 4 managers in different locations. The dev team size is around 40, and about the same number of folks exist in testing.

A: 

TDD is probably the easiest and most fundamental change that can improve your code. Tests, and lots of them, can serve as the platform on which you work and verify that your product is a good one. From there, changes and maintenance can be checked and managed against a known baseline.
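
For legacy code like this, the usual starting point is a characterization test: pin down what the code does today, then refactor. A minimal JUnit 4 sketch (the pricing logic here is made up purely for illustration):

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class PriceCalculatorTest {

        // Stand-in for a legacy routine; in a real project this would
        // live in production code, not in the test class.
        static double totalWithTax(double net, boolean export) {
            return export ? net : net * 1.08; // 8% domestic tax, none on exports
        }

        @Test
        public void domesticOrdersIncludeTax() {
            assertEquals(108.0, totalWithTax(100.0, false), 0.001);
        }

        @Test
        public void exportOrdersOmitTax() {
            assertEquals(100.0, totalWithTax(100.0, true), 0.001);
        }
    }

Once a routine is pinned down like this, refactoring the 1000-line functions becomes a mechanical exercise rather than a leap of faith.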

MikeJ
A: 

Different levels of software process are called for in different situations. Two guys in a garage creating the next Visicalc should work differently than a team of hundreds working on space shuttle software.

That said, I find code reviews (actually, what many folks call walkthroughs) useful in any environment. I review every check-in with a peer, using the source control system to view the differences, and justify each one out loud to the reviewer.

Often, just stating the purpose of a change and what it is supposed to do will cause me to see an error in my own work before the reviewer notices it.

Jeff C
A: 

Ask, out loud, questions like

  • What kinds of inputs will cause this function to behave improperly?
  • What kinds of outputs will cause this function to mess up the calling function?
  • What pie in the sky scenario will cause this code to behave improperly?
  • Are there any race conditions? Security holes?
  • What happens the second time this code is called? Is everything initialized properly?
  • What if the user does something unexpected?
  • Is this code unclear in any way? (A clarifying comment may be required.)
  • Is the code written in a way that might encourage a future editor to introduce a race condition, security hole, or other problem?
  • Does the code conform to company standards?

This list is incomplete, but you get the idea.

Having an explicit checklist can help break through the tendency to say "So-and-so is smart, I'm sure it works fine."
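
To make a couple of those questions concrete ("Are there any race conditions?", "What happens the second time this code is called?"), here is the kind of thing a reviewer might flag. This is a contrived sketch, not code from any real product:

    import java.util.HashMap;
    import java.util.Map;

    public class ConfigCache {
        private static Map<String, String> cache; // shared, unsynchronized

        public static Map<String, String> get() {
            if (cache == null) {        // two threads can both observe null here,
                cache = loadFromDisk(); // run the expensive load twice, and one
            }                           // may even see a partially built map
            return cache;
        }

        private static Map<String, String> loadFromDisk() {
            Map<String, String> m = new HashMap<String, String>();
            m.put("timeout", "30"); // stand-in for real config loading
            return m;
        }
    }

The usual review outcome would be to make get() synchronized, initialize eagerly, or use the initialization-on-demand holder idiom.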

amo
A: 

Create a strict commercialization process whereby a commercialization group (which can include devs if no QA group is available) does final release testing on the product and approves the release. That group is also responsible for testing clean installs and upgrades to the new release, and for packaging and distributing it.

Turnkey
+1  A: 

You've asked some great questions, and it sounds like your product is in dire shape. With a project that large, it will take considerable time (i.e., more than a week or even a month) before you start to see large-scale results, but here are some suggestions:

  • Implement JUnit for unit testing
  • Investigate Ant for automated nightly builds (a minimal sketch follows this list)
  • Instead of having code reviews for ALL bugs, perhaps just have code reviews for the larger and more severe bugs, and allow JUnit to be sufficient for the small issues.
  • Get management support
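
As a starting point for the Ant suggestion, a minimal build.xml that compiles the sources and runs every JUnit test nightly might look like this (the paths and project name are assumptions; the <junit> task also needs junit.jar on Ant's classpath):

    <project name="bigapp" default="test" basedir=".">
        <property name="src" location="src"/>
        <property name="build" location="build"/>

        <target name="compile">
            <mkdir dir="${build}"/>
            <javac srcdir="${src}" destdir="${build}"/>
        </target>

        <!-- run from cron (or a CI server) each night: ant test -->
        <target name="test" depends="compile">
            <junit haltonfailure="yes">
                <classpath path="${build}"/>
                <formatter type="brief" usefile="false"/>
                <batchtest>
                    <fileset dir="${build}" includes="**/*Test.class"/>
                </batchtest>
            </junit>
        </target>
    </project>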

Whatever you do, if you do not have management's support, and someone at that level to champion this effort, it will be hard for a developer to convince all of your coworkers to change their way of doing business.

In addition, I've got a few questions for you: is this software "shipped" to external customers outside the company, or is it only used by customers internal to your company? And how big is your development team?

Erdrick01
+1  A: 

Things that have helped:

  • Automated Unit Testing - catch bugs early
  • Continuous Integration - catch bugs early, make my life easier
  • A collection of test cases for human QA that's updated with each new feature and bug (to ward off regressions)
  • Code Walkthroughs - helps everyone communicate and benefit from each other's experience

Things that haven't helped:

  • More meetings
  • A ton of UML
  • Research to determine which department was responsible for each bug (so the fix goes on their budget)
  • More managers

I work on business apps. I probably wouldn't have the same opinions about the software that runs my pacemaker...

Corbin March
A: 

I would say that the success of any specific technique will depend on your team, and whether or not they see value in what is being done.

If you have a team that agrees that TDD is a Good Thing, then collecting coverage metrics is probably not a bad idea. If you have specific goals for the sorts of refactorings that you want to do, then there may be some static analysis that would help (flagging methods with more than X lines, cyclomatic complexity above some threshold, etc.).
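
If the team does go down that route, a static analysis tool such as Checkstyle can enforce exactly those limits on every build. A minimal configuration sketch (the thresholds are arbitrary; pick values the team has agreed to):

    <?xml version="1.0"?>
    <!DOCTYPE module PUBLIC
        "-//Puppy Crawl//DTD Check Configuration 1.3//EN"
        "http://www.puppycrawl.com/dtds/configuration_1_3.dtd">
    <module name="Checker">
        <module name="TreeWalker">
            <!-- flag the 1000-line-function problem -->
            <module name="MethodLength">
                <property name="max" value="150"/>
            </module>
            <!-- flag deeply branching logic -->
            <module name="CyclomaticComplexity">
                <property name="max" value="10"/>
            </module>
        </module>
    </module>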

The one thing I can be entirely certain of is that if the team does not value the process, it will not be effective in making the software better.

ckramer
+1  A: 

Since you're talking about trying to improve your processes, one of the things you need to do is measure those processes, to give you a way of determining whether the changes you're making really are improvements.

This is one of the tenets of any quality system (be it formal, like ISO 9000, or an informal "we want to make this better"): use ongoing measurement and analysis to feed back into your processes as continual improvement.

The good news is that it shouldn't be difficult to implement a system of metrics collection that can help.

The bad news is that it's going to take a while. And it's best if you get management buy-in.

Things to look at:

  • Code complexity. Try to monitor the complexity of your code and determine the rate of bugs that you are seeing per unit of complexity. There are lots of tools around that will let you record complexity, e.g. SourceMonitor.

  • Record which code you have subjected to code review. If you record this in the code itself (one possible mechanism is sketched after this list), you'll be able to determine the bug rate in reviewed code and compare it to the rate in unreviewed code. You want to be able to determine just how effective your reviews are.

  • Record the types of bugs and the procedural sources of those bugs (e.g. bugs caused by inadequate requirements, plain coding errors, algorithmic errors, etc.). This will allow you to do two things. The first is to work out the areas that need the most attention (which could be all the way back at the management level). The second is to give you data to back up your findings, improving management buy-in for any changes.

  • Record bugs that have been missed by your various testing procedures (both QA and unit testing). This should give an indication of whether the problems lie with tests that are inadequate, wrong, or just plain missing.
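
One lightweight way to record reviews "in the code itself", as suggested above, is a marker annotation that a script can later tally against the bug tracker. This is purely a sketch; the name and fields are made up:

    import java.lang.annotation.Documented;

    /** Marks code that has been through a formal review. */
    @Documented
    public @interface Reviewed {
        String by();               // reviewer's id
        String date();             // e.g. "2008-10-02"
        String bug() default "";   // tracker id that prompted the change, if any
    }

A method would then carry something like @Reviewed(by = "jsmith", date = "2008-10-02"), and correlating those markers with later bug reports gives the reviewed-versus-unreviewed comparison.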

Andrew Edgecombe
