Ideally, a project will have developers, testers, QA manager(s), etc., who all contribute to the quality of the code. But what if you don't have those kinds of resources? If you have, for example, just three developers and can't hire a full-time QA manager, how do you ensure that the code quality meets the set standards?

What kind of things do you pay attention to in quality assurance? Quality isn't just about the code doing what it is supposed to do (code being properly covered by automated tests). Quality is also about the code being clean (readable, maintainable, well-structured, documented, etc.).

I'm looking forward to hearing what kind of processes you have applied in your team to ensure that the quality meets the set standards. We've applied a process where we rotate the QA role between the developers: each developer is responsible for QA one week at a time. Each changeset is reviewed and checked: existing tests pass, required new tests have been written, the code is clean, and, of course, the project builds.

Edit:

Of course, some of this process can be automated with CI, but what I'm looking for is experience with the human factor. I mean, how do you make sure that every developer writes clean code and actually tests everything? Test coverage doesn't tell you whether everything has been tested unless you inspect it manually (and in practice it's almost impossible to achieve 100% coverage anyway). And even if the coverage did tell you that something has been tested, that doesn't mean the test actually tests for the right thing.
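
For example (a contrived JUnit 4 sketch, names made up for illustration), both tests below give add() 100% coverage, but only the second can ever fail:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Both tests fully "cover" add(), but only one of them verifies it.
    public class CalculatorTest {

        static int add(int a, int b) { return a + b; }

        @Test
        public void addRunsWithoutCrashing() {
            add(2, 2); // executed but never verified - coverage counts it anyway
        }

        @Test
        public void addReturnsTheSum() {
            assertEquals(4, add(2, 2)); // actually fails if add() regresses
        }
    }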

A: 

It sounds like you're headed in the right direction, and I trust you're using a bug/issue tracking system. Here are some other ideas.

If you're doing GUI software, it helps to have written test scripts, and also to do ad-hoc testing. The trap there is that, as the developers, everything you do is white-box testing, so you might occasionally ask friends or family who don't know much about the software to mess with it, with very little coaching.

Another thing you can do with GUI software is get an automated tool that fires random mouse clicks and key presses at your software, and just leave it running for a long time. It's astonishing how many bugs that can find.
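
A bare-bones version of such a tool is only a few lines (a sketch using java.awt.Robot - run it against a disposable machine or VM session, not your own desktop!):

    import java.awt.Dimension;
    import java.awt.Robot;
    import java.awt.Toolkit;
    import java.awt.event.InputEvent;
    import java.awt.event.KeyEvent;
    import java.util.Random;

    // Minimal "monkey tester": fires random clicks and keystrokes at
    // whatever application currently has focus, forever.
    public class MonkeyTester {
        public static void main(String[] args) throws Exception {
            Robot robot = new Robot();
            Random rnd = new Random();
            Dimension screen = Toolkit.getDefaultToolkit().getScreenSize();

            while (true) {
                // Random mouse click somewhere on screen
                robot.mouseMove(rnd.nextInt(screen.width), rnd.nextInt(screen.height));
                robot.mousePress(InputEvent.BUTTON1_MASK);
                robot.mouseRelease(InputEvent.BUTTON1_MASK);

                // Random letter key press
                int key = KeyEvent.VK_A + rnd.nextInt(26);
                robot.keyPress(key);
                robot.keyRelease(key);

                robot.delay(50); // pause between events
            }
        }
    }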

If you have a spare box, you can set up automated builds to be done nightly, hourly, or even after each check-in, and even run unit tests on those automated builds.
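
The test-running step can be as simple as a small harness that exits non-zero on any failure, which is what a scheduled or per-check-in build script keys off. A minimal JUnit 4 sketch (AppTest is a hypothetical stand-in for your real test classes):

    import org.junit.runner.JUnitCore;
    import org.junit.runner.Result;
    import org.junit.runner.notification.Failure;

    // Runs the suite and reports a build-breaking exit code. A CI tool
    // (CruiseControl, Hudson, etc.) or a cron-driven script can invoke
    // this after every automated build.
    public class BuildVerifier {
        public static void main(String[] args) {
            Result result = JUnitCore.runClasses(AppTest.class);
            for (Failure failure : result.getFailures()) {
                System.err.println(failure);
            }
            System.exit(result.wasSuccessful() ? 0 : 1); // non-zero = broken build
        }
    }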

The biggest single quality boost I've ever seen, though, came after accidentally sending a non-functional release to a customer. After scraping the egg off our faces, we came up with a formal six-page checklist for creating and verifying releases, starting with different build and test machines, each with a freshly-installed OS and appropriate, well-defined tools. There were three different roles (build engineer, tester, release engineer), cross-checking, and each individual initialed each step they were responsible for as they finished it. If anything didn't go exactly according to plan, we fixed whatever the problem was and started over. For most projects, it took about 4-8 hours, and when things didn't work and we had to start over, we sometimes had very late nights, but we never sent out a flaky release again.

Bob Murphy
+6  A: 

As a start, if you haven't done so already, I would strongly recommend setting up an automated build that also runs the unit tests, preferably with code coverage, to check if there are areas that need more unit test coverage. I'm not a massive fan of requiring 100% code coverage, but anything with only about 60%-80% coverage probably needs looking into.
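
As an illustration, the gate can be a rough sketch like the one below, which fails the build when overall line coverage drops under a floor. It assumes a Cobertura-style coverage.xml whose root element carries a "line-rate" attribute between 0 and 1; adjust the parsing for whatever coverage tool you actually use:

    import java.io.File;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;

    // Coverage gate sketch: read the overall line-rate from the report
    // and break the build if it is below the team's agreed floor.
    public class CoverageGate {
        public static void main(String[] args) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new File("coverage.xml"));
            double lineRate = Double.parseDouble(
                    doc.getDocumentElement().getAttribute("line-rate"));

            System.out.printf("Line coverage: %.1f%%%n", lineRate * 100);
            if (lineRate < 0.60) {          // below ~60%? needs looking into
                System.err.println("Coverage below the agreed floor.");
                System.exit(1);             // break the build
            }
        }
    }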

I've worked in teams where the daily build was done manually and the developer doing the build had to perform the integration work too, as all too often the check-in criterion was "it builds on my machine". Getting an automated build going that gets kicked off either on every check-in or at least once every hour, and requiring the developer whose check-in broke the build to fix it, is a massive step in the right direction and will (hopefully) improve quality over time.

Code cleanliness is something that I find hard to impress on a team from the outside. In a sense this sounds like what you're doing - the QA role cleaning up the code? - but maybe I've got this wrong. If I haven't, I think you'll need to change that. Quality isn't something you can or should bolt on as an afterthought; the people working on the code should strive to achieve the quality goals, and code reviews should highlight areas where the original developer needs to improve the code, not have a "QA person" come in and clean it up. Apologies if I have misunderstood this, but if this is part of your process it needs changing right now, because you're doing the equivalent of mum cleaning up the messy teenager's bedroom.

Timo Geusch
+1 for "Quality isn't something you can or should bolt on as an afterthought, the people working on the code should strive to achieve the quality goals and code reviews should highlight areas where the original developer needs to improve the code"The best to achieve this is to do pair programming sessions. You learn a lot from one another and make less mistake. I don't say you should par program *all the time*, but do it regularly.
Stephane
No no, we don't go and clean up after others. The person responsible for QA just gives feedback to the original developer, and that developer is responsible for fixing his own mess.
Kim L
So that's code review, and that's a good thing, indeed :)
Stephane
@Kim L, ah, that makes a lot more sense to me. As Stephane says, code reviews == good.
Timo Geusch
+10  A: 

Do you use any software development methodology, e.g. Scrum? Scrum is one nice Agile way of working, but there are other good processes too.

We use Scrum. This is a good way to make our teams efficient, but it is also a good way of introducing rules into the way we develop software. Like you, I'm part of a small team. Unfortunately we're not blessed with a QA department or any dedicated QA people. Work done during the Sprint should be potentially shippable, so the developers in the team need to handle the QA job.

In Scrum, and e.g. Kanban, you use a task board to keep track of the current Sprint, and these boards often have a column for tasks awaiting approval by QA. What we do is that when a task is done, we move it to "Ready for verification", and then another developer on the team does the QA job. He will:

  • Assure that the new functionality does what it is expected to do / the bug has been fixed / etc.
  • Verify that there are sufficient unit tests
  • Do a quick code review to check that the code looks clean and understandable

If something in the review is not satisfactory, he will move the task back to the start, and it needs to be fixed before it can enter another QA session.

None of us really have any knowledge about QA, but we experienced a lift in code quality after introducing the verification.

stiank81
We use Scrum too, but it doesn't enforce any QA. Your model sounds pretty much like the one we are using.
Kim L
Having a verification step in your process means you can agree on what this "verification" should contain. This is where we do our "QA" - also on code quality. But I don't like the word "enforce"; the team should agree on what they would like to do in this process. Hopefully everyone agrees that code quality is important!
stiank81
+8  A: 

Sounds like you are doing lots of things right and asking the right questions.

For the last three years I've worked on 2-4 person development teams without any formal QA. We have very satisfied clients and low bug counts.

This works for us because:

  • Everyone's priority is quality software. We don't pass around a QA role; we all do it, all the time. We want our code to look good, all the developers are eager to write both unit and integration tests, and there's team pressure to make sure the tests are there.
  • We pair program extensively. This small overhead improves quality significantly and has all sorts of advantages. You develop a shared understanding of what "quality" means, and answer the questions yourselves.
  • We have regular "retrospectives" where we ask what we can improve. Related to that, if we have quality problems, the team figures out what needs to change to address them (5 Whys analysis). We have instituted rules such as "two eyes on any check-in" as a result.

All that being said, quality is ultimately about satisfied users. I try to bring people back to that when discussing quality in the abstract (and arguing about variable names). Ultimately it should be about how the software solves users' problems - not crashing is only the first step.

ndp
A: 

You could set up a server that does static code analysis, like Sonar. It can be configured to check out and build your code once a day, run the syntax and semantic checks provided by different plugins (Checkstyle, FindBugs, etc.) over your code, and produce nice HTML output so everybody on the team can look at the potential problems found in the code.

Be warned though, that there might be false positives.
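
To give a flavour of both sides, here is a contrived example of what such tools report - one genuine bug and one warning you would probably suppress:

    // One genuine finding and one likely false positive.
    public class AnalysisExamples {

        // Real bug the tools catch: compares references, not contents.
        boolean isAdmin(String role) {
            return role == "admin";          // flagged, and rightly so
        }

        // The fix the tool suggests (also null-safe):
        boolean isAdminFixed(String role) {
            return "admin".equals(role);
        }

        // Typical false positive: the ignored createNewFile() return
        // value gets flagged, but for "touch" semantics we don't care
        // whether the file already existed.
        void touch(java.io.File f) throws java.io.IOException {
            f.createNewFile();
        }
    }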

ahe
+1  A: 

Reviewing changesets regularly is great; however, looking at the code, writing a response back in the work item associated with the changeset, and sending it back to the dev can be overly time-consuming, and wires can get crossed. Sit-down code reviews are the way to go.

Once a chunk of code/functionality has been completed, arrange a code review between the developer and either the weekly appointed QA dev or another dev. They can sit down and look at the code together while the implementor talks through what they have done, why they did it that way, etc. That way the reviewer doesn't have to spend time puzzling over "why have they done it that way?", and can also suggest other, better ways of doing a certain routine, or teach the implementor a feature of the framework being used which they may not have known about.

It's all about learning and passing on the information; this helps improve code.

Hope this helps.

WestDiscGolf
+2  A: 

I did some related research over 20 years ago. I don't think the answer has changed. In a small team, the single most important thing is that multiple pairs of eyes see the code before it goes into the project. I've been on teams that did this successfully in two ways:

  • Group review of critical code. Often the code is presented by someone other than its author.

  • Individual review of someone else's code offline.

These days I'm much less involved in software efforts that really have to work (it's one of the drawbacks of my job that the incentives are not for creating good software but for publishing papers about whatever I create), but I would probably add a third method:

  • Pair programming.

I've plenty of experience with pair programming, and I think it's better suited to solving hard problems than it is to quality assurance, but it's still better than nothing.

Norman Ramsey
+1  A: 

The purpose of a QA department is to identify and:

  • train people who don't know about software quality

  • reassign (or fire) people who don't care about software quality

As such, it is a specialised form of HR, one you think about adding once your HR department grows above 3 people. If you know the names and capabilities of everyone working in your company, you'll quite likely do a better job in 120 minutes a week than the average full-time specialist would.

This ignores the case (e.g. some public-sector contracts) where 'QA documentation' is a deliverable in itself, in which case you probably need one person to do that and another to do QA.

soru
+5  A: 

In a small team, the most important thing is choosing the best people you can find and avoiding at all costs anyone who will disrupt your development team. If you have someone like this already, get rid of them.

I have found all of the following to be useful for maintaining quality with or without someone playing an official QA role:

  • Automated unit tests
  • Automated builds - as frequent as you can manage
  • Coverage measurement of tests
  • Peer code reviews of checkins
  • Accepted coding conventions and standards
  • Personal branches
  • Frequent merges
  • Eat your own dog food!

Of these, automated tests are the most important, followed by peer reviews. I have not found group code walkthroughs to be worth the time they cost, but one-on-one reviews either just before or just after check-in are usually worth the time. Check-in reviews work best when check-ins are kept relatively small and do not combine many unrelated changes.

Personal branches allow developers to make multiple check-ins without affecting other developers' work until it is ready to be merged, but merge frequently to avoid unnoticed problems from an under-tested branch.

Christopher Barber
A: 

In my first software job out of school, we were generating large sets of data according to the customer's specifications. It was critical that this data be correct, as millions of dollars depended on each data set. We had a small team of three people. One person would write/modify code to generate the data file. The second person would write/modify code to verify the data file. The third person would certify the verifier by intentionally corrupting a copy of the data file to make sure that the verification program caught and properly reported all types of errors. We rotated through each of these positions.
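
That certification step might have looked roughly like this (a sketch from memory; DataFileVerifier.verify() and the file name are hypothetical stand-ins for our actual programs):

    import java.io.RandomAccessFile;
    import java.util.Random;

    // Take a copy of a known-good data file, corrupt one random byte,
    // and check that the verification program rejects it.
    public class VerifierCertification {
        public static void main(String[] args) throws Exception {
            String copy = "datafile.copy";   // pre-made copy of the good file
            RandomAccessFile raf = new RandomAccessFile(copy, "rw");
            Random rnd = new Random();

            long pos = (long) (rnd.nextDouble() * raf.length());
            raf.seek(pos);
            int original = raf.read();
            raf.seek(pos);
            raf.write(original ^ 0xFF);      // flip every bit of one byte
            raf.close();

            // A verifier that passes a corrupted file is itself broken.
            if (DataFileVerifier.verify(copy)) {
                System.err.println("Verifier missed corruption at byte " + pos);
                System.exit(1);
            }
            System.out.println("Verifier correctly rejected the corrupted file.");
        }
    }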

Nowadays, I'm doing software in a different field, and we organize things a little differently, but we still try to build quality in from the beginning so that we have fewer problems later on. We have a test team whose job it is to bust what is left of our developers' egos, but no QA department. There is no magic bullet for building in quality, but some things that help on our current teams include ...

  1. Regular code reviews/buddy checks for new code before checking it in.
  2. Running static analysis tools like lint.
  3. Leaving the ego at the door.
  4. Enforcing coding standards.
  5. Checking in the test code used for developing the feature/fix. The test team uses this as PART of their regression tests.

I'm not a tester, and there is much that I do not know about testing. However, the first lesson I was taught with respect to it was that a test written to pass is worthless - it must be written to fail.
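
To illustrate (a contrived JUnit 4 sketch): the first test below is effectively written to pass - its expected value is just the code's own output - while the second pins the result to an independently known fact, so it can actually fail:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class WrittenToFailTest {

        static double fahrenheit(double celsius) {
            return celsius * 9 / 5 + 32;
        }

        // "Written to pass": the expected value is the code's own output,
        // so this blesses whatever the code happens to do.
        @Test
        public void convertsLikeTheCodeDoes() {
            assertEquals(fahrenheit(100), fahrenheit(100), 0.001);
        }

        // "Written to fail": the expected value comes from an independent
        // source (water boils at 212F), so a broken formula is caught.
        @Test
        public void convertsBoilingPoint() {
            assertEquals(212.0, fahrenheit(100), 0.001);
        }
    }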

Hope this helps.

Sparky