I have been working with a small firm over the last few months. We have a Trac instance set up (with projects as Milestones). Developers send completed tickets to a QA user. Someone then picks up the ticket, evaluates whether it's done, and closes it or bounces it back.

Therein lies the problem. Using what criteria should the QAer (sometimes the dev him/herself) evaluate the ticket?

Some ideas come to mind, like:

  • Works on platforms X, Y, and Z
  • Look and feel
  • Understood the original task

But if there's a short QA checklist out there somewhere, or if someone has some ideas, it would really help. Thanks!

+1  A: 

Not exhaustive, but:

  • fresh checkout, build, install
  • run all automated tests
  • run through whatever your "short list" of manual tests is

Ideally those first two would be automated with the likes of buildbot.
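If you do script those first steps yourself rather than letting a CI tool do it, a minimal sketch might look like the following; the repository URL and the build/test commands are placeholders for whatever your project actually uses:

    #!/usr/bin/env python
    """Sketch of a 'fresh checkout, build, install, test' check.
    The repo URL and every command here are placeholders -- substitute
    whatever your project actually uses (or let buildbot run them)."""
    import subprocess
    import sys
    import tempfile

    REPO_URL = "https://example.com/svn/project/trunk"  # hypothetical

    def run(cmd, cwd):
        print("+ " + " ".join(cmd))
        return subprocess.call(cmd, cwd=cwd)

    def main():
        workdir = tempfile.mkdtemp(prefix="qa-build-")
        steps = [
            ["svn", "checkout", REPO_URL, "."],    # fresh checkout
            ["make"],                              # build
            ["make", "install", "DESTDIR=stage"],  # install into a scratch area
            ["make", "test"],                      # run the automated tests
        ]
        for cmd in steps:
            if run(cmd, workdir) != 0:
                print("FAILED at: " + " ".join(cmd))
                return 1
        print("Fresh checkout, build, install and tests all passed.")
        return 0

    if __name__ == "__main__":
        sys.exit(main())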

If you have devs doing the QA, you should also have them peer-review the changes.

Beyond that... you're going to have to base it on what you're trying to accomplish and what your business constraints are.

retracile
Right, then each ticket would have to include manual tests too. +1
Yar
+1  A: 

It depends on the environment, but I'd suggest all the following, although it's by no means a complete list:

  • Retest the manual steps on the OLD build to make sure they reliably reproduce the issue
  • Retest the steps on the new build, and confirm that the issue no longer occurs
  • While doing so, confirm not only that the issue is gone but that the correct behaviour occurs in its place
  • Consider the areas of the product affected by the change. Run some sanity tests on those areas to confirm nothing has been unduly hit by this change
  • Check related areas of the product - areas the changed code interacts with - and make sure they are still working (sanity test)
  • If you have any regression tests, run them, or choose a selected subset for execution (see the sketch after this list)
  • Any automated tests should also be run over the affected area.
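One way to keep a selectable regression subset on hand is to tag tests by product area, so the QAer can run just the areas the ticket touches. Here is a minimal sketch using pytest markers; the area names and the code under test are made up:

    # test_regression_subset.py -- sketch of tagging regression tests by area.
    # The 'billing'/'reports' area names and invoice_total() are made-up stand-ins
    # for your real code; register the markers in pytest.ini to silence warnings.
    import pytest

    def invoice_total(amounts, tax_rate):
        """Stand-in for the real code under test."""
        return round(sum(amounts) * (1 + tax_rate), 2)

    @pytest.mark.regression
    @pytest.mark.billing
    def test_invoice_total_applies_tax():
        assert invoice_total([10.00, 5.00], 0.20) == 18.00

    @pytest.mark.regression
    @pytest.mark.reports
    def test_empty_invoice_totals_zero():
        assert invoice_total([], 0.20) == 0.00

    # Verifying a billing ticket? Run only that slice of the regressions:
    #   pytest -m "regression and billing" test_regression_subset.py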

If you have SDETs (Software Developers in Test) and it's feasible, they can also peer-review the code for the fix; this depends entirely on your work environment, of course.

If the fix needs to be documented for the release notes, the tester should also confirm either that those docs exist and are accurate (if the developer is meant to write them) or, if the tester is meant to write them, that they actually get written.

Similarly, any automated tests that need to be built should be run against the old build first, to confirm that they identify the issue correctly and reliably, and then against the new build, to confirm that they pass.
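A minimal sketch of that check, assuming the old and new builds are available side by side and the new repro test can be invoked as a single command (the paths and test command are placeholders):

    #!/usr/bin/env python
    """Sketch: confirm the new automated repro test fails on the old build
    and passes on the new one. The test command and paths are placeholders."""
    import subprocess
    import sys

    TEST_CMD = ["python", "tests/test_ticket_repro.py"]  # hypothetical repro test

    def test_passes(build_dir):
        """Run the repro test against one build directory; True if it passes."""
        return subprocess.call(TEST_CMD, cwd=build_dir) == 0

    def main(old_build, new_build):
        if test_passes(old_build):
            print("Suspicious: the repro test passes on the OLD build, so it")
            print("probably does not actually exercise the bug.")
            return 1
        if not test_passes(new_build):
            print("The repro test still fails on the NEW build; fix not verified.")
            return 1
        print("OK: test fails on the old build and passes on the new one.")
        return 0

    if __name__ == "__main__":
        sys.exit(main(sys.argv[1], sys.argv[2]))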

There are a number of other business-dependent things that can be checked - further quality processes like peer-reviewing the automated test code, or confirming with scenario validation testers that the fix makes sense for the environment it is going into.

Mark Mayo
+1 Very interesting, I can easily see that this topic is huge.
Yar