views: 153
answers: 6
We have a ton of developers and only a few QA folks. The developers have been getting more involved in QA throughout the development process by writing automated tests, but our QA practices are still mostly manual.

What I'd love is for our development practices to follow BDD and TDD so that we grow a robust test suite. The question is: while building such a test suite, how can we decide what we can entrust to the automated tests, and what we should continue to test manually?

+5  A: 

The first dividing line is -- what is substantially easier to test manually, and what is substantially easier to test in an automated fashion?

Those are, of course, pretty easy to figure out, and probably you're going to be left with a big pile of guck in the middle.

My next sieve would be -- user interface issues are among the hardest to test in an automated fashion, although some projects are making it easier. So I'd leave those to the QA folks for a while, and focus your automated tests on small units of back-end code, slowly expanding to larger integration tests across multiple units and/or multiple tiers of your application.
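
For example, a test of a small back-end unit can be as simple as the following NUnit sketch (PriceCalculator and its Total method are made-up names standing in for whatever back-end class you start with):

    using NUnit.Framework;

    [TestFixture]
    public class PriceCalculatorTests
    {
        // One small piece of back-end behavior, checked with no UI involved.
        [Test]
        public void AppliesPercentageDiscountToTheTotal()
        {
            var calculator = new PriceCalculator();

            var total = calculator.Total(100m, 10);   // 10% off a 100.00 price

            Assert.AreEqual(90m, total);
        }
    }

Tests at this level are cheap to write and fast to run, which is what makes them a good base to expand from before you tackle the larger integration tests.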

Jim Kiley
+1 for the comment about UI automation. It is hard to maintain a good UI test framework.
MatthieuF
We're a .NET shop and we use NUnit for unit tests and Cucumber with Watir for acceptance tests that exercise the UI. What we've found is that our Cucumber tests are brittle, and we don't use them for the BDD-style processes they were designed to support. Do you think it would be better to use BDD-style tests to test service-layer code instead of the UI?
bhazzard
Service-layer code, at least in 2010, is going to be easier to test in an automated fashion than UI-layer code. And you can and should do Cucumber-style BDD and testing on service-layer code (although I admit I've never used Cucumber -- I really want the opportunity to do so though!).
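
As a rough sketch of what that could look like in a .NET shop, the same kind of Gherkin scenario can be bound straight to the service layer -- no browser, no Watir. This example uses SpecFlow, a Gherkin-based framework for .NET (mentioned later in this thread), and the OrderService and Order types are made-up names:

    // OrderPricing.feature (Gherkin):
    //   Scenario: A discount code reduces the order total
    //     Given an order of 2 items at 50.00 each
    //     When a 10 percent discount is applied
    //     Then the order total should be 90.00

    using NUnit.Framework;
    using TechTalk.SpecFlow;

    [Binding]
    public class OrderPricingSteps
    {
        private OrderService _service;
        private Order _order;

        [Given(@"an order of (\d+) items at (.*) each")]
        public void GivenAnOrder(int quantity, decimal unitPrice)
        {
            _service = new OrderService();                        // call the service directly
            _order = _service.CreateOrder(quantity, unitPrice);   // instead of driving the UI
        }

        [When(@"a (\d+) percent discount is applied")]
        public void WhenADiscountIsApplied(int percent)
        {
            _service.ApplyDiscount(_order, percent);
        }

        [Then(@"the order total should be (.*)")]
        public void ThenTheOrderTotalShouldBe(decimal expected)
        {
            Assert.AreEqual(expected, _order.Total);
        }
    }

The feature file keeps the readable, conversational form that BDD is meant to support, while the steps exercise the service layer, which tends to be far less brittle than driving the browser.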
Jim Kiley
Thanks for your helpful insights.
bhazzard
+5  A: 

My advice is, automate everything you can possibly automate. Let humans do what they are good at, such as answering the question "Does this look right?" or "Is this usable?". For everything else, automate.

Bryan Oakley
This is sort of what I wanted to hear. But I'm not sure our QA folks (or even I) would buy that we can trust the automated suite.
bhazzard
Obviously you can never 100% trust the automated suite, but I've worked with a whole lot of human testers that I don't 100% trust either...
Jim Kiley
@Jim Kiley: you are very correct. The advantage of the automated test is that you're reasonably assured it runs exactly the same each time. With humans, especially humans who are forced to run the same manual test over and over, it's easy for mistakes to be made. Manual testing can be mind-numbing.
Bryan Oakley
+4  A: 

+1 to Jim for recommending manual testing of UI elements; it's relatively easy to use a UI automation tool to create tests, but it takes a lot of thought and anticipation to design a test framework that's robust and comprehensive enough to minimize maintenance of the tests.

If you need to prioritize, a couple of techniques I've used to identify non-UI areas that would benefit most from additional testing are:

  1. Look at the bug reports for previous releases, especially the bugs reported by customers if you have access to them. A few specific functional areas will often account for a majority of the bugs.
  2. Use a code coverage tool when you run your existing automated tests and take note of areas with little or no coverage.
gareth_bowles
+3  A: 

Take a look at Mike Cohn's article on the Test Automation Pyramid. Specifically, consider what parts of the UI really need to be tested that way. Corner cases, for example, are often better tested through the service layer.
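
To illustrate that last point, a corner case such as rejecting a bad quantity is much cheaper to pin down at the service layer than through a browser-driven UI test (a hypothetical NUnit sketch; OrderService and CreateOrder are made-up names):

    using System;
    using NUnit.Framework;

    [TestFixture]
    public class OrderServiceCornerCaseTests
    {
        // Exercised directly against the service layer -- no browser involved.
        [Test]
        public void RejectsOrdersWithANegativeQuantity()
        {
            var service = new OrderService();

            Assert.Throws<ArgumentOutOfRangeException>(
                () => service.CreateOrder(-1, 10m));
        }
    }

The UI tests can then be limited to a handful of happy-path journeys, which keeps the pyramid the right way up.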

Paul Tevis
What is an effective tool to use for testing the service layer? Should we use a traditional xUnit-style testing framework, or a more BDD-style, Gherkin-based framework (e.g., Cucumber or SpecFlow)?
bhazzard
A: 

It won't hurt to test any new functionality manually to make sure it meets the requirements, and then add it to the automation suite for regression. (Or is that too traditional?)

Chanakya
+2  A: 

Manual testing can do the following, unlike automated testing:

  • GUI testing
  • Usability testing
  • Exploratory testing
  • Use variations when running tests
  • Find new bugs, not just regressions
  • A human eye can notice many kinds of problems at once. An automated test verifies only the specific things it was written to check.

Automated testing can do the following, unlike manual testing:

  • Stress/Load testing
  • You can even use the automated test suite for performance testing
  • Configuration testing (IMHO this is the biggest benefit). Once written, you can run the same test across different environments with different settings and uncover hidden dependencies that you never thought about.
  • You can run the same test over thousands of input values (see the sketch after this list). With manual testing, you have to narrow the inputs down to a minimal set using test-design techniques.
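
A minimal sketch of that data-driven style in NUnit (ShippingCalculator and the rates shown are made up; with [TestCaseSource] the rows could come from a file or database instead of attributes):

    using NUnit.Framework;

    [TestFixture]
    public class ShippingCalculatorTests
    {
        // The same assertion runs once per [TestCase] row.
        [TestCase(0.5, "US", 4.99)]
        [TestCase(2.0, "US", 7.99)]
        [TestCase(2.0, "CA", 12.99)]
        public void CalculatesShippingCost(double weightKg, string country, double expected)
        {
            var calculator = new ShippingCalculator();

            Assert.AreEqual(expected, calculator.Cost(weightKg, country), 0.001);
        }
    }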

Also, it is easier and more likely to make a mistake in an automated test than during manual testing. I recommend automating the most valuable functionality, but still running at least a manual sanity pass before an important release.

katmoon
+1 for a detailed response.
bhazzard