views: 162 | answers: 6
I've been writing software for years, but have never mastered the art of testing. My typical testing includes thorough run-throughs on my machines, and then testing in various operating systems via VMware. Mainly a brute-force play-with-it-until-it-breaks-or-doesn't approach. Where possible I work on actual hardware, but this isn't always practical.

My question is twofold:

  • How do medium-sized professional development houses do their testing?
  • What common techniques or procedures (outside of unit testing) can a team of one developer apply? I'm looking for practicality.

Thank you for your time and input.

+3  A: 

For a small team, unit tests and automated integration tests are crucial: there aren't enough hands or hours for much manual testing, so the more you automate, the better. This includes Continuous Integration.

Set up a separate 'beta' environment that is as close to your production environment as possible. Do most of your tests there - this way you will pick up all the things you've forgotten in your 'release plan'.
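As an illustration, here is a minimal sketch of an automated smoke test that could run against such a beta environment after every deployment (the URL and health endpoint are assumptions, not something from the question):

    import java.net.HttpURLConnection;
    import java.net.URL;

    // Hypothetical smoke test: fail fast if the freshly deployed beta
    // application isn't even responding.
    public class BetaSmokeTest {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://beta.example.com/health");  // assumed endpoint
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");
            int code = conn.getResponseCode();
            if (code != 200) {
                throw new AssertionError("Beta smoke test failed: HTTP " + code);
            }
            System.out.println("Beta environment responded OK.");
        }
    }

A CI server can run something like this right after deploying to beta, before any manual testing starts.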

Jakub Konecki
+7  A: 

Step 1: Unit Testing

Divide your software into components (which can be anything from single functions up to whole programs) and unit-test those components thoroughly, especially the API and behavior that the rest of the application can see. (Don't forget to check failure modes too, but beware of binding too tightly to the exact nature of a failure; it's often enough to test for the presence of the right class of exception rather than its exact message.) Make sure those tests pass; you're honing them into a specification of what the component should be doing. (Automated test running helps here, as does a CI system.) This is important because of…
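For example, here is a minimal JUnit 4 sketch of testing a failure mode by asserting on the exception's class rather than its message (the Account class is invented for illustration):

    import org.junit.Test;

    public class AccountTest {
        // Minimal made-up component, just enough to compile the example.
        static class Account {
            void deposit(int amount) {
                if (amount < 0) throw new IllegalArgumentException("negative deposit");
            }
        }

        // Bind to the class of the failure, not its exact message.
        @Test(expected = IllegalArgumentException.class)
        public void rejectsNegativeDeposit() {
            new Account().deposit(-5);
        }
    }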

Step 2: Integration Testing

Test that the compositions of components that make up the application actually work (this is integration testing). Ideally you'll only be finding bugs in the specifications at this point (hah!); wherever a component turns out to be wrong despite passing its unit tests, that tells you its unit tests were incomplete. Whenever things fail to work together despite being told to do so, you've probably got a bug in your specs from the previous step, so you typically resolve these failures by adding more detail to your unit tests and fixing the components until they work.

Note that to make integration go well, you want to keep this stage simple enough that the integration code itself falls into the “Obviously No Bugs” class of programs rather than the larger “No Obvious Bugs” class. An integration framework like Spring or a scripting language can help a lot here (though with the latter you have to guard against creating components on the sly; if you create a component, admit it, and make sure it has a proper usage contract and unit tests to ensure that it meets that contract).
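As a sketch of what “Obviously No Bugs” wiring can look like (all the names here are invented), the integration layer only composes already-tested components and contains no logic of its own:

    // Hypothetical components, reduced to interfaces for the example.
    interface OrderRepository { String find(int id); }
    interface OrderService { String statusOf(int id); }

    public class Application {
        public static void main(String[] args) {
            OrderRepository repo = id -> "open";          // stand-in implementation
            OrderService service = id -> repo.find(id);   // pure composition, no logic
            System.out.println(service.statusOf(42));     // prints "open"
        }
    }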

Where you can, build components by composing others together; these higher-level components need to be unit-tested as described in Step 1 above. This might sound like extra work – it probably is – but it means you can use automated tests for larger parts of the program. (Alas, it's harder to do integration tests with an automated tool; such tools tend to work better for unit tests, where you can mock out all the irrelevant parts.) But this doesn't save you from…
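A sketch of such a higher-level component being unit-tested with JUnit and Mockito, its collaborator mocked out (all the domain names are invented):

    import static org.mockito.Mockito.*;
    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class OrderServiceTest {
        interface OrderRepository { String find(int id); }   // invented collaborator

        // The higher-level component under test, composed from the repository.
        static class OrderService {
            private final OrderRepository repo;
            OrderService(OrderRepository repo) { this.repo = repo; }
            String statusOf(int id) { return repo.find(id); }
        }

        @Test
        public void delegatesToTheRepository() {
            OrderRepository repo = mock(OrderRepository.class);  // mock the irrelevant part
            when(repo.find(42)).thenReturn("open");

            assertEquals("open", new OrderService(repo).statusOf(42));
            verify(repo).find(42);   // the composition behaved as specified
        }
    }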

Step 3: Acceptance Testing

This is where the overall application is tested to see whether it actually does what is desired. This might be automatable, but usually isn't. This is the level where you bring in users to see whether things are what they expected, though you might want to use internal testers a bit first. How easy this all is depends on the nature of the application.

Note also that user interfaces tend to spend more time in this step than in the others, precisely because what makes a good UI is difficult or impossible to pin down in algorithms (it has much more to do with human psychology, after all).

A final note: what I've written here makes testing sound like a laborious process that takes ages at the end of a project. It isn't! You can often get parts of an application done before others, integrate those parts (with mocks for the missing bits), and test quite a bit of how acceptable this sub-application really is. Of course, when doing this, take care to stop users from believing that everything is done; one way is to have dialog boxes pop up and say things like “magic to happen here”. Silly but effective. :-)
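A throwaway sketch of such a placeholder using Swing's JOptionPane (the feature name is obviously made up):

    import javax.swing.JOptionPane;

    // A deliberately obvious stub so acceptance testers don't mistake
    // an unfinished feature for a finished one.
    public class MagicPlaceholder {
        public static void main(String[] args) {
            JOptionPane.showMessageDialog(null,
                    "Magic to happen here: report export isn't built yet.");
        }
    }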

Donal Fellows
Note that it's possible to define more levels of testing than these, but to my mind those things are really just elaborations of what I say above. I admit I like thinking in terms of specifications, but I accept that it's next to impossible to get specs right first time round.
Donal Fellows
Your comments are most helpful. I guess I do these things automatically -most- of the time, but breaking the process apart like you have will help me do it -all- of the time.
Brad
+2  A: 

My testing tool examples are Java based, but I will try to suggest tools which are ported to multiple languages or are language agnostic.

Use a unit testing tool like JUnit (ported to a variety of languages). This will allow you to refactor your code safely. Most code bugs should result in the addition or correction of at least one test.
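For instance, a sketch of the kind of regression test that gets added alongside a bug fix (the leap-year bug is invented): it would have failed on the old code and now pins the corrected behaviour down:

    import static org.junit.Assert.assertTrue;
    import org.junit.Test;

    public class LeapYearRegressionTest {
        // Invented example: imagine the original code forgot the %400 rule.
        static boolean isLeapYear(int y) {
            return (y % 4 == 0 && y % 100 != 0) || y % 400 == 0;
        }

        @Test
        public void year2000IsALeapYear() {   // this test pins the fix in place
            assertTrue(isLeapYear(2000));
        }
    }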

Use revision control, and set up an automated build environment that checks out the code, builds it, and then runs the automated test suite. If the application uses a database, the build environment should have its own database. Use different branches for production (released) and development code.

Use integration testing tools like HttpUnit or Synergy to test web applications. Tools of this type are basically language agnostic, but you may want to choose one that can be extended in the language(s) you are using. For non-web applications there may not be an equivalent tool for your platform. You may also want to use a performance tool like JMeter.
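For example, a minimal HttpUnit sketch (the URL and the expected page text are assumptions):

    import com.meterware.httpunit.WebConversation;
    import com.meterware.httpunit.WebResponse;

    public class LoginPageSmokeTest {
        public static void main(String[] args) throws Exception {
            WebConversation wc = new WebConversation();
            WebResponse page = wc.getResponse("http://localhost:8080/login");  // assumed URL
            if (!page.getText().contains("Sign in")) {      // assumed page content
                throw new AssertionError("Login page did not render as expected");
            }
        }
    }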

These tools have some setup cost, but they pay for themselves quickly. Overall development time may be the same as, or less than, if you hadn't used them.

Acceptance tests generally don't lend themselves to automated testing. Where they do, include them in the integration testing. Get acceptance feedback as early and as often as possible.

BillThor
+1 for a good selection of tools.
Donal Fellows
+1  A: 

How do the pros do it? That all depends on who the 'pro' is... There are dozens of different approaches to testing, and plenty of experts to tell you that their way is the one true way. Agile gurus will tell you a very different story from the waterfall gurus. The ISTQB guys will tell you a very different story from the Context-Driven guys.

Unfortunately there is no one true way, and you have to figure it out for yourself. Your approach to testing depends on too many factors. That's probably not very helpful, but you just need to be aware that any answer you get here will be only one option of many, and it may be completely inappropriate for your situation.

Personally, after several years in software testing, I have decided to align myself with the Context Driven school of software testing. See: http://www.context-driven-testing.com

Secondly, from your description of your current approach, that sounds a lot like exploratory testing to me. You may find this material interesting: satisfice.com/sbtm/

Mark Irvine
+3  A: 

As a professional tester, my suggestion is that you should have a healthy mix of automated and manual testing. The examples below are .NET based, but it should be easy to find an equivalent tool for whatever technology you are using.

AUTOMATED TESTING

MANUAL TESTING
As much as I love automated testing, it is, IMHO, not a substitute for manual testing. The main reason is that an automated test can only do what it is told and only verify what it has been told to treat as pass or fail. A human can use their intelligence to find faults and raise questions that appear while testing something else.

  • Exploratory Testing
    ET is a very low-cost and effective way to find defects in a project. It takes advantage of the intelligence of a human being, and it teaches the testers/developers more about the project than any other testing technique I know of. Doing an ET session aimed at every feature deployed in the test environment is not only an effective way to find problems fast, but also a good way to learn, and it's fun!
    http://www.satisfice.com/articles/et-article.pdf
Jonas Söderström
A: 

One thing you can do (combined with all the previous suggestions) is identify the risky and critical areas of your app and try to focus your testing efforts on these areas.

EKI
Any area that isn't tested is risky. I've found this out the hard way rather too often…
Donal Fellows