I'm trying to establish more formal requirements and testing procedures than we have now, but I can't find any good reference examples of the documents involved.

At the moment, after feature freeze, testers "click through the application" before deployment, but there is no formal specification of what needs to be tested.

First, I'm thinking about a document which specifies every feature that needs to be tested, something like this (making this up):

  1. user registration form
    1. country dropdown (are countries fetched from the server correctly?)
    2. password validation (are all password rules observed, is user notified if password is too weak?)
  2. thank-you-for-registration

...and so on. This could also serve as something the client signs as part of the requirements before programmers start coding. Once the feature list is complete, I'm thinking of making it the first column of a spreadsheet which also records when each feature was last tested, whether it worked, and if it didn't work, how it broke. This would give me a document testers could fill in after each testing cycle, so that programmers have a to-do list saying what doesn't work and when it broke.
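Roughly, the columns I have in mind would look something like this (the values are just placeholders):

    Feature                   | Last tested | Result      | If it broke, how?
    --------------------------|-------------|-------------|--------------------------
    1.1 Country dropdown      | <date>      | pass / fail | <notes for programmers>
    1.2 Password validation   | <date>      | pass / fail | <notes for programmers>
    2   Thank-you page        | <date>      | pass / fail | <notes for programmers>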

Secondly, I'm thinking of test cases for testers, with detailed steps like:

  1. Load user registration form.
  2. (Feature 1.1) Check country dropdown menu.
    1. Is country dropdown populated with countries?
    2. Are names of countries localized?
    3. Is the sort order correct for each language?
  3. (Feature 1.2) Enter these passwords: "a", "bob", "password", "password123", "password123#". Only the last password should be accepted.
  4. Press "OK".
  5. (Feature 2) Check thank-you note.
    1. Is the text localized to every supported language?

This would give testers specific cases and a checklist of what to pay attention to, with pointers to the features in the first document. It would also give me a starting point for automating the testing process (currently we don't have much test automation apart from unit tests).
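For example, the password step (3) above could eventually turn into something like this rough Selenium/pytest sketch (the URL and element IDs are made up and would have to match our actual form):

    # Hypothetical automation of test case 3 (Feature 1.2): only the last
    # password in the list should be accepted. URL and element IDs are
    # placeholders -- adjust them to the real registration form.
    import pytest
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    REGISTRATION_URL = "http://localhost:8080/register"  # assumption
    PASSWORDS = ["a", "bob", "password", "password123", "password123#"]

    @pytest.fixture
    def browser():
        driver = webdriver.Firefox()
        yield driver
        driver.quit()

    def test_password_rules(browser):
        for password in PASSWORDS:
            browser.get(REGISTRATION_URL)
            field = browser.find_element(By.ID, "password")   # assumed element id
            field.clear()
            field.send_keys(password)
            browser.find_element(By.ID, "ok").click()         # assumed button id
            accepted = "thank-you" in browser.current_url     # assumed success page
            assert accepted == (password == "password123#")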

I'm looking for examples of how others have done this, without too much paperwork. Typically, a tester should be able to go through all the tests in an hour or two. I'm after a simple way to get the client to agree on which features we should implement for the next version, and for testers to verify that all new features are implemented and all existing features still work, and to report the results to the programmers.

This is mostly internal testing material, which should be a couple of Word/Excel documents. I'm trying to keep one testing/bugfixing cycle under two days. I'm tracking programming time, implementation of new features and customer tickets in other ways (JIRA); this would basically be testing documentation. This is the lifecycle I had in mind:

  1. PM makes list of features. Customer signs it. (Document 1 is created.)
  2. Test cases are created. (Document 2.)
  3. Programmers implement features.
  4. Testers test features according to test cases. (And report bugs through Document 1.)
  5. Programmers fix bugs.
  6. GOTO 4 until all bugs are fixed.
  7. End of internal testing; product is shown to customer.

Does anyone have pointers to where sample documents with test cases can be found? Any tips on the process I outlined above are also welcome. :)

+1  A: 

First, I think combining the requirements document with the test case document makes the most sense, since much of the information is the same for both, and having the requirements in front of the testers and the test cases in front of the users and developers reinforces the requirements and provides varying viewpoints on them. Here's a good starting point for the document layout: http://www.volere.co.uk/template.htm#anchor326763. If you add steps to test, the expected results of each test, and edge/boundary cases, you should have a pretty solid requirements spec and testing spec in one.
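As a rough illustration (the wording is made up), a single combined entry could look like:

    Requirement 1.2: Password validation
      Description:   The registration form rejects passwords that do not meet
                     the password rules and tells the user which rule failed.
      Steps to test: Open the registration form, enter each password from the
                     agreed list, press "OK".
      Expected:      Only passwords meeting all rules are accepted; otherwise a
                     message names the failed rule.
      Edge cases:    Empty password, maximum allowed length, non-ASCII characters.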

For the steps, don't forget to include an evaluate step, where you, the testers, developers, etc. evaluate the testing results and update the requirements/test doc for the next round (you will often run into things you could not have thought of and should add to the spec, from both a requirements perspective and a testing one).

I also highly recommend using mind mapping / a work breakdown structure to ensure you have captured all of the requirements properly.

meade
A: 

You absolutely need a detailed specification before starting work; otherwise your developers don't know what to write or when they have finished. Joel Spolsky has written a good essay on this topic, with examples. Don't expect the spec to remain unchanged during development though: build revisions into the plan.

meade, above, has recommended combining the spec with the tests. This is known as Test Driven Development and is a very good idea. It pins things down in a way that natural language often doesn't, and cuts down the amount of work.

You also need to think about unit tests and automation. This is a big time saver and quality booster. The GUI-level tests may be difficult to automate, but you should make the GUI layer as thin as possible and have automated tests for the functions underneath. This is a huge time saver later in development because you can test the whole application thoroughly as often as you like. Manual tests are expensive and slow, so there is a strong temptation to cut corners: "we only changed the Foo module, so we only need to repeat tests 7, 8 and 9". Then the customer phones up complaining that something in the Bar module is broken, and it turns out that Foo has an obscure side effect on Bar that the developers missed. Automated tests would catch this because automated tests are cheap to run. See here for a true story about such a bug.
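For example, if the password rules from your question live in a plain function rather than in the form code, they can be covered cheaply on every build (the rule set here is hypothetical, Python/pytest just as an illustration):

    # Hypothetical: password rules pulled out of the GUI layer so they can be
    # unit tested on every build. The rules shown are only an example.
    import re
    import pytest

    def password_is_valid(password):
        """Example rule set: at least 8 characters, a digit and a punctuation mark."""
        return (len(password) >= 8
                and re.search(r"\d", password) is not None
                and re.search(r"[^\w\s]", password) is not None)

    @pytest.mark.parametrize("password,expected", [
        ("a", False),
        ("bob", False),
        ("password", False),
        ("password123", False),
        ("password123#", True),
    ])
    def test_password_rules(password, expected):
        assert password_is_valid(password) == expected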

If your application is big enough to need it, then specify modules using TDD and turn those module tests into automated tests.

An hour to run through all the manual tests sounds a bit optimistic, unless it's a very simple application. Don't forget you have to test all the error cases as well as the main path.

Paul Johnson
We are using unit tests; however, there are things that can't be automated and really require a human, and I'm trying to formalize that part of the process. Any documents that testers in other companies use would be of great help. I know the theory; now I'm looking for examples that can help me apply it. Unanticipated side effects are precisely the reason I'm doing the whole thing.
Domchi
+1  A: 

David Peterson's Concordion website has a very good page on techniques for writing good specifications (as well as a framework for executing those specifications). His advice is simple and concise.

You may also want to check out Dan North's classic blog post on Behavior-Driven Development (BDD). Very helpful!

Jeffrey Cameron
+1  A: 

I've developed two documents I use.

One is for more 'standard' websites (e.g. a business web presence):

http://pm4web.blogspot.com/2008/07/quality-test-plan.html

The other I use for web-based applications:

http://pm4web.blogspot.com/2008/07/writing-system-test-plan.html

Hope that helps.

louism
A: 

Go through old bug reports and build up your test cases from them. You can test for specific old bugs and also generalize from them. Since the same sorts of bugs tend to crop up over and over again, this gives you a test suite that's more about catching real bugs and less about the impossible (or very expensive) task of full coverage.
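For instance, a fixed ticket can be pinned down by a small automated test named after it (the ticket number, the helper and the bug it describes are all invented for the example):

    # Hypothetical regression test derived from an old bug report.
    import re

    def is_valid_email(address):
        # Deliberately simple check -- just enough for the example.
        return re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", address) is not None

    def test_plus_sign_in_email_accepted():
        """PROJ-123: registration used to reject e-mail addresses containing '+'."""
        assert is_valid_email("jane+shop@example.com")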

Make use of GUI and web automation, for example Selenium. A lot more can be automated than you might think; your user registration scenario, for example, is easily automated. Even tests that must be checked by a human, such as cross-browser testing to make sure things look right, can be recorded and replayed later while the QA engineer watches. Developers can even record the steps to reproduce hard-to-automate bugs and pass that on to QA rather than taking on the time-consuming, and often flawed, task of writing down instructions. Save the recordings as part of the project, give them good descriptions of the intent of the test, and link them to a ticket. Should the GUI change so that a test no longer works, and it will happen, you can rewrite the test to cover its intention.

I will amplify what Paul Johnson said about making the GUI layer as thin as possible. Separate form (the GUI or HTML or formatting) from functionality (what it does) and automate testing the functionality. Have functions which generate the country list and test them thoroughly. Then the function which uses that to generate HTML or AJAX or whatever only needs a quick check that it looks about right, because the function doing the actual work is well tested. User login, password checks, emails: these can all be written to work without a GUI. This will drastically cut down on the amount of slow, expensive, flawed manual testing which has to be done.
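A sketch of what that separation could look like for the country dropdown (the data, locales and names are placeholders):

    # Hypothetical split of the country dropdown: the list-building logic is a
    # plain function with thorough tests, the HTML wrapper stays trivial.
    COUNTRY_NAMES = {
        "en": {"DE": "Germany", "FR": "France", "HR": "Croatia"},
        "hr": {"DE": "Njemačka", "FR": "Francuska", "HR": "Hrvatska"},
    }

    def country_list(locale):
        """Return (code, localized name) pairs sorted by the localized name."""
        names = COUNTRY_NAMES[locale]
        return sorted(names.items(), key=lambda item: item[1])

    def country_dropdown_html(locale):
        # Thin presentation layer -- only needs a quick visual check.
        options = "".join('<option value="%s">%s</option>' % pair
                          for pair in country_list(locale))
        return '<select name="country">%s</select>' % options

    def test_country_list_is_sorted_per_locale():
        assert [code for code, _ in country_list("en")] == ["HR", "FR", "DE"]
        assert [code for code, _ in country_list("hr")] == ["FR", "HR", "DE"]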

Schwern