I'm interested in how others organise their test scripts, or in good examples they've seen anywhere they've worked, and in what level of detail goes into those scripts. This specifically relates to test scripts created for manual testing, as opposed to those created for automated testing.

The problem as I see it is this: test scripts carry a lot of complexity, but without the benefit of the principles used to organise a large or complex code base. You need to be able to specify what a piece of code should do without boring the reader to death.

Also, how do you lay out test scripts? I'm not keen to create fully specified scripts suitable to be run by data-entry types, as that isn't the team we have, and the overhead of maintaining them seems too high. It also feels to me that specifying the process in such detail takes responsibility for the quality of the product away from the person actually doing the testing. Do people specify every button click and every value to be entered? If not, what level of detail is specified?

A: 

I try to make manual tests fit into an automated structure; you can have both.

The organization schemes used by automated tests (e.g., the xUnit frameworks) work for me. In fact, they can be used to semi-automate the tests, by stopping and calling for a manual test to be run, for input to be entered, or for a GUI to be inspected. The usual scheme is to mirror the directory structure of the production code, or to include the tests inside the production code, sometimes as inner classes. Tests above the unit level can often fit into the higher-level directories (assuming you have a deep enough directory tree). These higher-level tests can go in (mirrored) directories that have no production code but exist for organizational purposes.
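
To make the semi-automation idea concrete, here is a minimal sketch assuming Python's unittest; the manual_step helper, TestInvoiceScreen, and the invoice scenario are hypothetical illustrations, not features of any real framework:

    import unittest

    def manual_step(instruction):
        # Hypothetical helper: pause the automated run, show the tester
        # an instruction, and record the verdict they type in.
        print("\nMANUAL STEP: " + instruction)
        return input("Did it pass? [y/n] ").strip().lower() == "y"

    class TestInvoiceScreen(unittest.TestCase):
        def test_totals_render_correctly(self):
            # ... automated arrangement could go here (seed data, open app) ...
            self.assertTrue(manual_step(
                "Open the invoice screen and confirm the totals "
                "match the seeded order."))

    if __name__ == "__main__":
        unittest.main()

Because the manual cases live in the same tree as the automated ones, they inherit the same mirrored directory structure and can run in the same suite.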

The level of detail: well, that depends, right?

Glenn
+2  A: 

Tests executed by humans should be at a very high level of abstraction.

E.g. a test case for stackoverflow registration:

Good:

A site visitor with an existing OpenId account registers as a stackoverflow user and posts an answer.

Bad:

1) Navigate to http://stackoverflow.com 2) Click on the login link 3) Etc...

This is important for several reasons:

a) it keeps the tests maintainable, so you don't have to update your test script every time navigation elements are relabeled (e.g. 'login' changes to 'sign in'); see the sketch after this list.

b) it saves your testers from going insane from the tedium of minute details.

c) writing detailed manual test scripts is a poor use of your finite testing resources. Detailed scripts divert your testers into filing bugs for minor documentation issues; you want that time spent finding the real bugs that will impact customers.
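
One way to act on this, sketched below assuming you keep manual cases as structured data rather than prose documents (the Scenario type and the registration examples are invented for illustration), is to record only the user's intent and the pass/fail oracle, then generate the tester's checklist from that:

    from dataclasses import dataclass

    @dataclass
    class Scenario:
        # One manual test case, kept at the level of user intent.
        title: str   # what the user accomplishes, not which buttons they click
        oracle: str  # how the tester judges success

    # Hypothetical catalogue for a registration flow.
    SCENARIOS = [
        Scenario("A visitor with an existing OpenId account registers "
                 "and posts an answer",
                 "The answer appears under the visitor's new username"),
        Scenario("A visitor registers with a mistyped OpenId URL",
                 "A readable error is shown and no account is created"),
    ]

    for i, s in enumerate(SCENARIOS, 1):
        print("%d. %s\n   Expected: %s" % (i, s.title, s.oracle))

Relabelling 'login' to 'sign in' changes nothing here, because no step mentions the label.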

Matt Andersen
+1  A: 

Tests can be grouped by priority. BVT/smoke tests could have the highest priority, with functional, integration, regression, localization, stress, and performance tests having lower priorities. Depending on your test pass, you select a priority and run all tests at that priority or higher. All you need to do is decide which priority each test has.
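
As a minimal sketch of that selection step (the Priority names and the test catalogue below are assumptions, not any particular tool's API):

    from enum import IntEnum

    class Priority(IntEnum):
        # Lower value = higher priority; BVT/smoke tests come first.
        BVT = 0
        FUNCTIONAL = 1
        INTEGRATION = 2
        REGRESSION = 3
        STRESS = 4

    # Hypothetical catalogue mapping test names to priorities.
    TESTS = {
        "login_smoke": Priority.BVT,
        "report_export": Priority.FUNCTIONAL,
        "nightly_soak": Priority.STRESS,
    }

    def select(threshold):
        # Pick every test at the threshold priority or higher.
        return [name for name, p in TESTS.items() if p <= threshold]

    print(select(Priority.FUNCTIONAL))  # ['login_smoke', 'report_export']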

Craig Delthony
A: 

Hey.

Matt Andersen has provided a good answer for the general case, but there are situations where you can't do it that way. For example, when you are working on validated applications that must comply with regulations from other parties such as the FDA, and everything goes through very intensive audit, review, and sign-off, then the detailed style from his 'Bad' example is required. Although in that case I would opt for moving to automation with HP QuickTest Pro or IBM Rational Robot.

Maybe you should try a test repository? Again there are tools such as HP Quality Center and IBM's products, but these can be expensive. You could find cheaper ones that will let you organise tests into tree structures by requirement/feature, assign them priorities, group them into test suites for releases, group them into regression testing suites, etc.
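
For a home-grown repository, here is a minimal sketch of that tree idea (the Node type and the Billing example are invented for illustration):

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        # A folder in the repository tree: a requirement or feature area.
        name: str
        cases: list = field(default_factory=list)     # test case titles
        children: list = field(default_factory=list)  # sub-features

    # Hypothetical tree: requirement -> feature -> cases.
    repo = Node("Billing", children=[
        Node("Invoicing", cases=["Create invoice", "Void invoice"]),
        Node("Payments", cases=["Card payment declined"]),
    ])

    def regression_suite(node):
        # Flatten the tree into one suite, e.g. for a release regression run.
        suite = list(node.cases)
        for child in node.children:
            suite.extend(regression_suite(child))
        return suite

    print(regression_suite(repo))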

yoosiba