views: 394
answers: 7

We are a small custom software development shop that works on a variety of projects for a variety of customers. We are continuously adding feature requests and making minor bug fixes to these projects. As the customers' businesses change ... their software needs to change.

In our environment the burden of testing is put on the customer: they are the domain experts, and it also minimizes some of the cost for them. I don't want to get into the conversation of whether we should hire a tester, whether testing charges should be part of the overall cost, and so forth.

For a new project, the time and cost of testing is included; it's all the feature requests that get added over time where we start to have testing issues. The customer wants changes rolled out into their environment as soon as possible ... which nowadays ... is very doable.

My question is, aside from unit testing from a code standpoint, what are best practices for test plan documentation? Does a simple document for each feature request work? Should the test process be done as a peer review? I'm looking for some insight on how others are handling this process.

Again ... we are a small custom software development shop ... each developer is designer, coder, tester and implementer ... it all just depends on the day and the time! :)

Clarification

The one thing I don't want to happen is for development to be this huge administrative burden with paperwork and reviews.

I'm trying to find a good balance! :)

+2  A: 

The best testing strategy I've used is to watch the users in operation and do my best to approximate what they're going to do with the application. Real-world test scenarios (clicking something, going to the next screen, entering data, submitting, and so on to accomplish a typical user goal) are not 100% guaranteed to find every error, but combined with unit testing they form a robust testing platform. Most importantly, you'll know that the main things the user wants to do aren't broken.
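For example, one such user-goal scenario might be scripted like this (a rough pytest-style sketch; the application facade and its methods are hypothetical stand-ins for whatever the real app exposes):

    # Scenario test: walk one typical user goal end to end.
    # InvoiceApp and its methods are hypothetical stand-ins for the real app.
    from myapp import InvoiceApp  # hypothetical application facade


    def test_user_creates_and_submits_invoice():
        app = InvoiceApp()

        # Step 1: the user logs in and opens the "new invoice" screen.
        app.login("jane@example.com", "secret")
        screen = app.open_new_invoice()

        # Step 2: the user enters data and submits.
        screen.set_customer("ACME Corp")
        screen.add_line_item("Consulting", hours=10, rate=125)
        result = screen.submit()

        # Step 3: the main thing the user wants to do isn't broken.
        assert result.status == "submitted"
        assert result.total == 1250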

devinmoore
I tried that once... it is WAY too time consuming. Living next to the customer at his/her shop usually means that they exploit you and let you develop new stuff at their whim. You must guide the customer toward the solution you re-engineered for him/her, not the other way round...
Manrico Corazzi
I agree Manrico ... I too have run into that problem.
mattruma
+1  A: 

This seems the ideal soil for agile methodologies: small company, small projects, skilled people, requirements rapidly evolving, customers personally involved. In fact, you might give Scrum a try, letting the customer play the role of Product Owner.

As for acceptance tests, you might try a tool like FitNesse.
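FitNesse expresses acceptance tests as wiki tables backed by fixture code; the underlying idea is table-driven checks the customer can read and sign off on. A tool-agnostic sketch of the same idea in pytest (the pricing function and the values are invented for illustration):

    # Table-driven acceptance test: each row is a business rule the customer
    # can read and sign off on. calculate_discount is a hypothetical example.
    import pytest

    from pricing import calculate_discount  # hypothetical module under test

    ACCEPTANCE_TABLE = [
        # (order total, customer type, expected discount %)
        (100.00, "regular", 0),
        (1000.00, "regular", 5),
        (1000.00, "preferred", 10),
    ]


    @pytest.mark.parametrize("total, customer_type, expected", ACCEPTANCE_TABLE)
    def test_discount_rules(total, customer_type, expected):
        assert calculate_discount(total, customer_type) == expected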

Manrico Corazzi
+1  A: 

Wow, there's so much to say here. But a couple of quick thoughts:

  1. Look at risks and decide where it is most important to spend your limited testing time. What features are the most used? The most potentially problematic? The newest code? (One way to act on this is sketched below.)
  2. Document exactly what will and will not be tested, to minimize misunderstanding with the team and customers.
  3. Use automation where possible and sensible; it's not a magic bullet.
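On the first point, one lightweight way to act on a risk ranking is to tag tests by risk and run the high-risk subset first. A hedged sketch using pytest markers (the marker names and the functions under test are made up):

    # Tag tests by risk so limited testing time goes to the riskiest areas first.
    # Marker names and the functions under test are hypothetical.
    import pytest

    from invoicing import submit_invoice   # hypothetical, frequently changed feature
    from reporting import export_csv       # hypothetical, stable and rarely used


    @pytest.mark.high_risk   # most used, newest code: always run before a release
    def test_invoice_submission_succeeds():
        assert submit_invoice(total=100).status == "submitted"


    @pytest.mark.low_risk    # stable for years: run in the full pass only
    def test_empty_export_produces_empty_file():
        assert export_csv([]) == ""

Declaring the markers in pytest.ini and running "pytest -m high_risk" gives a quick, risk-weighted pass when the customer wants a change rolled out immediately.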

+3  A: 

This is a really huge question and deserves a really long and detailed answer.

For your testing to be worthwhile, it must be based off the requirements for your features. Ideally you would be able to go:

  • requirements document with clearly enumerated and identified requirements
  • test design that is based directly off each testable requirement (how you intend to test each feature). This can possibly be skipped if you want to save time/effort.
  • tests document that details the exact steps to take to test each requirement. This document should be a signoff document that is your record that on Day X, the customer ran these test steps and demonstrated that the feature worked.

Ideally you and the customer would agree on this in advance of implementing the feature, but in the real world that's not always possible.
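To keep the signoff lightweight, one option is to stamp each test with the requirement it covers and let the test run itself produce the dated record. A rough sketch in pytest (the requirement IDs, module, and values are invented for illustration):

    # Trace each test back to the requirement it verifies, so the signoff
    # document can list requirement IDs alongside the date the tests passed.
    # Requirement IDs and the function under test are hypothetical.
    import pytest

    from orders import apply_discount  # hypothetical feature under test

    requirement = pytest.mark.requirement  # custom marker; declare it in pytest.ini


    @requirement("REQ-042")  # "Preferred customers receive a 10% discount"
    def test_preferred_customer_discount():
        assert apply_discount(total=200.0, customer_type="preferred") == 180.0


    @requirement("REQ-043")  # "Regular customers under the threshold get no discount"
    def test_regular_customer_below_threshold():
        assert apply_discount(total=40.0, customer_type="regular") == 40.0

Running the suite with "pytest --junitxml=signoff.xml" (or whatever report format the customer will accept) then produces a dated artifact showing that the requirement-tagged tests passed on a given day.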

Stewart Johnson
We have decided to create formal test plans for our overall project that matches up with our requirements. We are then going to create simple test plans, quick and dirty, for feature requests and bug fixes. Thanks for the input!
mattruma
+2  A: 

Where I work now, we do a combination of user-assisted testing and requirements testing. When I worked at smaller companies (i.e. six guys in a house) we simply ran through the application and looked for broken items. It's dangerous to say that's all you need, but if your developers are all wearing multiple hats, one more can't hurt.

Peer reviews and manager/customer testing are good additions. I know when I worked at smaller places, those two would have been gold. Instant feedback which the developers can then act on is worth every minute spent testing.

Abyss Knight
+2  A: 

The one thing that's bitten me worst over the years is regression: you fix one thing, test that and it's fine, but you broke two completely unrelated things which your test didn't find. The best technique to avoid regressions is automation. Making one person responsible for writing automated test harnesses is doomed to fail in small shops (they'll see it as a demotion and quit), so the best solution is to make each developer responsible for writing a test harness for their apps as they develop them. Ideally that harness should run automatically and generate a pass/fail result (this isn't always possible for UI-heavy work, but you can at least try).
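As a rough sketch of the "runs automatically and generates a pass/fail" part, assuming a conventional tests/ directory and Python's standard unittest discovery, the harness does not have to be elaborate; the exit code is the signal a scheduled job can act on:

    # run_tests.py: minimal regression harness. Run it from a scheduled job or
    # a pre-release script; the exit code is the pass/fail signal.
    import sys
    import unittest


    def main() -> int:
        # Discover every test module under tests/ (including ones added later,
        # so new features automatically join the regression suite).
        suite = unittest.defaultTestLoader.discover("tests")
        result = unittest.TextTestRunner(verbosity=1).run(suite)
        return 0 if result.wasSuccessful() else 1


    if __name__ == "__main__":
        sys.exit(main())

Wire that into a nightly job or the release script and regressions surface before the customer sees them.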

Most third-party test programs we ever tried were effectively junk - they just couldn't cope with custom controls.

Bob Moore
There are really a couple of types of tests that need to be done. I agree with the automated testing, but what about UI testing, including look, feel, flow and interaction?
mattruma
+2  A: 

A set of related checklists may be more powerful than a set of detailed scripts. Assume that unit testing is done as part of the dev process and don't create a checklist for that test level. For the higher test levels, create a checklist by hosting a workshop and collaborating with dev, test, and ideally some customer proxy if you can't actually include customers.

The advantage of checklists is that they support many different types of tests, from manual to automated. They are easy to revise and improve over time. They also work well as a way of collecting evidence that tests were run, since each item can be checked off along with a link back to the folder holding the evidence for that run.

Further, they do not assume that your testers have no knowledge of how to use the system - in fact they encourage some investigative-style testing that might go beyond the bounds of a detailed script. A problem with detailed scripts is that they become the only thing the testers go through and they become like a form to fill out instead of a guide for thinking about testing.

Step 1. Choose your test levels. Define what each 'level' means to the group. For example, you might decide that you have functional testing, system testing, and acceptance testing as the three test levels for the product. You can make these levels thinner for minor releases and thicker for major releases but the idea should be to stick to using your test levels for every release.

Step 2. Host a workshop to create checklists (or testing backlogs). Each test level should have pass/fail criteria and an accompanying checklist of things that have to be tested for the level to be considered complete. Don't make the checklist items 'test cases' - make them the requirements/features instead. Each test level will have a different type of checklist item. For example, you could choose to define 'system testing' as tests run at the use case level. So you have one set of tests to run for each use case, and you measure progress by how many use cases have thus far been tested adequately.
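As a hedged illustration of what such a checklist can look like if you keep it in the repository rather than in a tool, here is a minimal sketch; the levels, use cases, and evidence paths are invented examples:

    # A checklist per test level. Items are features/use cases, not test cases;
    # each can be checked off with a link to the evidence folder for that run.
    # All names and paths are invented examples.
    from dataclasses import dataclass, field


    @dataclass
    class ChecklistItem:
        description: str        # e.g. a use case or feature name
        done: bool = False
        evidence: str = ""      # link/path to screenshots, logs, signoff


    @dataclass
    class TestLevel:
        name: str
        items: list = field(default_factory=list)

        def progress(self) -> float:
            return sum(i.done for i in self.items) / len(self.items)


    system_testing = TestLevel("System testing", [
        ChecklistItem("Use case: create invoice"),
        ChecklistItem("Use case: amend submitted invoice"),
        ChecklistItem("Use case: month-end export"),
    ])

    system_testing.items[0].done = True
    system_testing.items[0].evidence = "evidence/create-invoice/"
    print(f"{system_testing.name}: {system_testing.progress():.0%} complete")

The same structure works whether the items are checked off manually or by an automated run; the evidence link is what lets the checklist double as a signoff record.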

Step 3. In that same workshop, define a testing workflow for each test level, so that it is clear who does what and when. This is where you can discuss how much of each checklist needs to be run for each release, if not all of it (defining thin and thick versions based on minor/major releases). You can also discuss how much test automation you can afford at each level (that is an economic consideration, after all, much more than a technical one). The testing workflow should include who publishes test results and where, and who evaluates the test process, including the checklists. Ideally there is some element of feedback in the test workflows at each level so that the organization has a chance to continually improve.

Step 4. If you have discrete testing phases, then you can use a burn-down chart to track test progress. For example, if user acceptance testing takes two weeks for a major release, you can create a burn-down chart showing actual progress against ideal progress, based on the number of items in the checklist that have been completed versus how many have to be completed in that period of time. You may have to weight the checklist items to make this more accurate (since some checklist items take longer to test than others).
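The burn-down arithmetic itself is simple enough for a spreadsheet or a few lines of code; a sketch with made-up weights and a ten-working-day window:

    # Weighted burn-down for a two-week acceptance-test window.
    # Weights and completion flags are made-up illustration values.
    total_days = 10                      # working days in the two-week window
    # (item weight, completed?) -- heavier items take longer to test
    items = [(3, True), (1, True), (2, False), (5, False), (1, False)]

    total_weight = sum(w for w, _ in items)
    done_weight = sum(w for w, done in items if done)

    for day in range(1, total_days + 1):
        ideal_remaining = total_weight * (1 - day / total_days)
        print(f"day {day:2d}: ideal remaining weight = {ideal_remaining:4.1f}")

    print(f"actual remaining weight today = {total_weight - done_weight}")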

Step 5. Prepare the team for giving and receiving feedback without knowing the exact details of the test cases and scripts that will be run; using checklists might be an adjustment if you are used to having detailed scripts.

Bottom line: use checklists. Define them as a group and continually work them as a group.

Adam Geras