I'm not referring to unit tests or automated tests. How does one go about handing an application over to a test team and expecting them to report bugs? Do you, as the developer, create the test cases? What does the test team need to know to test your application? Is there a specific guideline for setting up a test team and processes so that an application can be tested?
It always depends on the business, but I'm used to creating my own unit tests and integration tests while someone else does the system test on another machine.
The test team should know and understand the requirements. They should not know how your code works, but they should know input/output.
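To make the input/output point concrete, here is a minimal sketch of black-box test cases. The `calculate_discount` function and its "10% off orders of 100 or more" rule are hypothetical stand-ins for whatever feature is under test; the point is that the tester only needs the documented inputs and expected outputs, not the implementation.

```python
# A black-box tester only needs documented inputs and expected outputs.
# `calculate_discount` is a hypothetical stand-in for the feature under
# test; the assumed spec is "10% off orders of 100 or more".

def calculate_discount(order_total):
    return order_total * 0.9 if order_total >= 100 else order_total

# Each case is (input, expected output), taken straight from the spec,
# with no knowledge of how the function works internally.
cases = [
    (50, 50),       # below the threshold: no discount
    (100, 90.0),    # at the threshold: 10% off
    (200, 180.0),   # above the threshold: 10% off
]

for given, expected in cases:
    actual = calculate_discount(given)
    assert actual == expected, f"{given}: expected {expected}, got {actual}"
print("all black-box cases passed")
```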
My ideal test team would consist of good developers who simply write unit/integration/automation tests; the catch is that you have to invest about ten times more in it.
We've usually done this by providing the Test-Team with pretty detailed User-Stories. The same stories we as the developers received as the "specification" of the software.
I run it.
I haven't really bought into all this unit test/TDD hype. I start implementing bottom-up, so I can write code and run it to make sure it works. Once satisfied, I work on the next step, and repeat as needed. Granted, I've never had the need to make my code bulletproof, so perhaps my method won't scale up to a greater level of scrutiny.
As far as a test team goes, by the time they see the software it should be already tested (incrementally, as described) by the developers. In other words, an alpha-quality build that works. The job of the test team should be to try to break it by testing integration errors and edge cases. Once the test team has done their best to try to break the program and the developers have fixed the flaws identified, you've got a beta-quality release.
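The kind of edge-case probing described above can be sketched like this. The `parse_quantity` function is hypothetical; what matters is the list of boundary and malformed inputs a tester would throw at it after the developers have verified the happy path.

```python
# Hypothetical input parser a test team might attack. The developers
# verified the happy path; the testers probe boundaries and bad input.

def parse_quantity(text):
    """Parse a quantity field: an integer between 1 and 999."""
    value = int(text.strip())
    if not 1 <= value <= 999:
        raise ValueError("quantity out of range")
    return value

# Edge cases a tester would try beyond the obvious "5":
for text in ["0", "1000", "", "  7  ", "-3", "3.5", "abc"]:
    try:
        print(f"{text!r} -> {parse_quantity(text)}")
    except ValueError as err:
        print(f"{text!r} -> rejected: {err}")
```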
In my environment, we have Business Analysts who work with the developers to determine specs (what can you do?, can you do xyz?, how long will it take?). The BAs document, or attempt to document, the application as the developers code it. We then test at loosely defined milestones of functionality by handing off compiled code to the BAs, who try to click all the buttons and ensure the data going in and out is correct. Any bugs found by the BAs are given back to the developer(s) to correct and then handed back to the BA to test again. Many times the BAs are put on other projects during development and have a difficult time putting time back into the initial project. It is certainly not an optimal solution, though, and could use more structure to ensure all the steps are carried out as intended.
The test team should have people who are experts in the field your software operates in. They should get the software and a description of the feature to be tested. It can certainly help to give them a brief test plan describing the testing the developer already did as a starting point. Because they understand the field they can test your software as users rather than developers, which is what you want if you actually want to sell anything (assuming you're not selling to developers, that is). Internally they will create test plans and tracking methods, but from the point of view of development all they need is to understand what the new feature does.
Timelines are entirely dependent on your business and development method. If you're using agile methods you need to have some form of QA there day 1 at the specification stage, before development even begins. If you're using a more waterfall-ish method then it can be acceptable to have a feature-complete product before handing over to QA, but the department should have specs before that so they can begin forming test plans. Planning to form a QA team at the same time you plan to finish development is planning to fail.
Let the testers write the tests. If they are not too experienced with the application, it will be useful to review some tests if you have the time. That way you do not need to reject "bugs" from incorrect tests.
They have to know at least the UI spec/UI flow and the functional spec/user stories. That way no one has to duplicate the spec in the test cases. Internal workings may be less useful for testing, but it is better to have too much information than too little: it's hard to guess correctly, but very easy to ignore things.
Good testers will find ways to break your code you have never thought of. Also, they might find usability problems.
Most important is good communication between testers and developers. That way, testers will learn to test the right things and provide you with useful bug reports.
Release it and let the clients do it for me. (No, I do not work for Microsoft.)
Options:
- Testing Dirty Systems describes a way for someone who is unfamiliar with a system, or working where the system isn't fully documented, to successfully identify defects.
- Find a consultant to guide you.
- Hire someone to guide you.
- Do this yourself by considering the following:
You really need to hire a Quality Assurance person and not Quality Control. Quality Control runs tests; Quality Assurance (QA) plans what needs to be tested and how it gets tested. The Quality Assurance person(s) may also perform Quality Control (QC) activities if the team is small, and may recruit other internal staff to do QC activities. A good QA person will know how to learn a system and can develop plans accordingly.

A good QA person is typically more expensive than QC because of the value they should bring. You can't call a QC person your new QA person without training, and even training won't stick if they are not a good fit. If you can't afford QA but you can afford QC, take a deep look into why that appears to be true, because it likely isn't. You may not be able to find a good QA person if you don't know how to screen them; you may need help finding someone. A good recruiter/consultant should know.
You will have an acceptable setup when:
- You've identified the features to be tested (including features to be tested for regression testing)
- You can find test cases easily
- There are guidelines for creating and managing test cases and test results
- Your QA team is involved throughout the process from inception through coding to release.
- You can quantify the quality of a release
- You can see the improvement of the quality of your releases (because you can quantify the quality over a period of time)
- End-users and upper management are less grumpy because there are fewer complaints about your system :)
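"Quantifying the quality of a release" can start as simply as tracking defect counts per release and watching the trend over time. A minimal sketch, with entirely made-up example figures and a simple defects-per-feature metric (your own metric and data will differ):

```python
# Illustrative way to quantify release quality over time: field defects
# per release, normalized by features shipped. All figures are invented
# example data, not a recommendation of these exact numbers.

releases = [
    {"version": "1.0", "field_defects": 42, "features": 12},
    {"version": "1.1", "field_defects": 30, "features": 10},
    {"version": "1.2", "field_defects": 18, "features": 11},
]

for rel in releases:
    density = rel["field_defects"] / rel["features"]
    print(f"{rel['version']}: {density:.1f} field defects per feature shipped")
```

A falling number across releases is the improvement the checklist above asks you to be able to demonstrate.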