All,

I am a developer, but I like to know more about testing processes and methods. I believe this helps me write more solid code, as it improves the cases I can cover with my unit tests before delivering the product to the test team. I have recently started looking at Test-Driven Development and the Exploratory Testing approach to software projects.

Now it's easier for me to find test cases for the code that I have written. But I am curious to know how to discover test cases when I am not the developer of the functionality under test. Take, for example, a basic user registration form like the ones we see on various websites. Assuming the person testing it is not the developer of the form, how should one go about testing the input fields on the form? What would be your strategy? How would you discover test cases? I believe this kind of testing benefits from an exploratory testing approach, though I may be wrong here.

I would appreciate your views on this.

Thanks, Byte

+3  A: 

Testing Computer Software is a good book on how to do many different types of testing: black box, white box, test case design, planning, managing a testing project, and probably a lot more I missed.

For the example you give, I would do something like this:

  1. For each field, I would think about the possible values you can enter, both valid and invalid. I would look for boundary cases; if a field is numeric, what happens if I enter a value one less than the lower bound? What happens if I enter the lower bound as a value? Etc.
  2. I would then use a tool like Microsoft's Pairwise Independent Combinatorial Testing (PICT) Tool to generate as few test scenarios as I could across the cases for all input fields.
  3. I would also write an automated test to pound away on the form using random input, capture the results and see if the responses made sense (virtual monkeys at a keyboard).
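The boundary checks in step 1 and the random "monkey" input in step 3 can be sketched together. The `validate_age` function and its 18–120 bounds below are invented for illustration; the point is the shape of the test cases, not this particular field:

```python
import random
import string

# Hypothetical validator for a numeric "age" field with bounds 18..120.
def validate_age(raw):
    try:
        value = int(raw)
    except (TypeError, ValueError):
        return False
    return 18 <= value <= 120

# Step 1: boundary cases around the lower and upper bounds.
boundary_cases = {
    "17": False,   # one below the lower bound
    "18": True,    # the lower bound itself
    "120": True,   # the upper bound itself
    "121": False,  # one above the upper bound
    "": False,     # empty input
    "abc": False,  # non-numeric input
}
for raw, expected in boundary_cases.items():
    assert validate_age(raw) == expected, raw

# Step 3: a virtual monkey at the keyboard -- random input should never
# crash the validator, only ever come back True or False.
random.seed(42)
for _ in range(1000):
    junk = "".join(random.choices(string.printable, k=random.randint(0, 20)))
    assert validate_age(junk) in (True, False)

print("all boundary and monkey tests passed")
```

The monkey loop is deliberately dumb: it is not checking correctness of individual answers, only that arbitrary input is handled gracefully.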
Patrick Cuff
Thanks Patrick. I'll look at the Microsoft PICT tool; it seems interesting.
byte
+4  A: 

Bugs! One of my favorite starting places on a project for adding new test cases is to take a look at the bug tracking system. The existing bugs are test cases in their own right, but they also can steer you towards new test cases. If a particular module is buggy, it can lead you to develop more test cases in that area. If a particular developer seems to introduce a certain class of bugs, it can guide testing of future projects by that developer.

Another useful consideration is to look more at testing techniques, than test cases. In your example of a registration form, how would you attack it from a business requirements perspective? Security? Concurrency? Valid/invalid input?

Tom E
Tom, your view on looking at existing bugs is good. I had not thought of that.
byte
Yes, this is actually a good way to bootstrap a good testing system: you would manually test whether your fix for a bug worked, so why not write a test case for it? And while you're at it, you can try a couple of different values, including some you expect to raise exceptions. This way you are accomplishing your task (fixing the bug) AND working towards a better development infrastructure (writing tests).
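Turning a fixed bug into a permanent regression test can be as simple as this sketch. The `strip_phone` helper and the bug number are made up for illustration:

```python
# Hypothetical helper that once crashed on None input (imaginary bug #1042).
def strip_phone(raw):
    if raw is None:        # the fix: the original code lacked this guard
        return ""
    return "".join(ch for ch in raw if ch.isdigit())

# Regression test: the exact input from the bug report, kept forever.
assert strip_phone(None) == ""

# While we're at it, probe a few neighbouring values too,
# including inputs we expect to come back empty.
assert strip_phone("") == ""
assert strip_phone("+1 (555) 010-9999") == "15550109999"
assert strip_phone("no digits here") == ""

print("regression suite passed")
```

Each bug fix leaves one more assertion behind, so the suite grows exactly where the code has historically been fragile.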
stw_dev
A: 

Group brainstorming sessions. (or informally in pairs when necessary)

Kimball Robinson
A: 

Discussing test ideas with others. When you explain your ideas to someone else, you tend to see ways to refine or expand on them.

Kimball Robinson
A: 

Identify your assumptions from different perspectives:

  • How can users possibly misunderstand this?
  • Why do I think it acts or should act this way?
  • What biases might I have about how this software should work?
  • How do I know the requirements/design/implementation is what's needed?
  • What other perspectives (users, administrators, managers, developers, legal) might exist on priority, importance, goals, etc, of this software?
  • Is the right software being built?
  • Do I really know what a valid name/phone number/ID number/address/etc looks like?
  • What am I missing?
  • How might I be mistaken about (insert noun here)?
Kimball Robinson
A: 

Make data tables with major features listed across the top and side, and consider possible interactions between each pair. Doing this in three dimensions can get unwieldy.
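A minimal sketch of enumerating those pairwise interactions, using Python's `itertools`; the feature names are invented for illustration:

```python
from itertools import combinations

# Invented feature list for a registration form.
features = ["username", "password", "email", "captcha", "remember-me"]

# Every unordered pair of features -- each one is a cell in the
# two-dimensional table whose interaction deserves a look.
pairs = list(combinations(features, 2))

for a, b in pairs:
    print(f"consider interaction: {a} x {b}")

# n features yield n*(n-1)/2 pairs; three-way combinations grow much
# faster, which is why the table gets unwieldy in three dimensions.
print(len(pairs), "pairs to review")
```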

Kimball Robinson
A: 

Keep test catalogs with common questions and problem types for different kinds of tasks, such as integer validation, workflow steps, etc.
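Such a catalog can live as plain data that any test plan pulls from; the entries below are invented examples of the idea:

```python
# A tiny test catalog: reusable problem types per kind of task.
catalog = {
    "integer validation": [
        "zero", "negative", "just below minimum", "minimum",
        "maximum", "just above maximum", "non-numeric text",
        "leading/trailing whitespace",
    ],
    "workflow steps": [
        "skip a step", "repeat a step", "go back then forward",
        "abandon midway", "two users on the same step",
    ],
}

for task, probes in catalog.items():
    print(f"{task}: {len(probes)} standard probes")
```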

Kimball Robinson
A: 

Read relevant sections from Lessons Learned in Software Testing to get an idea of different test dimensions.

Kimball Robinson
+1  A: 

Ask questions. Keep a list of question words and force yourself to come up with questions about the product or a feature. Lists like this can help you get out of the proverbial box or rut. Don't spend too much time on a question word if nothing comes to you.

  • Who
  • Whose
  • What
  • Where
  • When
  • Why
  • How
  • How much

Then, when you answer them, ask "else" questions. This forces you to distrust, for a moment at least, your initial conclusions.

  • Who else
  • Whose else
  • etc..

Then, ask the "not" questions--negate or refute your assumptions, and challenge them.

  • Who not (eg, Who might not need access to this secure feature, and why?)
  • What not (what data will the user not care about? What will the user not put in this text box? Are you sure?)
  • etc...

Other modifiers to the questions could be:

  • W else
  • W not
  • W risks
  • W different
  • Combine two question words, eg, Who and when.
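The question-word-times-modifier idea is a small combinatorial grid, which you could even generate mechanically to make sure no combination is skipped (a sketch, with the lists taken from the answer above):

```python
from itertools import product

question_words = ["Who", "Whose", "What", "Where", "When", "Why", "How", "How much"]
modifiers = ["", "else", "not", "risks", "different"]

# Cross every question word with every modifier; the empty modifier
# yields the plain question word itself.
prompts = [f"{w} {m}".strip() + "?" for w, m in product(question_words, modifiers)]

for p in prompts:
    print(p)

print(len(prompts), "question prompts")  # 8 words x 5 modifiers
```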
Kimball Robinson
+1  A: 

In the case of the form, I'd look at what I can enter into it and test various boundary conditions there, e.g. what happens if no username is supplied? I'm reminded that there are a few different forms of testing:

  1. Black box testing - This is where you test without looking inside what is being tested. The challenge here is that not being able to see inside can make it hard to judge which tests are useful and how many different tests are worthwhile. This is, of course, what some default testing can look like.

  2. White box testing - This is where you can look at the code and use metrics like code coverage to ensure that you are covering a percentage of the code base. This is generally better, as in this case you know more about what is being done.

There are also performance tests, as opposed to logic tests, that are worth noting somewhere, e.g. how fast does the form validate my input, rather than just whether the form does the right thing.
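A black-box check of the missing-username case, plus a crude timing check, might look like the sketch below. The `register` function and its error messages are assumed, since in black-box testing we only see inputs and outputs:

```python
import time

# Hypothetical registration handler; black-box tests only see its
# inputs and outputs, never its implementation.
def register(username, password):
    if not username:
        return {"ok": False, "error": "username required"}
    if len(password) < 8:
        return {"ok": False, "error": "password too short"}
    return {"ok": True, "error": None}

# Logic test: what happens if no username is supplied?
result = register("", "s3cretpass")
assert result["ok"] is False
assert result["error"] == "username required"

# Performance test: does validation come back quickly enough?
start = time.perf_counter()
for _ in range(10_000):
    register("alice", "s3cretpass")
elapsed = time.perf_counter() - start
assert elapsed < 5.0  # a generous budget for 10k validations

print("black-box checks passed")
```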

JB King
Black box and white box testing are focused on coverage heuristics, or "what" is being tested. Kaner, Bach and Pettichord list at least five dimensions (hint, one other dimension is "who tests" - developers, users, testers, etc). Other _coverage_ metrics in your vein would include screenshot/menu tours, configuration testing, and several others. Read Lessons 48-54 of Lessons Learned in Software Testing.
Kimball Robinson
A: 

Make use of the Exploratory Testing Dynamics and the Satisfice Heuristic Test Strategy Model by James Bach. Both offer general ways to start thinking more broadly or differently about the product, which can help you switch between boxes and heuristics in testing.

Kimball Robinson
Thank you for all your answers. Appreciate that. +1 for all answers
byte