I am working on a Java application which has a lot of use cases. Inputs to the application are different kinds of events occurring at different times, which gives rise to hundreds of test cases. Has anybody faced this kind of scenario? Do you make sure all the test cases are covered before making a release to the QA team? So my question is: what is the best approach for testing programs with lots of test cases?

+1  A: 

It all depends on your time and your budget. In an ideal world, you would test all the possibilities. But we all know that is a little far from reality.

From the way you state it, this seems to be a really large project. In that case, expect to spend a lot of time on testing as well; testing time should be part of any project estimate.

What I'd suggest is to separate the test cases into different categories and do whatever you can with your time and budget. Cover the main categories with more tests than the secondary ones. This is crucial, because usually 80% of the time is spent in 20% of the code, and that is the part that matters most.

It is also important to prioritize whatever the rest of the application depends on. Say you have an application where only subscribed users can do anything: then the subscription part should be very well tested, if you want your application to be useful.
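For instance, a minimal sketch of testing that critical path first (SubscriptionService and its API are hypothetical names invented for illustration):

import java.util.HashMap;
import java.util.Map;

import org.junit.Test;
import static org.junit.Assert.*;

public class SubscriptionServiceTest {

    // Minimal in-line stub so the sketch is self-contained;
    // a real service would live in its own class.
    static class SubscriptionService {
        private final Map<String, String> users = new HashMap<String, String>();

        void subscribe(String user, String password) {
            users.put(user, password);
        }

        boolean canLogIn(String user, String password) {
            return password.equals(users.get(user));
        }
    }

    @Test
    public void subscribedUserCanLogIn() {
        SubscriptionService service = new SubscriptionService();
        service.subscribe("alice", "secret");
        assertTrue(service.canLogIn("alice", "secret"));
    }

    @Test
    public void unsubscribedUserCannotLogIn() {
        SubscriptionService service = new SubscriptionService();
        assertFalse(service.canLogIn("bob", "anything"));
    }
}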

Finally, you should try creating some acceptance tests that will indicate when something is wrong (though they won't be much help in finding out exactly what is wrong).

Samuel Carrijo
A: 

Of course, the best approach would be to run every test case... but time, money and so on make this impossible...

One approach would be to group the test cases and write one "super" test case per group, as sketched below.
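For example, a minimal sketch using JUnit 4's Parameterized runner (the event names and expected results are invented):

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;
import static org.junit.Assert.*;

// One "super" test covering a whole group of similar inputs.
@RunWith(Parameterized.class)
public class EventGroupTest {

    @Parameters
    public static Collection<Object[]> data() {
        // Each row is one member of the group: input event, expected outcome.
        return Arrays.asList(new Object[][] {
            {"LOGIN", true},
            {"LOGOUT", true},
            {"UNKNOWN", false},
        });
    }

    private final String event;
    private final boolean expected;

    public EventGroupTest(String event, boolean expected) {
        this.event = event;
        this.expected = expected;
    }

    @Test
    public void handlesEvent() {
        // Hypothetical handler, inlined to keep the sketch self-contained.
        boolean handled = event.equals("LOGIN") || event.equals("LOGOUT");
        assertEquals(expected, handled);
    }
}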

Another approach would be to identify the critical modules of your application and prioritize their test cases.

Matthieu
Why would constraints like time and money make it impossible to cover all test cases? If you can't test it (automatically or manually), how do you know that you have delivered what was requested? So you need to exercise all the test cases in some way, and I don't buy that doing it manually is (repeatedly) cheaper or faster.
Mark Seemann
@Mark: in theory you are of course right... But sometimes you don't have enough time to implement new features and test all the test cases, because you don't have enough machines for automated testing, or for some other reason...
Matthieu
Sometimes customers prefer on-time features with small bugs over perfect, bug-free features that ship late...
Matthieu
+5  A: 

If you have hundreds of test cases, I assume that is because they reflect the variability of the input and the requirements of the application?

If you do not write tests for all of these (known) cases, then how will you know whether you have delivered the desired functionality?

The alternative to automated testing is manual testing, and I fail to see how manually testing hundreds of test cases is going to save you any time compared to automated testing.

Mark Seemann
Usually, if you're testing only once, manual testing is faster. Automated testing pays off when you must run the same tests over and over.
Samuel Carrijo
How often are things tested once and only once?
Matt Grande
A: 

Writing lots of unit tests from the very first step of a new application is not my approach. In general, when I create a new application from scratch, I first build a prototype without any unit tests. Once I have a working prototype for the sunny-day scenario, and some peers have reviewed it (approved it, improved it, etc.), I do two things: I write unit tests that cover the sunny-day scenario, and then I refactor my code.

Only then do I continue working on the application, writing the unit tests I consider important as I go. Too many unit tests can give the false impression that the code is fully covered; in the real world, such coverage rarely exists.

Ron Klein
+1  A: 

Write the tests before you write the code. This doesn't mean writing tests for every scenario before even starting; it means write one test, make it pass, and move on to the next step. It honestly doesn't add that much to development time, especially considering you now know the second something breaks.
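A minimal sketch of that rhythm (the Calculator here is a made-up example):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CalculatorTest {

    // Step 1: write one test for the next small behavior.
    // It fails until Calculator.add() is implemented.
    @Test
    public void addsTwoNumbers() {
        assertEquals(5, new Calculator().add(2, 3));
    }

    // Step 2: write just enough code to make it pass, then move on
    // to the next test. Inlined here to keep the sketch self-contained.
    static class Calculator {
        int add(int a, int b) {
            return a + b;
        }
    }
}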

Aim for 100% test coverage in all cases.

Matt Grande
A: 

With that number of combinations, it is probably not realistic to cover all scenarios.

What you can do is:

  • Exercise all code at least once where possible (close to 100% code coverage)

  • Write extra tests for the areas where you feel there may be problems

  • Whenever QA finds an error, write a test that reproduces it before you fix it (a sketch follows this list)
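A sketch of such a regression test (the bug number and the scenario are invented):

import org.junit.Test;
import static org.junit.Assert.*;

public class Bug1234RegressionTest {

    // Reproduces the error QA reported, before the fix; the class
    // name records which bug report it guards against.
    @Test
    public void eventWithNegativeTimestampIsRejected() {
        assertFalse(isValidTimestamp(-1L));
    }

    // Minimal stand-in for the real validation logic.
    static boolean isValidTimestamp(long millis) {
        return millis >= 0;
    }
}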

Shiraz Bhaiji
A: 

If you cannot write tests for everything, the next best thing I would suggest is measuring your coverage with a tool like Cobertura. That at least tells you how much of your code your tests exercise. Measuring coverage becomes very important as the code base grows; otherwise it is impossible to keep track of how good your test suite is.

OpenSource
+1  A: 

I think you are confusing use-case testing and unit testing. Unit tests should target the method level, or at most the class level. Here, you should aim for the best coverage your budget and time constraints allow, and you should write the tests before or while writing your code.

I think developers should also run through the actual use cases and automate them with something like Selenium to make sure the product is solid, but ultimately, after a reasonable amount of this type of testing, it should be handed over to QA.
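A minimal sketch of automating one use case with Selenium's WebDriver API (the URL and element IDs are invented, and the selenium and htmlunit-driver jars are assumed to be on the classpath):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.htmlunit.HtmlUnitDriver;

public class LoginUseCase {

    public static void main(String[] args) {
        WebDriver driver = new HtmlUnitDriver(); // headless browser
        try {
            driver.get("http://localhost:8080/login");
            driver.findElement(By.id("username")).sendKeys("alice");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("submit")).click();

            if (!driver.getTitle().contains("Dashboard")) {
                throw new AssertionError("login use case failed");
            }
        } finally {
            driver.quit();
        }
    }
}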

Gren
+6  A: 

Don't try to cover the whole application with unit tests from the beginning. Do it in small, incremental steps. Set a small milestone to reach within a week or two, then start writing tests for the first piece of functionality in that milestone, and only then implement it. It should go something like this:

Small, incremental steps

  1. Break the application down into the smaller feature milestones you can see at that moment
  2. Choose the most pressing feature to implement at that moment
  3. Break that feature into smaller tasks
  4. Write a test for one of the tasks
  5. Run the test. It should fail (RED). If it passes, your test is broken.
  6. Write the least amount of code needed for that test to pass. Hard-coded values are allowed.
  7. Run the tests (GREEN). They should pass (even with the hard-coded values). Now you know you have a safety net for future refactorings.
  8. Refactor your code (REFACTOR) if there's a need; otherwise go back to step 4. (A minimal sketch of this cycle follows the list.)
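A minimal sketch of steps 4-8 (the Greeter example is made up; note how the first GREEN can be a hard-coded value, and a second test forces the generalization):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class GreeterTest {

    // Steps 4-5 (RED): this test fails until greet() is written.
    @Test
    public void greetsAlice() {
        assertEquals("Hello, Alice", greet("Alice"));
    }

    // Steps 6-7 (GREEN): the least code that passes is a hard-coded
    //     static String greet(String name) { return "Hello, Alice"; }
    // This second test then forces the generalization (step 8):
    @Test
    public void greetsBob() {
        assertEquals("Hello, Bob", greet("Bob"));
    }

    static String greet(String name) {
        return "Hello, " + name;
    }
}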

Prepare for change

The advantage of this method of breaking a huge task into manageable pieces is that it gives you the chance to have something finished within a week or two. Later on, management may rethink their priorities, and you'll have to reorganize the list from the first point above.

Another advantage is that having a unit test backing you up at every step gives you confidence and a sense that you are actually accomplishing something, and you may deliver something to your management faster than you'd believe, because at every step you have a (somewhat) working version of your program. They can see progress, which is very important for both you and them: they see that work is actually being done, and you get the feedback you need for your application (requirements always change; let them change as early as possible).

As Gren said, you're probably confusing use cases with unit tests. Several of the actions a user can take in the application may well be handled by a single method in the domain model, so the situation may not be as bad as it seems.
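For instance (hypothetical names), several distinct user-visible events may all boil down to one small method that a single group of unit tests can cover:

import java.util.Date;

public class AuditLog {

    private final StringBuilder entries = new StringBuilder();

    // "User logged in", "session timed out", and "user logged out"
    // are three use cases, but one method under test.
    public void record(String event, Date when) {
        entries.append(when).append(' ').append(event).append('\n');
    }

    public String dump() {
        return entries.toString();
    }
}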

No up front design, even for unit tests

Anyway, don't try to write all of your tests up front. That's how I used to do it, and it was a big failure. Once you work in small iterations (test method, then method implementation), you'll become much more productive and self-confident. If you write all your tests up front, you may find that the refactorings needed to make the first tests pass force you to rethink the whole API you envisioned when writing the tests in the first place; whereas writing a test, then the implementation, then a test, then the implementation, you end up with what is called emergent design. And that is the best kind of design. This is how design patterns appeared: they did not come from someone who sat all day long thinking about ways to solve the problem.

Ionuț G. Stan
very good answer, SO is a great place to ask this kind of question: +1
dfa
great insight. thanks for the elaborate answer.
Geos
Thank you both.
Ionuț G. Stan
+2  A: 

I make extensive use of JUnit's theories in order to minimize the number of test cases:

import org.junit.Assert;
import org.junit.Assume;
import org.junit.experimental.theories.DataPoints;
import org.junit.experimental.theories.Theories;
import org.junit.experimental.theories.Theory;
import org.junit.runner.RunWith;
import static org.hamcrest.CoreMatchers.*;

@RunWith(Theories.class)
public class D {

    // All int data points are pooled by type, so both theory
    // parameters draw from the union of these two arrays.
    @DataPoints
    public static int[] a = {1, 2, 3, 4};
    @DataPoints
    public static int[] b = {0, 1, 2, 3, 4};

    @Theory
    public void divisible(int a, int b) {
        // Skip (rather than fail) every combination where b is zero.
        Assume.assumeThat(b, is(not(0)));
        System.out.format("%d / %d\n", a, b);
        int c = a / b;
        Assert.assertThat(c, is(a / b));
    }
}

divisible will be called with every combination of a and b (JUnit pools all int data points, which is why some pairs appear twice):

1 / 1
1 / 2
1 / 3
1 / 4
1 / 1
1 / 2
1 / 3
1 / 4
2 / 1
2 / 2
2 / 3
2 / 4
...
4 / 3
4 / 4
4 / 1
4 / 2
4 / 3
4 / 4

Nice, isn't it?

Also check out my hashCode/equals checker.

dfa
there is a bug in Chrome that prevents me from fixing my answer :( :(
dfa
I did it for you :)
Ionuț G. Stan
thanks. I guess I should start using JUnit.
Geos
yes, I just tested this code with JUnit 4.5
dfa