My team is building an analytical dashboard for a SaaS/multi-tenant application using Cognos. The problem I am encountering is finding the right strategy for testing it.

Right now, we are testing one report with start and end date filters (in month/year format), one dimensional filter, and two controls for selecting the measure (there are 7 measures, each of which can be represented as either a sum or a distinct count).

In addition, users can drill through points in the resulting report to detailed transactional data.

It is also an implicit requirement that reports for one tenant must not display data for a different tenant.

So, here's the problem. Testing this simple report is taking two weeks, involving hundreds of tests for a huge set of combinations of filters and measures. That seems like gross overkill to me.
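To give a sense of scale, here is a rough sketch of how the combinations multiply (the 7 measures and 2 aggregations are real; the date ranges and dimension values below are made-up placeholders):

    from itertools import product

    # The 7 measures and 2 aggregations come from the actual report;
    # the date ranges and dimension values are placeholders.
    measures = [f"m{i}" for i in range(1, 8)]
    aggregations = ["sum", "distinct_count"]
    date_ranges = [("2010-01", "2010-06"), ("2010-01", "2010-12"), ("2010-07", "2010-12")]
    dim_values = ["north", "south", "east", "west"]

    cases = list(product(measures, aggregations, date_ranges, dim_values))
    print(len(cases))  # 7 * 2 * 3 * 4 = 168 cases, from just 3 sampled date ranges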

Is there a 'strategy' which can be used to reliably reduce the search space and avoid overly repetitive testing?

+1  A: 

Good question! When we publish (or want to publish) new Tableau reports that hit our SSAS cube, we usually ask a certain group of people to act as a super-user group and use the report as they would in production. This doesn't fit a fixed testing window (say you only have 2 days to test it out), because in practice it continues over the course of a few weeks. In the meantime, bug fixes or alterations can be made and redistributed to this same group without having to stop the testers, make them wait for a fix, and then continue.

Don't get me wrong, having a deadline launch date is still ideal, but putting the report into circulation within a small group oftentimes moves things along quicker than going through each parameter test case.

ajdams
Thanks! Great answer. The very dilemma I am facing is that the formal test cases are squeezing out the focus group testing. So you hit the nail on the head of my chief concern.
+1  A: 

Sounds like they're trying to test "every possible combination" the report could be used in. It might be wise to do that for a few select reports that best represent the typical or critical reports. This will help flush out serious flaws in the design, architecture, or implementation.

But trying to test every possible combination for every report in hopes of finding every bug is impossible. ajdams' suggestion makes good sense and is typical of the "compromises" required. It's all about time, resources, and what makes the most sense for the situation.

So I'd suggest a hybrid of the two: pick a tiny handful of reports to do extensive testing on, but make sure those tests are focused on finding bugs that are likely to be shared by other reports. Then test each individual report using a technique such as ajdams'.
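One concrete way to trim the combination set is pairwise ("all-pairs") test selection: instead of the full Cartesian product, keep a subset in which every pair of parameter values still appears together at least once. A minimal greedy sketch (the parameter values are made-up placeholders, and dedicated all-pairs tools will produce smaller sets than this naive pass):

    from itertools import combinations, product

    def pairwise_cases(*value_lists):
        """Greedy all-pairs: keep a candidate case only when it covers
        at least one value pair not covered by the cases chosen so far."""
        uncovered = {(i, a, j, b)
                     for i, j in combinations(range(len(value_lists)), 2)
                     for a in value_lists[i]
                     for b in value_lists[j]}
        chosen = []
        for case in product(*value_lists):
            pairs = {(i, case[i], j, case[j])
                     for i, j in combinations(range(len(case)), 2)}
            if pairs & uncovered:
                chosen.append(case)
                uncovered -= pairs
        return chosen

    measures = [f"m{i}" for i in range(1, 8)]
    aggregations = ["sum", "distinct_count"]
    date_ranges = ["2010H1", "2010FY", "2010H2"]
    dim_values = ["north", "south", "east", "west"]

    print(len(pairwise_cases(measures, aggregations, date_ranges, dim_values)))
    # every pair of values is still exercised, with far fewer cases
    # than the 168 full combinations

Every individual value and every two-way interaction still gets tested; only the higher-order combinations are sampled, which is where most of the repetition lives.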

Atoms
A: 

What would really help your testing gain speed and reliability would be to prepare test scripts that cover the required functionality of the reports. As suggested before, a beta group of users will help you catch bugs and design flaws while beta-testing.
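For example, a minimal sketch of such a script using pytest. Here run_report, the reporting_harness module, and the row format are hypothetical stand-ins for however you drive the reports (the Cognos SDK, exported query results, or the underlying data mart):

    import pytest

    # Hypothetical helper: runs the report with the given parameters and
    # returns its rows. In practice this would call the Cognos SDK or
    # query the data mart the report is built on.
    from reporting_harness import run_report

    MEASURES = ["m1", "m2", "m3", "m4", "m5", "m6", "m7"]

    # The stacked parametrize decorators combine: 7 x 2 = 14 test cases.
    @pytest.mark.parametrize("measure", MEASURES)
    @pytest.mark.parametrize("aggregation", ["sum", "distinct_count"])
    def test_measure_returns_rows(measure, aggregation):
        rows = run_report(tenant="tenant_a", measure=measure,
                          aggregation=aggregation,
                          start="2010-01", end="2010-06")
        assert rows, "report returned no data for a populated period"

    def test_tenant_isolation():
        # The implicit multi-tenancy requirement from the question:
        # tenant A's report must never contain tenant B's data.
        # (Assumes each returned row is a dict carrying its tenant id.)
        rows = run_report(tenant="tenant_a", measure="m1",
                          aggregation="sum", start="2010-01", end="2010-12")
        assert all(row["tenant"] == "tenant_a" for row in rows)

A scripted suite like this runs the same way every time, so the beta group spends its time on the exploratory testing scripts can't do.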

Joel