I have a persistent object with 7 pertinent fields.
The fields can hold the number of values listed here:
Field   # of Possible Values
1       5
2       20
3       2
4       2
5       19
6       2
7       8
That gives a potential 121,600 unique objects (5 × 20 × 2 × 2 × 19 × 2 × 8).
The code under test is a number of filters that grab a certain number of these objects based on the values of their fields and then put them in a bin for use by another system. The bin depositing is trivial, tested, and works properly; it's the filtering that isn't working. There seem to be many edge cases that aren't being covered, and many objects are being placed in a bin when they shouldn't be selected at all, or vice versa.
All in all, there are 9 filters which operate as a chain of responsibility, each filter putting objects in a bin until the bin is full, at which point the chain exits. The last filter in the chain is simply a 'filter' that sends an e-mail to an admin noting that the objects are running low (i.e., if the chain reached this filter, the bin isn't full and something needs to be looked at).
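For reference, the chain looks roughly like this. This is a minimal sketch only, not the actual code: PersistentObject, Bin, and the member names are stand-ins for the real types.

    using System.Collections.Generic;

    // Rough sketch of the structure only -- real class and member names differ.
    public abstract class ObjectFilter
    {
        public ObjectFilter Next { get; set; }

        public void Process(IEnumerable<PersistentObject> candidates, Bin bin)
        {
            foreach (var obj in candidates)
            {
                if (bin.IsFull)
                    return;                      // bin full: the chain exits here
                if (Matches(obj))
                    bin.Add(obj);                // deposit the object in the bin
            }

            if (Next != null)
                Next.Process(candidates, bin);   // bin not yet full: hand off to the next filter
        }

        // Each concrete filter supplies its own field-value criteria here;
        // the last "filter" in the chain just e-mails the admin instead.
        protected abstract bool Matches(PersistentObject obj);
    }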
So my problem is this: How do I test these filters? I could create one of each unique type of object using a series of for statements:
public void FixtureSetup()
{
    // field1Values, field2Values, ... are assumed collections holding every
    // allowed value for that field
    foreach (var value1 in field1Values)          // 5 possible values
    {
        foreach (var value2 in field2Values)      // 20 possible values
        {
            // ... continue nesting for fields 3 through 7,
            // creating an object for every combination of values
        }
    }
}
But trying to manually figure out which objects should be filtered out of the resulting collection (or which should end up in the collection of filtered objects) would be terribly difficult (and if that were possible, I would have done it easily when I first wrote the filters).
I'm aware that the requirements are at fault because they say something like:
filter 1 gets
- field 1: values 1/2/3
- field 2: values 2/3/4
- etc.
but the results are showing so many edge cases that each time I change a filter to include a particular case, something else breaks (and I have no regression tests to catch it), and it's difficult to find out where in the chain that particular issue occurred.
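Written as code, that 'filter 1' rule from the requirements would be a predicate roughly like the following. The field names and value arrays here are illustrative, taken only from the requirement wording above, not from the real model:

    using System.Linq;   // for Contains on the arrays

    // Illustrative predicate for the "filter 1" requirement quoted above.
    // Field1/Field2 are stand-ins for the real property names.
    static readonly int[] Filter1Field1Values = { 1, 2, 3 };
    static readonly int[] Filter1Field2Values = { 2, 3, 4 };

    static bool MatchesFilter1Requirement(PersistentObject obj)
    {
        return Filter1Field1Values.Contains(obj.Field1)
            && Filter1Field2Values.Contains(obj.Field2);
            // ... and so on for the remaining fields the requirement names
    }

The trouble is that every edge case I discover suggests a predicate like this was never the whole rule.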
Edit: I am trying to test the filters separately; however, assume the following:
Filter 1 grabs 500 of the 121,600 possible objects (according to the filter's criteria). I'm finding that some of those objects, say 100 (a complete guess), shouldn't have been grabbed, and for varying reasons. To know for sure, I'd have to go through each one with the user(s) of the other system to confirm that each filter's result set is correct. The opposite also lingers in my mind: what about all of the objects that should have been grabbed, but weren't?
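To make that concrete, a per-filter test would need both directions of the comparison, something like the NUnit-style sketch below. BuildAllCombinations, ApplyFilter1, and ShouldBeGrabbedByFilter1 are placeholder names I'm making up here (the last one being exactly the oracle I don't have), and it ignores bin capacity for the moment:

    using System.Linq;
    using NUnit.Framework;

    [Test]
    public void Filter1_GrabsExactlyTheObjectsItShould()
    {
        var allObjects = BuildAllCombinations();               // the 121,600 candidates
        var grabbed    = ApplyFilter1(allObjects);             // what the filter actually took
        var expected   = allObjects.Where(ShouldBeGrabbedByFilter1).ToList();

        var falsePositives = grabbed.Except(expected).ToList();    // grabbed, but shouldn't be
        var falseNegatives = expected.Except(grabbed).ToList();    // should be grabbed, but weren't

        Assert.IsEmpty(falsePositives);
        Assert.IsEmpty(falseNegatives);
    }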
I'm starting to think that this might be a problem in requirements gathering, and not testing.