tags:
views: 48
answers: 3

For my ColorJizz library, I'm seeing slight errors when doing multiple conversions between different color formats. The errors are all very small (e.g. 0.0001 out).

What do you think I should do about these?

I feel like there are 2 real options:

  1. Leave them as they are, with almost 30% of tests failing
  2. Put some kind of 'error range' in my unit tests and pass them if they're within that range. But how do I judge what level of error I should allow?

Here's an example of the kind of failures I'm getting:

http://www.mikeefranklin.co.uk/tests/test/

What would be the best solution?

+2  A: 

It seems you are using floating point values, for which rounding errors are a fact of life. I recommend applying an error margin for comparison checks in your unit tests.

Leaving even some of your unit tests failing is not a realistic option - unit tests should pass 100% under normal circumstances. If you let some of them fail regularly, you won't easily notice when there is a new failure, signifying a real bug in your code.
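As a concrete illustration of the margin approach, Python's built-in unittest module supports it directly via assertAlmostEqual with a delta argument (Python and the values below are used purely for illustration; they are not from the library in question):

```python
import unittest

class ColorConversionTest(unittest.TestCase):
    def test_round_trip_within_tolerance(self):
        expected = 0.5   # value before a hypothetical round-trip conversion
        actual = 0.5001  # tiny drift after several float conversions
        # assertEqual would fail here; a delta of 1e-3 absorbs the drift.
        self.assertAlmostEqual(expected, actual, delta=0.001)

# Run the single test programmatically.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ColorConversionTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```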

Péter Török
Is there any preferred approach to automatically adopting returned results as future expected results? On many implementations, if a formula that doesn't use transcendental functions doesn't change, the output shouldn't either. A tiny change in output will often not indicate a bug, but may suggest that the way something is computed has changed and should be investigated.
supercat
@supercat, I don't know of any. This should always be decided on a case by case basis - a difference which is acceptable in one case may be an error in another. Which is which should always be decided by a human.
Péter Török
+2  A: 

The error range is the standard approach for floating point "equality" tests.

NUnit uses "within":

Assert.That( 2.1 + 1.2, Is.EqualTo( 3.3 ).Within( .0005 ) );

Ruby's test/unit uses assert_in_delta:

assert_in_delta 0.05, (50000.0 / 10**6), 0.00001

And most other test frameworks have something similar. Apparently qunit is one that does not, but it would be easy enough to modify the source to include one of your own design.
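Rolling your own is only a few lines. Here is a hedged sketch of such a delta assertion in Python (the name assert_in_delta is borrowed from Ruby for familiarity; this is not part of any framework mentioned above):

```python
def assert_in_delta(expected, actual, delta=1e-4):
    """Fail unless expected and actual differ by no more than delta."""
    diff = abs(expected - actual)
    if diff > delta:
        raise AssertionError(
            "expected %r, got %r (off by %r, allowed %r)"
            % (expected, actual, diff, delta)
        )

# 0.1 + 0.2 is not exactly 0.3 in binary floating point,
# but it is comfortably within a delta of 1e-4:
assert_in_delta(0.3, 0.1 + 0.2)
```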

As for the actual delta to use, it depends on your application. I would think that 0.01 would actually be pretty restrictive for humans to visually identify color differences, but it would be a fairly lax requirement mathematically.
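One wrinkle when picking that delta: an absolute tolerance behaves differently from a relative one, especially for values near zero. Python's math.isclose (shown purely as an illustration of the distinction) supports both:

```python
import math

# A relative tolerance scales with the magnitude of the values:
# diff is 0.001, allowed is about 0.01 here.
assert math.isclose(100.001, 100.0, rel_tol=1e-4)

# Near zero a relative tolerance becomes uselessly strict,
# so an absolute tolerance is needed there:
assert not math.isclose(1e-5, 0.0, rel_tol=1e-4)
assert math.isclose(1e-5, 0.0, rel_tol=1e-4, abs_tol=1e-4)
```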

Mark Rushakoff
+1  A: 

Gallio/MbUnit has a dedicated assertion for this specific test case (Assert.AreApproximatelyEqual):

double a = 5;
double b = 4.999;
Assert.AreApproximatelyEqual(a, b, 0.001); // Pass!
Yann Trevin