I have an algorithm/computation in Java and a unit test for it. The unit test expects the result within some precision/delta. I have now ported the algorithm to .NET and would like to reuse the same unit test. I work with the double data type.
The problem is that Java uses strictfp semantics (64-bit rounding) for some operations in the Math class, whereas .NET always uses the FPU/CPU directly (80-bit x87 intermediates). .NET is more precise and faster; Java is more predictable.
Because my algorithm is cyclic and feeds each round's result into the next, the error/difference/extra precision accumulates until it becomes too large. I don't care about speed (for the unit test), and I'm happy to use .NET's precision in production, but I would like to validate the implementation.
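To illustrate what I mean by accumulation, here is a small sketch (using the chaotic logistic map as a stand-in, not my actual algorithm): a difference of a single ulp in one round grows until the two trajectories have nothing in common.

```java
public class DriftDemo {
    // absolute difference between two trajectories of the logistic map
    // x -> 4x(1-x) after n rounds, starting one ulp apart
    static double drift(int n) {
        double a = 0.3;
        double b = a + Math.ulp(a);   // smallest possible perturbation
        for (int i = 0; i < n; i++) {
            a = 4.0 * a * (1.0 - a);
            b = 4.0 * b * (1.0 - b);
        }
        return Math.abs(a - b);
    }

    public static void main(String[] args) {
        // the one-ulp gap roughly doubles every round; after 60 rounds
        // the two trajectories are completely decorrelated
        System.out.println("drift after 60 rounds = " + drift(60));
    }
}
```

The same mechanism is why a per-operation 64-bit vs 80-bit rounding difference eventually blows past any fixed delta in my unit test.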
Consider this from the JDK:
public final class Math {
    public static double atan2(double y, double x) {
        // the default implementation delegates to StrictMath
        return StrictMath.atan2(y, x);
    }
}
I'm looking for a library or technique to use strict FP in .NET.
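One validation approach I have considered (a sketch of an idea, not something I have working): dump reference results from Java's reproducible StrictMath as raw IEEE 754 bit patterns, and have the .NET unit test compare its results against that golden table bit-for-bit.

```java
public class GoldenValues {
    // exact, platform-independent encoding of a StrictMath result
    static long goldenBits(double y, double x) {
        return Double.doubleToLongBits(StrictMath.atan2(y, x));
    }

    public static void main(String[] args) {
        // print a small golden table; steps of 0.25 are exact in binary,
        // so the inputs themselves are reproducible on both platforms
        for (double y = -1.0; y <= 1.0; y += 0.25) {
            System.out.println(y + " -> " + Long.toHexString(goldenBits(y, 0.5)));
        }
    }
}
```

This only validates the library functions, though; it does not make the .NET arithmetic between the calls round to 64 bits, which is the part I'm stuck on.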
Preemptive comment: I do understand the IEEE 754 format and the fact that a floating-point number is not an exact decimal number or fraction. No Decimal, no BigInt or BigNumber. Please don't answer along those lines, thanks.