views: 142
answers: 2

How can I guarantee that floating point calculations in a .NET application (say, in C#) always produce the same bit-exact result, especially across different versions of .NET and different platforms (x86 vs. x86-64)? The absolute accuracy of the floating point operations does not matter; what matters is that the results are identical everywhere.

In Java I'd use strictfp. In C/C++ and other low-level languages this problem is essentially solved by accessing the FPU/SSE control registers, but that's probably not possible in .NET.

Even with control over the FPU control register, the .NET JIT will generate different code on different platforms. Something like HotSpot would be even worse in this case...

Why do I need it? I'm thinking about writing a real-time strategy (RTS) game that depends heavily on fast floating point math together with a lockstep simulation. Essentially I will only transmit user input across the network. The same applies to other games that implement replays by storing the user input.
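A rough sketch of what I mean (all names are invented for illustration): every client applies the same commands in the same order and advances the same simulation, so the clients only stay in sync if every floating point operation produces the same bits on every machine.

    using System.Collections.Generic;

    // Hypothetical lockstep tick: only commands travel over the network.
    struct MoveCommand { public int UnitId; public float X, Y; }

    class LockstepSim
    {
        readonly float[] posX = new float[1000];
        readonly float[] posY = new float[1000];

        public void RunTick(IEnumerable<MoveCommand> commands)
        {
            foreach (var c in commands)
            {
                // Every client runs exactly this math; any difference in
                // rounding makes the simulations drift apart silently.
                posX[c.UnitId] += (c.X - posX[c.UnitId]) * 0.1f;
                posY[c.UnitId] += (c.Y - posY[c.UnitId]) * 0.1f;
            }
        }
    }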

The following are not options:

  • decimals (too slow)
  • fixed-point values (too slow and cumbersome when you need sqrt, sin, cos, tan, atan, ...)
  • updating state across the network like an FPS does: sending position information for hundreds or a few thousand units is not feasible

Any ideas?

+2  A: 

I'm not sure of the exact answer to your question, but you could use C++: do all your float work in a C++ DLL and then return the results to .NET through interop.
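On the .NET side that could look roughly like this; "simmath.dll" and the exported functions are made-up names for a native library you would write in C/C++ with a fixed FPU/SSE configuration:

    using System.Runtime.InteropServices;

    // Hypothetical native library doing the deterministic float work.
    static class SimMath
    {
        [DllImport("simmath.dll", CallingConvention = CallingConvention.Cdecl)]
        public static extern float Sqrt(float x);

        [DllImport("simmath.dll", CallingConvention = CallingConvention.Cdecl)]
        public static extern float Atan2(float y, float x);
    }

    // Usage in the simulation: float d = SimMath.Sqrt(dx * dx + dy * dy);

Keep in mind that each P/Invoke call has some overhead, so it pays to batch the work (for example, pass whole arrays across the boundary) rather than call per operation.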

If I wrote the critical part (the simulation) of the game in C/C++, I could just as well write the whole game without .NET. Maybe that's the "right" way here.
Look at the C++ IJW ("It Just Works") interop functionality of the MSVC compiler. It lets you write what you need as unmanaged code and do the rest as managed code that interfaces smoothly with the managed parts of your game.
Ants
+1  A: 

Bit-exact results across different platforms are a pain in the a**. If you only use x86, it should not matter, because the FPU does not change from 32-bit to 64-bit. But the problem is that transcendental functions may be more accurate on newer processors.

The four basic operations should not give different results, but your VM may optimize expressions, and that can change the outcome. So, as Ants proposed, write your add/mul/div/sub routines as unmanaged code to be on the safe side.
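A managed-side detail that helps (though it does not, on its own, guarantee identical results across JITs): C# lets you force an intermediate result down to its declared precision with an explicit cast, since the cast emits a conversion that rounds away any extra precision the JIT may have kept in registers.

    // Sketch: the casts force each intermediate to be rounded to 64 bits
    // instead of possibly living in an 80-bit x87 register on x86.
    static double Dot(double ax, double ay, double bx, double by)
    {
        return (double)((double)(ax * bx) + (double)(ay * by));
    }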

For the transcendental functions, I am afraid you must use a lookup table to guarantee bit exactness. Calculate the results for, say, 4096 input values, store them as constants, and if you need a value between them, interpolate. This does not give you great accuracy, but it will be bit-exact.
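Something like this (a sketch only; in a real game the table entries would be generated once and shipped as constants, so that Math.Sin on the player's machine is never involved):

    using System;

    // Table-driven sine: 4096 samples over [0, 2*pi) plus one wrap entry,
    // with linear interpolation in between. Only +, -, * and array reads
    // are used at lookup time, so the result depends only on the table.
    static class LutMath
    {
        const int Size = 4096;
        static readonly double[] SinTable = BuildTable();

        static double[] BuildTable()
        {
            var t = new double[Size + 1];                  // +1: index i+1 never wraps
            for (int i = 0; i <= Size; i++)
                t[i] = Math.Sin(2.0 * Math.PI * i / Size); // replace with shipped constants
            return t;
        }

        public static double Sin(double x)
        {
            double u = x / (2.0 * Math.PI) * Size;         // map to table units
            u -= Math.Floor(u / Size) * Size;              // wrap into one period
            if (u >= Size) u -= Size;                      // guard against round-up
            int i = (int)u;
            double frac = u - i;
            return SinTable[i] + (SinTable[i + 1] - SinTable[i]) * frac;
        }
    }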

Thorsten S.
The four basic operations can produce different results even on a single processor when the SSE unit and the x87 FPU are not configured consistently (disable subnormal numbers for both, and make the FPU round to 32-bit or 64-bit values instead of its maximum precision). Transcendental functions can then be implemented in software on top of this controlled environment in C/C++.
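For completeness, here is roughly what pinning that configuration can look like from C# on Windows. This is a hedged sketch: it P/Invokes the CRT's _controlfp, the constants are the ones from MSVC's float.h and should be verified against your headers, the precision-control bits are ignored on x64 (where the JIT uses SSE), and the CLR may reset the control word, e.g. after exceptions.

    using System.Runtime.InteropServices;

    static class FpuControl
    {
        [DllImport("msvcrt.dll", CallingConvention = CallingConvention.Cdecl)]
        static extern uint _controlfp(uint newControl, uint mask);

        // Values as defined in MSVC's float.h (double-check them there).
        const uint MCW_PC   = 0x00030000;   // precision control mask
        const uint PC_53    = 0x00010000;   // round x87 results to 53 bits (double)
        const uint MCW_DN   = 0x03000000;   // denormal control mask
        const uint DN_FLUSH = 0x01000000;   // flush subnormals to zero

        public static void Configure()
        {
            _controlfp(PC_53, MCW_PC);      // limit x87 precision (x86 only)
            _controlfp(DN_FLUSH, MCW_DN);   // flush-to-zero for subnormals
        }
    }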