How can I guarantee that floating point calculations in a .NET application (say in C#) always produce the same bit-exact result, even across different versions of .NET and different platforms (x86 vs x86_64)? The absolute accuracy of the floating point operations doesn't matter; what matters is that every machine computes exactly the same bits.
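To illustrate the kind of divergence I mean, here is a minimal sketch. ECMA-335 allows intermediates to be evaluated at higher precision than their declared type, so the same expression can round differently depending on the JIT; whether these particular inputs actually produce a visible difference depends on the JIT and hardware:

```csharp
using System;

class Program
{
    // The legacy 32-bit JIT may keep a * b in an 80-bit x87 register
    // before the add; the 64-bit JIT uses SSE2 and rounds to 64 bits
    // after every operation. For some inputs the stored result can
    // therefore differ in the last bit(s) between platforms.
    static double Eval(double a, double b, double c)
    {
        return a * b + c;
    }

    static void Main()
    {
        // "R" round-trips the exact value, so even a 1-ulp
        // difference between machines would be visible here.
        Console.WriteLine(Eval(0.1, 0.2, 0.3).ToString("R"));
    }
}
```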
In Java I'd use strictfp. In C/C++ and other low-level languages this problem is essentially solved by setting the FPU/SSE control registers, but that's probably not possible in .NET.
Even with control over the FPU control register, the .NET JIT will generate different code on different platforms. Something like HotSpot would be even worse in this case...
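The only partial mitigation I'm aware of: ECMA-335 (Partition I, §12.1.3) requires an explicit conversion to float32/float64 (conv.r4/conv.r8) to truncate excess precision, so casting every intermediate pins down the rounding. A sketch of what that looks like:

```csharp
// Each explicit (float) cast emits conv.r4, which ECMA-335 requires to
// narrow the value to true 32-bit precision regardless of what register
// it was computed in. This pins down the arithmetic, but it costs speed
// and does nothing for Math.Sqrt, Math.Sin, etc., which map to
// platform-specific native implementations.
static float DotDeterministic(float ax, float ay, float bx, float by)
{
    return (float)((float)(ax * bx) + (float)(ay * by));
}
```

Sprinkling casts over every expression in a whole simulation is hardly practical, though, and the library-function problem remains.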
Why do I need it? I'm thinking about writing a real-time strategy (RTS) game that depends heavily on fast floating point math combined with a lock-stepped simulation, so essentially I would only transmit user input across the network. The same applies to any game that implements replays by storing the user input.
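For context: lock-step only works if every peer computes bit-identical state, and the usual way to at least detect (not prevent) divergence is to exchange a state checksum every few ticks. A minimal sketch, where the `Unit` type is a hypothetical placeholder for my simulation state:

```csharp
using System;
using System.Collections.Generic;

class Unit { public double X, Y; } // hypothetical simulation state

static class Desync
{
    // Hash the exact bit patterns of the state so that even a 1-ulp
    // floating point divergence between peers shows up as a mismatch.
    public static long StateChecksum(IEnumerable<Unit> units)
    {
        long hash = 17;
        foreach (var u in units)
        {
            hash = hash * 31 + BitConverter.DoubleToInt64Bits(u.X);
            hash = hash * 31 + BitConverter.DoubleToInt64Bits(u.Y);
        }
        return hash;
    }
}
```

This tells me when the simulations have drifted apart, but it doesn't solve the underlying problem, which is why I need the math itself to be deterministic.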
The following are not options:
- decimals (too slow)
- fixed-point values (too slow, and cumbersome for sqrt, sin, cos, tan, atan, ...; see the sketch after this list)
- updating state across the network the way an FPS does: sending position data for hundreds or a few thousand units is not feasible
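To show why I called fixed point cumbersome, here's a rough 16.16 sketch (the format and names are my own, nothing standard): `+`, `-` and `*` are easy, but every function beyond that needs a hand-rolled integer implementation.

```csharp
struct Fixed
{
    public const int Shift = 16;
    public int Raw; // value is Raw / 2^16

    public static Fixed FromInt(int v)
    {
        return new Fixed { Raw = v << Shift };
    }

    public static Fixed operator *(Fixed a, Fixed b)
    {
        // Widen to 64 bits so the product doesn't overflow, then rescale.
        return new Fixed { Raw = (int)(((long)a.Raw * b.Raw) >> Shift) };
    }

    // Bit-by-bit integer square root (assumes Raw >= 0); sin, cos and
    // atan would each need a lookup table or CORDIC routine on top.
    public static Fixed Sqrt(Fixed a)
    {
        ulong x = (ulong)a.Raw << Shift; // keep the 16.16 format in the result
        ulong res = 0, bit = 1UL << 62;
        while (bit > x) bit >>= 2;
        while (bit != 0)
        {
            if (x >= res + bit) { x -= res + bit; res = (res >> 1) + bit; }
            else res >>= 1;
            bit >>= 2;
        }
        return new Fixed { Raw = (int)res };
    }
}
```

Multiply that effort by every math function the game needs, and add the cost of emulating it all in integer code, and it's clear why I'd rather keep native floating point.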
Any ideas?