C++ allows the program to retain higher precision for intermediate results than the types of the subexpressions would imply. One thing that can happen is that intermediate expressions (or an unspecified subset of them) are computed as 80-bit extended floats.
On the other hand, I would be surprised if this applied to C#; but even if it does, the C# compiler doesn't have to choose the same subset of expressions to compute as 80-bit extended floats. EDIT: See Eric's comment below.
Another instance of the same intermediate-precision problem is when the compiler uses the `fmadd` instruction (if the target architecture has it, for instance PowerPC) for what is a multiplication followed by an addition in the source code. The `fmadd` instruction computes its intermediate result exactly, whereas a normal addition would round the intermediate result.
To prevent the C++ compiler from doing that, you should only need to write the floating-point computations as three-address code, using volatile variables for the intermediate results. If this transformation changes the result of the C++ program, it means that the above problem was at play. But then you have also changed the C++-side results, and there is probably no way to get the exact same old C++ results in C# without reading the generated assembly.
If your C++ compiler is a little bit old, it may also optimize floating-point computations as if they were associative when they are not. There is not much you can do about that: it is simply incorrect. The three-address-code transformation would again prevent the compiler from applying it, but again there is no simple way to get the C# compiler to reproduce the old C++ results.