A:

C++ allows the program to retain a higher precision for temporary results than the type of the subexpressions would imply. One thing that can happen is that intermediate expressions (or an unspecified subset of them) are computed as extended 80-bit floats.
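Here is a minimal sketch of where this can bite (the function name and constants are mine, chosen for illustration; whether the two outputs differ depends on the compiler and target, e.g. x87 versus SSE2 code generation):

```cpp
#include <cstdio>

// Whether a * b is rounded to double before the addition depends on the
// code the compiler emits: x87 code (typical of VC6-era 32-bit builds)
// may keep the product in an 80-bit register, while SSE2 code rounds it
// to 64 bits immediately.
double f(double a, double b, double c) {
    return a * b + c;
}

int main() {
    // (1 + 2^-27) squared needs 55 significand bits, so the result is
    // sensitive to the precision at which the intermediate product is held.
    double a = 1.0 + 1.0 / 134217728.0;   // 1 + 2^-27, exactly representable
    std::printf("%a\n", f(a, a, -1.0));   // 0x1p-26 with double rounding,
                                          // 0x1.0000001p-26 if a*a stayed extended
}
```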

On the other hand, I would be surprised if this applied to C#; but even if it does, the C# compiler doesn't have to choose the same subset of expressions to compute as 80-bit extended floats. EDIT: See Eric's comment below.

More details

Another instance of the same intermediate precision problem is when the compiler uses the fmadd instruction for what is a multiplication followed by an addition in the source code (if the target architecture has it—for instance, PowerPC). The fmadd instruction computes its intermediate result exactly, whereas a normal addition would round the intermediate result.
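A small illustration of the difference, using `std::fma` (standard since C++11) to stand in for the hardware instruction; the constants are mine, chosen so that the exact product needs 55 significand bits:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    double a = 134217729.0;  // 2^27 + 1
    // a * a = 2^54 + 2^28 + 1 exactly, which needs 55 significand bits,
    // so a plain multiplication rounds it down to 2^54 + 2^28.
    double separate = a * a - 18014398509481984.0;           // rounded product, then exact subtraction of 2^54
    double fused    = std::fma(a, a, -18014398509481984.0);  // exact product, a single final rounding
    std::printf("%.0f\n%.0f\n", separate, fused);            // 268435456 vs 268435457
}
```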

To prevent the C++ compiler from doing that, you should only need to write floating-point computations as three-address code using volatile variables for intermediate results. If this transformation changes the result of the C++ program, it means that the above problem was at play. But then you have changed the C++-side results. There is probably no way to get the exact same old C++ results in C# without reading the generated assembly.
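Concretely, the transformation looks like this (a sketch with illustrative names, reusing the constants from the fma example above):

```cpp
#include <cstdio>

double contracted(double a, double b, double c) {
    return a * b + c;           // may become one fmadd, or keep a*b extended
}

double three_address(double a, double b, double c) {
    volatile double t = a * b;  // forced store: t is rounded to a 64-bit double here
    return t + c;               // plain double addition of the rounded product
}

int main() {
    double a = 134217729.0;     // 2^27 + 1
    std::printf("%.0f\n%.0f\n",
                contracted(a, a, -18014398509481984.0),
                three_address(a, a, -18014398509481984.0));
    // If the two lines differ, the compiler was fusing or keeping extended
    // precision; the three-address form pins down each rounding.
}
```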

If your C++ compiler is a little old, it may also optimize floating-point computations as if they were associative when they are not. There is not much you can do about that: it is simply incorrect. The three-address code transformation would again prevent the compiler from applying this reassociation, but again there is no simple way to get the C# compiler to reproduce the old C++ results.
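For the record, here is a small example (constants chosen for illustration) of why floating-point addition is not associative, so that reassociating a sum can change its value:

```cpp
#include <cstdio>

int main() {
    double big = 9007199254740992.0;          // 2^53: the spacing between doubles is now 2
    double left  = (big + 1.0) + 1.0;         // each +1 is a tie that rounds back to 2^53
    double right = big + (1.0 + 1.0);         // big + 2 is exact
    std::printf("%.0f\n%.0f\n", left, right); // 9007199254740992 vs 9007199254740994
}
```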

Pascal Cuoq
Thank you. The C++ is VC6, and the C# is .NET 4.0.
Jason
Regarding your second paragraph, I refer you to section 4.1.6 of the C# specification, which begins: **"Floating-point operations may be performed with higher precision than the result type of the operation. For example, some hardware architectures support an 'extended' or 'long double' floating-point type with greater range and precision than the double type, and implicitly perform all floating-point operations using this higher precision type. …"** See the spec for more details.
Eric Lippert
@Eric Thanks for the information. I am surprised that Microsoft included this caveat in its specification, since I would have thought most processors had the SSE instruction set, with "real" doubles, by the time work on .NET started. Java has stronger guarantees on floating-point computations, for instance; there is a whole article co-authored by Kahan arguing that this strictness prevents some optimizations and is therefore misapplied. I never know which side he's going to be on :)
Pascal Cuoq
Remember, .NET runs on a *lot* of different platforms, from embedded devices to high end servers to mac web browsers running Silverlight. There's no common denominator to the processors really. For some more issues see for example http://stackoverflow.com/questions/2342396/why-does-this-floating-point-calculation-give-different-results-on-different-mach
Eric Lippert