I've been working on porting a legacy project from Visual Studio 6 to 2008. After jumping a few hurdles I now have the new project building and executing. However, I've noticed that the output from the two versions of the program is very slightly different, as though the floating-point calculations are not equivalent, even though the source code is identical.
These differences start out small (< 1.0E-6) but accumulate over many calculations until they have a material impact on the output. As one example, I looked at the exact double-precision storage in memory of a key variable after one of the first steps of the calculation and saw:
Visual Studio 6 representation:    0x4197D6CC85AC68D9
Decimal equivalent:                99988257.4183687120676040649414
Visual Studio 2008 representation: 0x4197D6CC85AC68EB
Decimal equivalent:                99988257.4183689802885055541992
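For anyone who wants to reproduce this comparison, something along these lines will dump the raw bit pattern from either build (the variable name and value are just placeholders for my real solver state; I'm using __int64 and %I64X because both VC6 and VC2008 understand them):

    #include <stdio.h>
    #include <string.h>

    /* Print a double as text and as its raw 64-bit pattern so the
       two builds can be compared bit-for-bit. */
    static void dump_double(const char* label, double value)
    {
        unsigned __int64 bits;
        memcpy(&bits, &value, sizeof bits);   /* copy the storage, don't convert the value */
        printf("%s = %.17g (0x%016I64X)\n", label, value, bits);
    }

    int main(void)
    {
        double key_variable = 99988257.418368712;  /* placeholder for the real solver state */
        dump_double("key_variable", key_variable);
        return 0;
    }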
I've tried to debug this to track down where the differences first appear, but the output comes from an iterative numerical solver, so tracing through it at this level of precision will be a time-consuming process.
Is anyone aware of any expected differences in double-precision arithmetic between the two compiler versions? (Or any other ideas about what might be causing this?)
For now my next step will probably be to try to create a simple demo app that reproduces the issue and is easier to examine, along the lines of the sketch below.
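The skeleton will probably look something like this: compile the identical source with both compilers, run a short loop of representative arithmetic, and print the bit pattern after each iteration so the first divergent step is obvious. The recurrence here is just an illustrative stand-in for the real solver:

    #include <stdio.h>
    #include <string.h>

    /* Return the raw 64-bit pattern of a double. */
    static unsigned __int64 bits_of(double value)
    {
        unsigned __int64 bits;
        memcpy(&bits, &value, sizeof bits);
        return bits;
    }

    int main(void)
    {
        /* Stand-in iteration mixing multiplication, division and addition,
           so any difference in intermediate rounding shows up quickly. */
        double x = 99988257.4183687;
        int i;
        for (i = 0; i < 20; ++i)
        {
            x = x * 1.0000001 + x / 3.0 - x * 0.3333333;
            printf("iter %2d: %.17g (0x%016I64X)\n", i, x, bits_of(x));
        }
        return 0;
    }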
Thanks!