I am using .NET 2.0 with PlatformTarget x64 and x86. I give Math.Exp the same input number, and it returns different results on the two platforms.

MSDN says you can't rely on a literal/parsed Double to represent the same number between platforms, but I think my use of Int64BitsToDouble below avoids this problem and guarantees the same input to Math.Exp on both platforms.

My question is: why are the results different? I would have thought that:

  • the input is stored in the same way (double/64-bit precision)
  • the FPU would do the same calculations regardless of processor's bitness
  • the output is stored in the same way

I know that in general I should not compare floating-point numbers beyond the 15th-17th significant digit, but I am confused by the inconsistency here, given what looks like the same operation on the same hardware.

Does anyone know what's going on under the hood?

double d = BitConverter.Int64BitsToDouble(-4648784593573222648L); // same as Double.Parse("-0.0068846153846153849") but with no concern about losing digits in conversion
Debug.Assert(d.ToString("G17") == "-0.0068846153846153849"
    && BitConverter.DoubleToInt64Bits(d) == -4648784593573222648L); // true on both 32 & 64 bit

double exp = Math.Exp(d);

Console.WriteLine("{0:G17} = {1}", exp, BitConverter.DoubleToInt64Bits(exp));
// 64-bit: 0.99313902928727449 = 4607120620669726947
// 32-bit: 0.9931390292872746  = 4607120620669726948
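
Incidentally, the two bit patterns above differ by exactly 1, so the results are adjacent doubles one ulp apart. A quick check using only the values printed above:

long bits64 = 4607120620669726947L; // bit pattern of the 64-bit result above
long bits32 = 4607120620669726948L; // bit pattern of the 32-bit result above
Console.WriteLine(bits32 - bits64); // 1, i.e. the two results differ by one ulp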

The results are consistent on both platforms with JIT turned on or off.

[Edit]

I'm not completely satisfied with the answers below, so here are some more details from my searching.

http://www.manicai.net/comp/debugging/fpudiff/ says that:

So 32-bit is using the 80-bit FPU registers, 64-bit is using the 128-bit SSE registers.

And the CLI Standard says that doubles can be represented with higher precision if the hardware supports it:

[Rationale: This design allows the CLI to choose a platform-specific high-performance representation for floating-point numbers until they are placed in storage locations. For example, it might be able to leave floating-point variables in hardware registers that provide more precision than a user has requested. At the same time, CIL generators can force operations to respect language-specific rules for representations through the use of conversion instructions. end rationale]

http://www.ecma-international.org/publications/files/ECMA-ST/Ecma-335.pdf (12.1.3 Handling of floating-point data types)
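
If I read the rationale right, a compiler can strip that extra precision by emitting a conversion instruction. In C#, the documented way to force this is an explicit cast to double, which rounds the value back to 64-bit storage precision. A minimal sketch of what I mean (my own illustration, not from the standard):

double x = BitConverter.Int64BitsToDouble(-4648784593573222648L);
// Without the cast, the 32-bit JIT may keep the intermediate in an 80-bit x87 register;
// the explicit (double) cast forces it to be rounded to 64-bit precision first.
double y = (double)(x * x + x);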

I think this is what is happening here, because the results only differ beyond Double's standard 15 digits of precision. Note that neither result carries extra precision in storage: the two outputs are adjacent 64-bit doubles whose bit patterns differ by exactly 1. The difference must come from the intermediate calculation, where 32-bit .NET uses the x87 FPU's 80-bit registers while 64-bit .NET uses the SSE registers, so the intermediates are rounded differently.

+2  A: 

With the Double type you will get rounding errors, because most decimal fractions have no exact (finite) binary representation. It would possibly help if you used the Decimal type.
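
For example, a classic illustration (nothing to do with Math.Exp specifically, just the general point):

Console.WriteLine(0.1 + 0.2 == 0.3);    // False: 0.1, 0.2 and 0.3 have no exact binary representation
Console.WriteLine(0.1m + 0.2m == 0.3m); // True: Decimal stores base-10 digits exactly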

Steve Ellinger
I (think) I understand that, but any rounding errors that occur on the same calculation on the same input on the same hardware should at least be consistent, right? Or is there no guarantee of that due to some other factors?
Yoshi
+2  A: 

Yes, rounding errors, and it is effectively NOT the same hardware. The 32-bit version is targeting a different instruction set and different register sizes.

winwaed
That's interesting - are you saying there is a different set of FPU instructions? Admittedly I don't know how Math.Exp is implemented, or whether it's one FPU instruction or many. And I would have thought the FPU registers would be the same on both platforms, because I'm using the 'double' type.
Yoshi
I don't know the minutiae of the .NET implementation or the x64 FPU, but I would not have expected them to be identical. You are also converting from int to double, which is introducing an error.
winwaed
I'm going to mark this as the answer because I think it provides the most detail. I found more information at this URL, which explains that 32-bit .NET is using 80-bit FPU registers, and 64-bit .NET is using 128-bit SSE registers: http://www.manicai.net/comp/debugging/fpudiff/
Yoshi
Gabe: Thanks for the typo correction!
winwaed