I've encountered an annoying problem when outputting a floating point number. When I format 11.545 with a precision of 2 decimal places on Windows it outputs "11.55", as I would expect. However, when I do the same on Linux the output is "11.54"!
I originally encountered the problem in Python, but further investigation showed that the difference is in the underlying C runtime library. (The architecture is x86-64 in both cases.) Running the following line of C produces different results on Windows and Linux, just as it does in Python.
printf("%.2f", 11.545);
To shed more light on this I printed the number to 20 decimal places ("%.20f"):
Windows: 11.54500000000000000000
Linux: 11.54499999999999992895
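For completeness, the whole test boils down to the following program (a minimal sketch; the comments reflect the outputs I saw and may of course differ on other setups):

#include <stdio.h>

int main(void)
{
    double x = 11.545;     /* the closest double is actually slightly below 11.545 */

    printf("%.2f\n", x);   /* "11.55" on Windows, "11.54" on Linux */
    printf("%.20f\n", x);  /* the 20-decimal expansions shown above */

    return 0;
}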
I know that 11.545 cannot be stored precisely as a binary number. So what appears to be happening is that Linux outputs the number as it is actually stored, with the best possible precision, while Windows outputs the simplest decimal representation of it, i.e. tries to guess what the user most likely meant.
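(One way to see what is actually stored, without trusting either runtime's decimal conversion, is C99's hexadecimal float format; this is purely a diagnostic sketch and assumes the compiler/runtime supports %a:)

#include <stdio.h>

int main(void)
{
    /* %a prints the exact bits of the double as a hexadecimal fraction,
       bypassing decimal conversion, so both platforms should agree on
       this line even though their %.2f output differs. */
    printf("%a\n", 11.545);
    return 0;
}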
My question is: is there any (reasonable) way to emulate the Linux behaviour on Windows?
(While the Windows behaviour is certainly the intuitive one, in my case I actually need to compare the output of a Windows program with that of a Linux program, and the Windows one is the only one I can change. By the way, I tried to look at the Windows source of printf, but the actual function that does the float-to-string conversion is _cfltcvt_l and its source doesn't appear to be available.)
EDIT: The plot thickens! The theory that this is caused by an imprecise representation might be wrong, because 0.125 does have an exact binary representation and the output still differs for '%.2f' % 0.125:
Windows: 0.13
Linux: 0.12
However, round(0.125, 2) returns 0.13 on both Windows and Linux.
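To illustrate the point, a small C sketch (again just a diagnostic, and again assuming %a support) confirms that 0.125 is stored exactly, which suggests the remaining difference is the tie-breaking rule each runtime applies when the value sits exactly halfway between two outputs:

#include <stdio.h>

int main(void)
{
    /* 0.125 is exactly representable (it is 2^-3): multiplying by 8 gives
       exactly 1.0, and %a shows a clean bit pattern (e.g. "0x1p-3"). */
    printf("%d\n", 0.125 * 8 == 1.0);   /* prints 1 */
    printf("%a\n", 0.125);

    /* So "0.12" vs "0.13" is purely a tie-breaking choice: rounding
       half to even gives 0.12, rounding half away from zero gives 0.13. */
    printf("%.2f\n", 0.125);

    return 0;
}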