views: 1379
answers: 7
Running a quick experiment related to "Is double Multiplication Broken in .NET?" and reading a couple of articles on C# string formatting, I thought that this:

{
    double i = 10 * 0.69;
    Console.WriteLine(i);
    Console.WriteLine(String.Format("  {0:F20}", i));
    Console.WriteLine(String.Format("+ {0:F20}", 6.9 - i));
    Console.WriteLine(String.Format("= {0:F20}", 6.9));
}

would be the C# equivalent of this C code:

{
    double i = 10 * 0.69;

    printf ( "%f\n", i );
    printf ( "  %.20f\n", i );
    printf ( "+ %.20f\n", 6.9 - i );
    printf ( "= %.20f\n", 6.9 );
}

However, the C# code produces this output:

6.9
  6.90000000000000000000
+ 0.00000000000000088818
= 6.90000000000000000000

despite i showing up in the debugger as equal to 6.89999999999999946709 (rather than 6.9),

compared with C, which shows the precision requested by the format:

6.900000
  6.89999999999999946709
+ 0.00000000000000088818
= 6.90000000000000035527

What's going on?

(Microsoft .NET Framework 3.5 SP1 / Visual Studio C# 2008 Express Edition)


I have a background in numerical computing and experience implementing interval arithmetic - a technique for estimating errors due to the limits of precision in complicated numerical systems - on various platforms. To get the bounty, don't try to explain the storage precision - in this case it's a difference of one ULP of a 64-bit double.

To get the bounty, I want to know how (or whether) .NET can format a double to the requested precision, as visible in the C code.

+4  A: 

Take a look at this MSDN reference. In the notes it states that the numbers are rounded to the number of decimal places requested.

If instead you use "{0:R}", it will produce what's referred to as a "round-trip" value; take a look at this MSDN reference for more info. Here's my code and the output:

double d = 10 * 0.69;
Console.WriteLine("  {0:R}", d);
Console.WriteLine("+ {0:F20}", 6.9 - d);
Console.WriteLine("= {0:F20}", 6.9);

output

  6.8999999999999995
+ 0.00000000000000088818
= 6.90000000000000000000
Timothy Walters
I don't want the round trip value, I want to know how to get the value rounded to the number of decimal places I ask for, which in this case would be 6.89999999999999946709.
Pete Kirkham
If I recall correctly, the double data type is accurate to 16 digits, so the number in my sample is the most accurate you'll get; any more digits than that would require a larger data type.
Timothy Walters
Here's a quote from the MSDN documentation: "By default, a Double value contains 15 decimal digits of precision, although a maximum of 17 digits is maintained internally."
Timothy Walters
I wouldn't mind if F20 printed out 6.89999999999999950000, as there aren't any intermediate values between d and 6.9, but it doesn't - it prints out a value which is neither at the requested precision, nor the precision used internally, nor the round-trip value. Very odd.
Pete Kirkham
Yes, I found that odd too. I tried a few ways to get the extra 0's on the end, but nothing worked... I guess I could do an old hack of padding, but it feels ugly (e.g. pad n x 0 where n = requestedLength - d.ToString("R").Length).
Timothy Walters
A: 

Use

Console.WriteLine(String.Format("  {0:G17}", i));

That will give you all 17 digits it has. By default, a Double value contains 15 decimal digits of precision, although a maximum of 17 digits is maintained internally. {0:R} will not always give you 17 digits: it returns 15 digits if the number can be represented with that precision, or 17 digits if the number can only be represented with maximum precision.

There isn't anything you can do to make the double return more digits; that is the way it's implemented. If you don't like it, write a new double class yourself...
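For comparison, a minimal sketch putting the three specifiers side by side (output shown as expected on .NET Framework 3.5; exact strings may vary between runtimes):

double i = 10 * 0.69;
Console.WriteLine("{0:G17}", i); // 6.8999999999999995 (up to 17 significant digits)
Console.WriteLine("{0:R}", i);   // 6.8999999999999995 (15 or 17, whichever round-trips)
Console.WriteLine("{0:F20}", i); // 6.90000000000000000000 (rounded to 15 digits first)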

.NET's double can't store any more digits than 17, so you can't see 6.89999999999999946709 in the debugger; you would see 6.8999999999999995. Please provide an image to prove us wrong.

zwi
double doesn't store 17 decimal digits - it stores 52+1 binary digits. The exact representation of 1/2^52 is 52 decimal digits long, so it should be able to output 52 decimal digits before having trailing zeros. I'm trying to get around truncation to a decimal value. The value in the debugger is as I describe - the same as is output in C.
Pete Kirkham
what I meant was "17 digits of precision"
zwi
+1  A: 

The answer to this is simple and can be found on MSDN:

Remember that a floating-point number can only approximate a decimal number, and that the precision of a floating-point number determines how accurately that number approximates a decimal number. By default, a Double value contains 15 decimal digits of precision, although a maximum of 17 digits is maintained internally.

In your example, the value of i is 6.89999999999999946709, which has the digit 9 in all positions from the 3rd to the 16th (remember to count the integer part in the digits). When converting to a string, the framework rounds the number to the 15th digit.

i     = 6.89999999999999 946709
digit =           111111 111122
        1 23456789012345 678901
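To see this rounding in action, a small sketch (the False on the first line reflects the one-ULP difference discussed in the question):

double i = 10 * 0.69;
Console.WriteLine(i == 6.9);         // False - the bit patterns differ by one ULP
Console.WriteLine(i.ToString());     // 6.9 - default formatting rounds to 15 digits
Console.WriteLine((6.9).ToString()); // 6.9 - so the two values print identically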
Marcel Gosselin
All this is roughly true, though it glosses over how IEEE double works in x87 and SSE systems, which are relevant to the sort of work I do - particularly if you can implement interval arithmetic on .NET, where making the ULP visible matters. The question is how to format the output to show the internal value to a user-specified number of places, not the rounded value which truncates it.
Pete Kirkham
+1  A: 

Hi,

I tried to reproduce your findings, but when I watched 'i' in the debugger it showed up as '6.8999999999999995', not as '6.89999999999999946709' as you wrote in the question. Can you provide steps to reproduce what you saw?

To see what the debugger shows you, you can use a DoubleConverter as in the following line of code:

Console.WriteLine(TypeDescriptor.GetConverter(i).ConvertTo(i, typeof(string))); // needs: using System.ComponentModel;

Hope this helps!

Edit: I guess I'm more tired than I thought; of course this is the same as formatting to the round-trip value (as mentioned before).

andyp
The value shows up in the debugger as equal to the value rendered by C's formatting as "6.89999999999999946709", rather than the "6.90..0" output by .NET's string formatting. 6.8999999999999995 is bitwise equal to 6.89999999999999946709, so differentiating between the two as a value isn't useful. The problem is that .NET formats 6.89999999999999946709 as "6.90...".
Pete Kirkham
After some sleep I understand now that these are just two representations of the same number (with a different number of decimal digits) and that your question really is only about formatting. I searched MSDN for a while, but could not find a way to suppress rounding when formatting, which seems to be what you want to do.
andyp
+12  A: 

The problem is that .NET will always round a double to 15 significant decimal digits before applying your formatting, regardless of the precision requested by your format and regardless of the exact decimal value of the binary number.

I'd guess that the Visual Studio debugger has its own format/display routines that directly access the internal binary number, hence the discrepancies between your C# code, your C code and the debugger.

There's nothing built-in that will allow you to access the exact decimal value of a double, or to enable you to format a double to a specific number of decimal places, but you could do this yourself by picking apart the internal binary number and rebuilding it as a string representation of the decimal value.

Alternatively, you could use Jon Skeet's DoubleConverter class (linked to from his "Binary floating point and .NET" article). This has a ToExactString method which returns the exact decimal value of a double. You could easily modify this to enable rounding of the output to a specific precision.

double i = 10 * 0.69;
Console.WriteLine(DoubleConverter.ToExactString(i));
Console.WriteLine(DoubleConverter.ToExactString(6.9 - i));
Console.WriteLine(DoubleConverter.ToExactString(6.9));

// 6.89999999999999946709294817992486059665679931640625
// 0.00000000000000088817841970012523233890533447265625
// 6.9000000000000003552713678800500929355621337890625
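For instance, a minimal sketch of such a modification (the name ToFixedExact is hypothetical, and it truncates rather than rounds, to keep carry propagation out of the sketch):

// Hypothetical helper built on DoubleConverter.ToExactString from the
// article above. Truncates the exact decimal expansion to a fixed number
// of places; proper rounding would need to carry through the 9s.
static string ToFixedExact(double value, int places)
{
    string exact = DoubleConverter.ToExactString(value);
    int dot = exact.IndexOf('.');
    if (dot < 0) { exact += "."; dot = exact.Length - 1; }
    return exact.PadRight(dot + 1 + places, '0').Substring(0, dot + 1 + places);
}

// ToFixedExact(10 * 0.69, 20) -> "6.89999999999999946709"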
LukeH
The correct output for case 1 is 6.8999999999999995; both .NET and Jon Skeet get this wrong. The trailing digits are garbage digits.
leppie
No, these are not garbage digits: they are the decimal representation of the closest binary approximation of the decimal number. This is not arbitrary, just pure math based on the IEEE-754 specification and the 52-bit fraction. The number you mention is a rounded number. For instance, the middle number is equal to `1/2^50`; in other words, it's one bit off.
Abel
They're garbage in the sense that they can be thrown away without losing anything of the value - a double value represents an interval on the real number line, not an exact value, so 6.8999999999999995 is as much a correct value as the 'exact' version, as both fall in the interval. There are three different behaviours which are all correct in some way: the decimal value with the smallest number of digits which is within the interval (the round-trip value or leppie's Scheme value), the centre of the interval (the 'exact' value above), or a range half a ULP either side of the exact value.
Pete Kirkham
Do you have a reference for ".NET will always round a double to 15 significant decimal digits before applying your formatting"?
Pete Kirkham
@Pete: All the documentation seems to *suggest* that this is the case, but admittedly I can't find anything definitive. The closest I've found is probably in the "remarks" section of the `ToString` documentation. http://msdn.microsoft.com/en-us/library/kfsatb94.aspx
LukeH
"By default, the return value only contains 15 digits of precision although a maximum of 17 digits is maintained internally. If the value of this instance has greater than 15 digits, `ToString` returns `PositiveInfinitySymbol` or `NegativeInfinitySymbol` instead of the expected number. If you require more precision, specify *format* with the "G17" format specification, which always returns 17 digits of precision, or "R", which returns 15 digits if the number can be represented with that precision or 17 digits if the number can only be represented with maximum precision."
LukeH
On *"All the documentation seems to suggest that this is the case"* (Luke) and *"reference for '.NET will always round a double to 15..'"* >> see the SSCLI or my answer further in this thread: it is actually *very deliberate*.
Abel
On *"although a maximum of 17 digits is maintained internally"* >> yes and no: internally, the IEEE-754 `double` is maintained, which is almost the same, but technically rather different. But considering the level of this discussion, I'm under the impression that everybody already knows these differences.
Abel
@Luke: "R" can sometimes return up to 19 digits. But in most cases, it will add 1 garbage digit.
leppie
@Abel, @leppie: The text that I posted in the big comment isn't mine, it's just a quote taken from the MSDN docs for `Double.ToString`. Whether that documentation is accurate or not is another matter!
LukeH
@Luke: I was fooled then by it too :)
leppie
A: 

The answer is yes, double printing is broken in .NET: they are printing trailing garbage digits.

You can read how to implement it correctly here.

I have had to do the same for IronScheme.

> (* 10.0 0.69)
6.8999999999999995
> 6.89999999999999946709
6.8999999999999995
> (- 6.9 (* 10.0 0.69))
8.881784197001252e-16
> 6.9
6.9
> (- 6.9 8.881784197001252e-16)
6.8999999999999995

Note: Both C and C# have the correct value; it's just the printing that's broken.
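As a quick C# check of that claim (a minimal sketch; BitConverter.DoubleToInt64Bits exposes the raw IEEE-754 bit pattern):

double computed = 10 * 0.69;
double parsed = 6.89999999999999946709; // the long literal from the question
Console.WriteLine(BitConverter.DoubleToInt64Bits(computed)
               == BitConverter.DoubleToInt64Bits(parsed)); // True - same double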

Update: I am still looking for the mailing list conversation I had that led up to this discovery.

leppie
+1  A: 

Though this question has meanwhile been closed, I believe it is worth mentioning how this atrocity came into existence. In a way, you may blame the C# spec, which states that a double must have a precision of 15 or 16 digits (the result of IEEE-754). A bit further on (section 4.1.6), it's stated that implementations are allowed to use higher precision. Mind you: higher, not lower. They are even allowed to deviate from IEEE-754: expressions of the type x * y / z, where x * y would yield +/-INF but would be in a valid range after dividing, do not have to result in an error. This feature makes it easier for compilers to use higher precision on architectures where that would yield better performance.

But I promised a "reason". Here's a quote (you requested a resource in one of your recent comments) from the Shared Source CLI, in clr/src/vm/comnumber.cpp:

"In order to give numbers that are both friendly to display and round-trippable, we parse the number using 15 digits and then determine if it round trips to the same value. If it does, we convert that NUMBER to a string, otherwise we reparse using 17 digits and display that."

In other words: MS's CLI Development Team decided to be both round-trippable and show pretty values that aren't such a pain to read. Good or bad? I'd wish for an opt-in or opt-out.

The trick it uses to find out the round-trippability of any given number? Conversion to a generic NUMBER structure (which has separate fields for the properties of a double) and back, then comparing whether the result is different. If it is different, the exact value is used (as in your middle value with 6.9 - i); if it is the same, the "pretty value" is used.
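In C#, that heuristic looks roughly like this (an illustrative sketch, not the actual CLR code, which lives in native comnumber.cpp):

// Sketch of the SSCLI heuristic: format at 15 significant digits; if that
// string parses back to the same double, use it, otherwise use 17 digits.
// Needs: using System.Globalization;
static string RoundTrippable(double d)
{
    string s15 = d.ToString("G15", CultureInfo.InvariantCulture);
    if (double.Parse(s15, CultureInfo.InvariantCulture) == d)
        return s15;
    return d.ToString("G17", CultureInfo.InvariantCulture);
}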

As you already remarked in a comment to andyp, 6.90...00 is bitwise equal to 6.89...9467. And now you know why 0.0...8818 is used: it is bitwise different from 0.0.

This 15-digit barrier is hard-coded and can only be changed by recompiling the CLI, by using Mono, or by calling Microsoft and convincing them to add an option to print the full "precision" (it is not really precision, but for lack of a better word). It's probably easier to just calculate the 52-bit precision yourself or use the library mentioned earlier.

EDIT: if you'd like to experiment yourself with IEEE-754 floating points, consider this online tool, which shows you all the relevant parts of a floating point.

Abel