Possible Duplicate:
Why is floating point arithmetic in C# imprecise?

I have been dealing with some numbers and C#, and the following line of code results in a different number than one would expect:

double num = (3600.2 - 3600.0);

I expected num to be 0.2; however, it turned out to be 0.1999999999998181. Is there any reason why it produces a close, but still different, result?

+2  A: 

This is because double is a floating point datatype.

If you want greater accuracy, you could switch to using decimal instead.

The literal suffix for decimal is m, so to use decimal arithmetic (and produce a decimal result) you could write your code as

var num = (3600.2m - 3600.0m);

Note that there are disadvantages to using a decimal. It is a 128-bit datatype, as opposed to the 64 bits of a double. This makes it more expensive in terms of both memory and processing. It also has a much smaller range than double.
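For illustration, here is a minimal console sketch contrasting the two types (the exact text printed for the double result may vary slightly between runtimes):

using System;

class Program
{
    static void Main()
    {
        double d = 3600.2 - 3600.0;    // binary floating point: 0.2 has no exact representation
        decimal m = 3600.2m - 3600.0m; // base-10 floating point: 0.2 is exact

        Console.WriteLine(d); // 0.199999999998181 (approximately)
        Console.WriteLine(m); // 0.2
    }
}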

AdamRalph
This has nothing to do with how much precision you have (unless you have infinite precision, of course). It is the conversion from one base to another that creates this error.
AraK
When will the misinformation about IEEE 754 types stop? It is _not_ an imprecise type! It is an exact type, but it can only represent a limited range of numbers. All numbers not represented exactly are approximated, and this is the cause of errors. If you want to express only sums of powers of two, within the range of the type, you will never lose accuracy with a floating point.
codekaizen
@AdamRalph - that is untrue about Decimal, as well. System.Decimal is a floating point type, but it is in base 10, so usual base-10 arithmetic applies. Try computing with 1/3, however, and Decimal will lose accuracy, although with its 96-bit mantissa the loss will be much smaller than with System.Double.
codekaizen
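A quick sketch of the 1/3 case codekaizen mentions (expected output shown in comments):

using System;

class Program
{
    static void Main()
    {
        decimal third = 1m / 3m;       // 0.3333333333333333333333333333 (rounded to 28 digits)
        Console.WriteLine(third * 3m); // 0.9999999999999999999999999999, not 1
    }
}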
Fine, I'll take 'imprecise' out of the answer. It's the effect that I was demonstrating, rather than the underlying cause.
AdamRalph
@codekaizen - admittedly I haven't examined this in fine detail, but I'm not sure about your assertion that System.Decimal is a floating point type. From the first line of the type's MSDN entry: "The decimal keyword denotes a 128-bit data type. Compared to floating-point types, the decimal type has a greater precision and a smaller range" - this implies that it is NOT a floating point type.
AdamRalph
@AdamRalph - I don't know how many times I've had this argument with other devs. ;) You need to read further on... "A decimal number is a floating-point value that consists of a sign, a numeric value where each digit in the value ranges from 0 to 9, and a scaling factor that indicates the position of a floating decimal point that separates the integral and fractional parts of the numeric value." http://msdn.microsoft.com/en-us/library/system.decimal.aspx
codekaizen
Note that the confusion on Decimal is usually around the base... I was confused about this too, until I read the docs for the 3rd time. System.Single and System.Double are _binary_ (base 2) floating point types, and System.Decimal is a _decimal_ (base 10) floating point type. This makes computations of powers of 10 exact with Decimal, where they are approximate with Single and Double.
codekaizen
@AdamRalph, codekaizen is absolutely correct. A decimal has a ... decimal point, and the position of this decimal point can move (from the MSDN link above: "The binary representation of a Decimal value consists of a 1-bit sign, a 96-bit integer number, and a scaling factor used to divide the 96-bit integer and specify what portion of it is a decimal fraction. The scaling factor is implicitly the number 10, raised to an exponent ranging from 0 to 28"), so it is a floating decimal point.
Charles Bretana
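A minimal sketch of the representation Charles Bretana describes, using the standard decimal.GetBits API:

using System;

class Program
{
    static void Main()
    {
        // decimal.GetBits returns four ints: the low, middle, and high parts
        // of the 96-bit integer, plus a flags word holding the sign bit and
        // the base-10 scale (an exponent from 0 to 28).
        int[] bits = decimal.GetBits(3600.2m);
        int scale = (bits[3] >> 16) & 0xFF;

        Console.WriteLine(bits[0]); // 36002 -- low part of the 96-bit integer
        Console.WriteLine(scale);   // 1     -- i.e. 3600.2 = 36002 / 10^1
    }
}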
OK, I understand what you are saying, and I accept that Decimal is a floating point datatype. It seems that the line I quoted from MSDN is enormously misleading. However, my answer (whilst not necessarily being the best answer) still holds - I don't believe it states anything false. In its current form, I don't see why anyone should feel the need to downvote it.
AdamRalph
@AdamRalph - well, initially it was very misleading, and you've edited it quite a bit. There are still some problems with it, however, in that it presents Decimal being 128 bit and Double being 64 as the reason that Decimal is better. Sure, you get more precision with the Decimal type, but it isn't a perfect solution (you can still have rounding error), and it isn't strictly because of the number of bits (the size of the significand matters). You could still have catastrophic error from subtraction, but the way you present Decimal as the best solution doesn't contain any warning about this.
codekaizen
@AdamRalph - I do agree that the MSDN entry for the type is misleading. It should say "The decimal keyword denotes a 128-bit **base-10 floating point** data type. Compared to **IEEE 754 binary** floating-point types, the decimal type has a greater precision and a smaller range, **but can still suffer from the same round-off errors and catastrophic cancellation when subtracting nearly equal numbers as its binary counterparts.**"
codekaizen
@codekaizen - I mentioned that Decimal is 128 bit as opposed to 64 bit as a *disadvantage* when compared to a Double, rather than a reason for it being a better choice. I'll try and make this clearer.
AdamRalph
@AdamRalph - but it isn't a strict disadvantage, that's the point. And it isn't necessarily an advantage either. True, in this specific case, it makes the type more precise, since it has 96 bits of significand, but it could have had 124 bits of exponent and 4 bits of significand, and then even though it would have 128 bits it wouldn't be better than Double. Unilaterally declaring that 128 bits is better, or even that more precision is better (it doesn't strictly imply more accuracy), misses the nuances, and perpetuates the misunderstanding, of floating point types, which is why I downvoted.
codekaizen
A: 

Here's a good summary from MSDN (Why Floating-Point Numbers May Lose Precision).

Taylor Leese
A: 

See Wikipedia

I can't explain it better. I can also suggest reading What Every Computer Scientist Should Know About Floating-Point Arithmetic, or see the related questions on StackOverflow.

Meinersbur
+9  A: 

Whenever this comes up, I always suggest "What Every Computer Scientist Should Know About Floating-Point Arithmetic". I haven't read it myself, and nobody I recommend it to bothers either, but nevertheless, you should read it :D

Paul Creasey
Software engineers as well as computer scientists should understand this... perhaps even more so.
codekaizen
Oh, and I've read it. It's quite good. You should read it. :D
codekaizen
It's in my favorites at work. I plan to read it; I just need more time!
Paul Creasey
A: 

Change your type to decimal:

decimal num = (3600.2m - 3600.0m);

You should also read this.

Fernando
+1  A: 

Check out the following post: http://stackoverflow.com/questions/753948/why-is-floating-point-arithmetic-in-c-imprecise

Carra
+2  A: 

For yet another article on this, refer to Jon Skeet's timeless Binary floating point and .NET.

womp
+2  A: 

Eric Lippert has some very good, if heavily technical, articles on the subject of floating point precision - http://blogs.msdn.com/ericlippert/archive/tags/Floating+Point+Arithmetic/default.aspx - and he knows a thing or two about C#.

Dan Diplo
A: 

There is a reason.

The reason is that the way the number is stored in memory, in the case of the double data type, doesn't allow for an exact representation of the number 3600.2. It also doesn't allow for an exact representation of the number 0.2.

0.2 has an infinite representation in binary. If you want to store it in memory or processor registers to perform some calculations, some number close to 0.2, with a finite representation, is stored instead. This may not be apparent if you run code like this:

double num = (0.2 - 0.0);

This is because, in this case, all the binary digits available for representing numbers in the double data type are used to represent the fractional part of the number (there is only a fractional part), so the precision is higher. If you store the number 3600.2 in an object of type double, some digits are used to represent the integer part (3600), and there are fewer digits left to represent the fractional part. The precision is lower, and the fractional part that is in fact stored in memory differs from 0.2 enough that it becomes apparent after conversion from double to string.
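A small sketch of both cases (the exact text printed for the second value may vary by runtime):

using System;

class Program
{
    static void Main()
    {
        // Only a fractional part: the stored approximation of 0.2 rounds
        // back to "0.2" when converted to a string.
        Console.WriteLine(0.2 - 0.0);       // 0.2

        // Bits are spent on the integer part 3600, so the stored fractional
        // part is a coarser approximation and the error becomes visible.
        Console.WriteLine(3600.2 - 3600.0); // 0.199999999998181 (approximately)
    }
}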

Maciej Hehl