
Hi, I'm using C# with Visual Studio.

I have a double variable called x. In the code, x gets assigned a value of 0.1.

Then I check it in an 'if' statement comparing x and 0.1:

if (x == 0.1)
{
    // ...
}

Unfortunately it does not enter the if statement.

1) Should I use Double or double?

2) What's the reason behind this? Can you suggest a solution?

+3  A: 

Comparing floating point numbers can't always be done precisely because of rounding. To compare

(x == .1)

the computer really compares

(x - .1) vs 0

The result of the subtraction cannot always be represented precisely because of how floating point numbers are represented on the machine. Therefore you get some nonzero value and the condition evaluates to false.

To overcome this, compare

Math.Abs(x - .1) vs some very small threshold (like 1E-9)
sharptooth
Could you illustrate clearly with an example? How do I need to change my statement?
stack_pointer is EXTINCT
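
To illustrate the threshold comparison suggested above, a minimal sketch (the threshold value 1e-9 and the name epsilon are arbitrary choices, not anything prescribed):

using System;

class Program
{
    static void Main()
    {
        double x = 0.1;

        // Instead of x == 0.1, treat the two values as equal when
        // they differ by less than a tiny threshold (an "epsilon").
        const double epsilon = 1e-9;

        if (Math.Abs(x - 0.1) < epsilon)
        {
            Console.WriteLine("x is close enough to 0.1");
        }
    }
}
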
+5  A: 

Use decimal. It doesn't have this "problem".
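
A minimal sketch of what that looks like (the m suffix makes the literal a decimal):

decimal x = 0.1m;

if (x == 0.1m)
{
    // This branch is taken: decimal stores 0.1 exactly.
}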

Svetlozar Angelov
A: 

Floating point representations are notoriously inexact (because of the way floats are stored internally); e.g. x may actually be 0.0999999999 or 0.100000001, and your condition will fail. If you want to determine whether floats are equal, you need to specify that they're equal to within a certain tolerance.

i.e.

if(x - 0.1 < tol)
Massif
And throw in a Math.Abs, in case x is a bit smaller than 0.1. Your code will accept x == -10.
Hans Kesting
+11  A: 

It's a standard problem due to how the computer stores floating point values. Search here for "floating point problem" and you'll find tons of information.

In short - a float/double can't store 0.1 precisely. It will always be a little off.

You can try using the decimal type which stores numbers in decimal notation. Thus 0.1 will be representable precisely.


You wanted to know the reason:

Float/double are stored as binary fractions, not decimal fractions. To illustrate:

12.34 in decimal notation (what we use) means 1*10^1 + 2*10^0 + 3*10^-1 + 4*10^-2. The computer stores floating point numbers in the same way, except it uses base 2: 10.01 means 1*2^1 + 0*2^0 + 0*2^-1 + 1*2^-2

Now, you probably know that there are some numbers that cannot be represented fully with our decimal notation. For example, 1/3 in decimal notation is 0.3333333... The same thing happens in binary notation, except that the numbers that cannot be represented precisely are different. Among them is the number 1/10. In binary notation that is 0.000110011001100...

Since the binary notation cannot store it precisely, it is stored in a rounded-off way. Hence your problem.
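
A small snippet that makes the rounding visible (the variable names are arbitrary): adding the rounded-off 0.1 ten times does not land exactly on 1.0.

double sum = 0.0;
for (int i = 0; i < 10; i++)
{
    sum += 0.1;              // each addition carries a tiny rounding error
}

Console.WriteLine(sum == 1.0);   // False
Console.WriteLine(1.0 - sum);    // a tiny non-zero remainder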

Vilx-
+1  A: 

Double and double are identical.

For the reason, see http://www.yoda.arachsys.com/csharp/floatingpoint.html. In short: a double is not an exact type, and a minute difference between "x" and "0.1" will throw it off.

Hans Kesting
+2  A: 

Exact comparison of floating point values is known not to always work, due to rounding and internal representation issues.

Try imprecise comparison:

if (x >= 0.099 && x <= 0.101)
{
}

The other alternative is to use the decimal data type.

Developer Art
+1  A: 

double and Double are the same. double is a C# alias for System.Double, and the two can be used interchangeably.

The problem with comparing a double to another value is that doubles are approximate values, not exact values. So when you set x to 0.1, it may in reality be stored as 0.100000001 or something like that. So instead of checking for equality you should check that the difference is less than a defined minimum difference, something like if (Math.Abs(x - 0.1) < 0.0000001).

Rune Grimstad
A: 

1) Should I use Double or double?

Double and double are the same thing. double is just a C# keyword that works as an alias for System.Double. The most common practice is to use the aliases. The same goes for string (System.String) and int (System.Int32).

Also see Built-In Types Table (C# Reference)
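
As a quick sanity check (just a snippet), the aliases and the full names refer to the very same types:

Console.WriteLine(typeof(double) == typeof(System.Double));  // True
Console.WriteLine(typeof(string) == typeof(System.String));  // True
Console.WriteLine(typeof(int) == typeof(System.Int32));      // True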

Lars Corneliussen
A: 

Double (called float in some languages) is fraught with problems due to rounding issues; it's good only if you need approximate values.

The Decimal data type does what you want.

For reference, decimal and Decimal are the same in .NET C#, as are double and Double; each pair refers to the same type (decimal and double are very different from each other though, as you've seen).

Beware that the Decimal data type has some costs associated with it, so use it with caution if you're looking at loops etc.

Timothy Walters
+3  A: 

From the documentation:

Precision in Comparisons

The Equals method should be used with caution, because two apparently equivalent values can be unequal due to the differing precision of the two values. The following example reports that the Double value .3333 and the Double returned by dividing 1 by 3 are unequal.

...

Rather than comparing for equality, one recommended technique involves defining an acceptable margin of difference between two values (such as .01% of one of the values). If the absolute value of the difference between the two values is less than or equal to that margin, the difference is likely to be due to differences in precision and, therefore, the values are likely to be equal. The following example uses this technique to compare .33333 and 1/3, the two Double values that the previous code example found to be unequal.

So if you really need a double, you should use the technique described in the documentation. If you can, change it to a decimal. It will be slower, but you won't have this type of problem.
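
A rough sketch of that technique, with the helper name AboutEqual and the .01% margin chosen purely for illustration (.33333 and 1/3 are the values from the documentation's example):

using System;

class Program
{
    // Treat a and b as equal when they differ by no more than a margin
    // proportional to the magnitude of a (.01% here).
    static bool AboutEqual(double a, double b)
    {
        double margin = Math.Abs(a) * 0.0001;
        return Math.Abs(a - b) <= margin;
    }

    static void Main()
    {
        double oneThird = 1.0 / 3.0;

        Console.WriteLine(oneThird.Equals(.33333));       // False
        Console.WriteLine(AboutEqual(oneThird, .33333));  // True
    }
}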

Alfred Myers
A: 

As a general rule:

Double representation is good enough in most cases but can fail miserably in some situations. Use decimal values if you need complete precision (as in financial applications).

Most problems with doubles don't come from direct comparison; they tend to result from the accumulation of several math operations, which compound rounding and fractional errors (especially multiplications and divisions).

Check your logic. If the code is:

x = 0.1

if (x == 0.1)

it should not fail; it's too simple to fail. If the value of x is calculated by more complex means or operations, it's quite possible the ToString method used by the debugger is applying smart rounding, and maybe you can do the same (if that's too risky, go back to using decimal):

if (x.ToString() == "0.1")
Jorge Córdoba
A: 

A comprehensive read on the subject:

What every computer scientist should know about floating-point arithmetic

Serge - appTranslator