views: 218
answers: 9
I understand that floating-point arithmetic as performed in modern computer systems is not always consistent with real arithmetic. I am trying to contrive a small C# program to demonstrate this. For example:

static void Main(string[] args)
{
    double x = 0, y = 0;

    x += 20013.8;
    x += 20012.7;

    y += 10016.4;
    y += 30010.1;

    Console.WriteLine("Result: " + x + " " + y + " " + (x == y));
    Console.Write("Press any key to continue . . . ");
    Console.ReadKey(true);
}

However, in this case, x and y are equal in the end.

Is it possible for me to demonstrate the inconsistency of floating point arithmetic using a program of similar complexity, and without using any really crazy numbers? I would like, if possible, to avoid mathematically correct values that go more than a few places beyond the decimal point.

+1  A: 

Try making it so the decimal part is not .5 (halves are exactly representable in binary, so they never show rounding error).

Take a look at this article here

http://floating-point-gui.de/

Justen
This looked promising, but they were still equal when I made it .4
mcoolbeth
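For what it's worth, a pair of one-decimal sums in the question's own style that does come out unequal (a sketch assuming IEEE-754 doubles, which C# uses):

```csharp
using System;

class TenthsDemo
{
    static void Main()
    {
        double x = 0, y = 0;

        x += 0.1;
        x += 0.2;   // 0.1 and 0.2 are both inexact in binary,
                    // and their rounding errors do not cancel
        y += 0.3;

        // x == y is false: x and y differ in the last bit
        Console.WriteLine("Result: " + x + " " + y + " " + (x == y));
    }
}
```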
A: 

Try summing a VERY big and a VERY small number. The small one will be absorbed, and the result will be the same as the large number alone.
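A minimal sketch of this absorption effect, using 1e16 and 1.0 as the example values (any addend smaller than half the gap between adjacent doubles at that magnitude will do):

```csharp
using System;

class AbsorptionDemo
{
    static void Main()
    {
        double big = 1e16;    // adjacent doubles at this magnitude are 2 apart
        double small = 1.0;   // smaller than half that gap, so it vanishes

        double sum = big + small;

        Console.WriteLine(sum == big);   // True: the 1.0 was absorbed
    }
}
```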

Andrey
+5  A: 
double x = (0.1 * 3) / 3;
Console.WriteLine("x: {0}", x); // prints "x: 0.1"
Console.WriteLine("x == 0.1: {0}", x == 0.1); // prints "x == 0.1: False"

Remark: don't conclude from this that floating-point arithmetic is unreliable in .NET; it is perfectly deterministic, it just doesn't always match exact decimal arithmetic.

Darin Dimitrov
http://codepad.org/AqxdMz8Z
jsumners
A: 

A double is accurate to roughly 15 significant digits. With only a few floating-point operations, you need values that push past that precision before problems show up.
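A quick sketch of that limit, using 1.0 plus perturbations around the 16th decimal digit (the spacing of doubles near 1.0 is about 2.2e-16):

```csharp
using System;

class PrecisionDemo
{
    static void Main()
    {
        // 1e-16 is below half the spacing of doubles near 1.0, so it is lost:
        Console.WriteLine(1.0 + 1e-16 == 1.0);   // True

        // 2e-16 is above half the spacing, so it survives the rounding:
        Console.WriteLine(1.0 + 2e-16 == 1.0);   // False
    }
}
```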

Billy ONeal
A: 

Try performing repeated operations on an irrational number (such as a square root) or on a repeating fraction with a long period. You'll quickly see errors accumulate. For instance, compute 1000000*Sqrt(2) vs. Sqrt(2)+Sqrt(2)+...+Sqrt(2).
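A sketch of that comparison: the repeated sum rounds a million times, while the multiplication rounds once, so the two almost certainly drift apart.

```csharp
using System;

class AccumulationDemo
{
    static void Main()
    {
        double root2 = Math.Sqrt(2.0);

        double sum = 0.0;
        for (int i = 0; i < 1000000; ++i)
            sum += root2;          // each addition rounds, and the errors pile up

        double product = 1000000.0 * root2;   // a single rounding step

        Console.WriteLine(sum == product);
        Console.WriteLine(sum - product);     // the accumulated drift
    }
}
```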

Dan Bryant
A: 

The simplest I can think of right now is this:

class Test
{
    private static void Main()
    {
        double x = 0.0;

        for (int i = 0; i < 10; ++i)
            x += 0.1;

        Console.WriteLine("x = {0}, expected x = {1}, x == 1.0 is {2}", x, 1.0, x == 1.0);
        Console.WriteLine("Allowing for a small error: x == 1.0 is {0}", Math.Abs(x - 1.0) < 0.001);
    }
}
IVlad
A: 

I suggest that, if you're truly interested, you take a look at any one of a number of pages that discuss floating-point numbers, some in gory detail. You will soon realize that, in a computer, they're a compromise, trading off accuracy for range. If you are going to be writing programs that use them, you do need to understand their limitations and the problems that can arise if you don't take care. It will be worth your time.

Larry
+2  A: 

Here's an example based on a prior question that demonstrates float arithmetic not working out exactly as you would think.

float f = (13.45f * 20);
int x = (int)f;
int y = (int)(13.45f * 20);
Console.WriteLine(x == y);

In this case, false is printed to the screen. Why? Because of where the math is performed versus where the cast to int happens. For x, the multiplication is performed in one statement and the result is stored to f, rounding it to float precision, before it is cast to an integer. For y, the result of the calculation is never stored before the cast, so it can be kept at a higher intermediate precision. (In x, some precision is lost between the calculation and the cast; that is not the case for y.)

For an explanation behind what's specifically happening in float math, see this question/answer. http://stackoverflow.com/questions/2491161/why-differs-floating-point-precision-in-c-when-separated-by-parantheses-and-when/2494724#2494724

Anthony Pegram
+2  A: 

My favourite demonstration boils down to

double d = 0.1;
d += 0.2;
d -= 0.3;

Console.WriteLine(d);

The output is not 0; it is on the order of 5.5E-17, the leftover rounding error.

AakashM