Hello everyone,

Sorry if this is a dumb question, but I could not find an answer.

#include <iostream>

using namespace std;

int main()
{
    double a(0);
    double b(0.001);
    cout << a - 0.0 << endl;      // a is exactly 0, so this prints 0
    for (; a < 1.0; a += b);      // ~1000 increments of 0.001
    cout << a - 1.0 << endl;      // accumulated error after ~1000 steps
    for (; a < 10.0; a += b);     // ~9000 more increments
    cout << a - 10.0 << endl;     // the loop overshoots 10.0 by about one step
    cout << a - 10.0 - b << endl; // error once that extra step is removed
    return 0;
}

Output:
0
6.66134e-16
0.001
-1.03583e-13

I tried compiling it with MSVC9, MSVC10, and Borland C++ 2010. All of them end up with an error of about 1e-13. Is it normal to get such significant error accumulation over only 1,000 or 10,000 increments?

+10  A: 

Yes, this is normal floating-point representation error. It has to do with the fact that the hardware must approximate most floating-point numbers rather than storing them exactly, so the compiler you use should not matter.

What Every Computer Scientist Should Know About Floating-Point Arithmetic

WhirlWind
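To see the representation error directly, here is a minimal sketch (plain standard C++; setprecision just asks for more digits than a double actually holds) that prints the value really stored for 0.001:

#include <iostream>
#include <iomanip>

int main()
{
    // 0.001 has no finite binary representation, so the double
    // holds only the nearest representable value.
    std::cout << std::setprecision(20) << 0.001 << std::endl;
    // Prints something like 0.0010000000000000000208...
    return 0;
}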
Thanks, I do understand truncation error and all the rest. I've done a lot of theoretical work on numerical methods, but I had never checked it myself and was very surprised to find out how large the error is...
Andrew
Well, a double gives you roughly 16 decimal digits of precision, so that's about right. When you loop 1000 times, you're down to 13 or so digits of precision.
WhirlWind
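For reference, the standard library reports these digit counts directly; a small sketch (max_digits10 requires C++11):

#include <iostream>
#include <limits>

int main()
{
    // 15 decimal digits always survive a round trip through a double;
    // 17 digits are needed to reproduce any double exactly (IEEE 754 binary64).
    std::cout << std::numeric_limits<double>::digits10 << std::endl;     // 15
    std::cout << std::numeric_limits<double>::max_digits10 << std::endl; // 17
    return 0;
}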
Mind that it's particularly painful only with numbers that have no finite representation in base 2. If you use powers of two you'll be fine. E.g. if your addend were 0.5, 0.25, 0.125, 0.0625, etc., the sums would eventually reach *exactly* 1.0, as these values in base 2 are 0.1, 0.01, 0.001, and 0.0001.
dash-tom-bang
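A quick sketch to verify this point: summing the exactly representable 0.0625 sixteen times hits 1.0 exactly, while summing 0.001 a thousand times does not:

#include <iostream>

int main()
{
    // 0.0625 = 2^-4 is exact in binary; every partial sum is too.
    double exact = 0.0;
    for (int i = 0; i < 16; ++i) exact += 0.0625;
    std::cout << (exact == 1.0) << std::endl;   // prints 1: the sum is exact

    // 0.001 is not exact in binary, so the errors accumulate.
    double inexact = 0.0;
    for (int i = 0; i < 1000; ++i) inexact += 0.001;
    std::cout << (inexact == 1.0) << std::endl; // prints 0: the sum has drifted
    return 0;
}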
@dash-tom-bang Good point, though I'd argue if you're doing that sort of thing, you might be better off fitting into integer arithmetic.
WhirlWind
@WhirlWind: somewhere deep in my heart I was expecting it to accumulate more slowly than linearly due to the random nature of the error, but now I realize that was just a fantasy :)
Andrew
Well, if you've got a fixed number of decimal places, integers will always win. E.g. don't compute US dollars in floating point; represent them as integers where 1 is one cent, or maybe some fraction of a cent.
dash-tom-bang
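As a hedged sketch of that idea (the variable names here are made up for illustration), dollar amounts held as integer cents add exactly:

#include <iostream>
#include <iomanip>

int main()
{
    // $19.99 + $0.01 in integer cents: no rounding anywhere.
    long priceCents = 1999;
    long taxCents   = 1;
    long totalCents = priceCents + taxCents;
    std::cout << totalCents / 100 << "."
              << std::setw(2) << std::setfill('0') << totalCents % 100
              << std::endl; // prints 20.00
    return 0;
}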
+1  A: 
Joey Adams
+2  A: 

This is why, when working with floating point numbers, you should never do:

if( foo == 0.0 ){
    //code here
}

and instead do:

bool checkFloat(float _input, float _compare, float _epsilon){
    // true when _input lies strictly within _epsilon of _compare
    return ( _input + _epsilon > _compare ) && ( _input - _epsilon < _compare );
}
wheaties
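An equivalent way to write the same test, as a sketch (the name nearlyEqual is mine, not from the answer), is with std::fabs, which also avoids the intermediate additions:

#include <cmath>

// Equivalent to checkFloat above: true when the two values
// differ by strictly less than epsilon.
bool nearlyEqual(float a, float b, float epsilon)
{
    return std::fabs(a - b) < epsilon;
}

// Usage: if (nearlyEqual(foo, 0.0f, 1e-6f)) { ... }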
Checking against zero is fine in many circumstances, since zero can be represented exactly in hardware, but of course it depends on what you're doing with it... Starting at 1 and subtracting 0.1 ten times will not land on 0, of course.
dash-tom-bang
That's very true but in most cases it's just not good practice. I like erring on the side of caution.
wheaties
erring on the side of caution... funny ;) Or perhaps erring under the size of epsilon.
WhirlWind
+2  A: 

Think about it this way: every operation introduces a slight error, and the next operation uses the slightly faulty result, so given enough iterations you will deviate from the true result. If you like, write your expressions in the form t0 = (t + y + e), t1 = (t0 + y + e), and collect the terms in epsilon; from those terms you can estimate the approximate error.

There is also a second source of error: towards the end, you are combining relatively small and relatively large numbers. Recall the definition of machine precision: for any e below the machine epsilon, 1 + e == 1, so at some point the additions start losing significant bits.

Hopefully this helps to clarify things in layman's terms.

aaa
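To make the machine-precision point concrete, a small sketch using std::numeric_limits:

#include <iostream>
#include <limits>

int main()
{
    double eps = std::numeric_limits<double>::epsilon();
    std::cout << eps << std::endl;                    // ~2.22045e-16 for double

    // An increment well below the epsilon is lost entirely...
    std::cout << (1.0 + eps / 4 == 1.0) << std::endl; // prints 1
    // ...while the epsilon itself still registers.
    std::cout << (1.0 + eps == 1.0) << std::endl;     // prints 0
    return 0;
}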