Hello! I'm wondering how calculators deal with precision. For example, the value of sin(M_PI) is not exactly zero when computed in double precision:
#include <math.h>
#include <stdio.h>

int main(void) {
    double x = sin(M_PI);
    printf("%.20f\n", x); // prints 0.00000000000000012246
    return 0;
}
Now I would certainly want to print zero when the user enters sin(π). I could easily round at around 1e-15 to make this particular case work, but that's a hack, not a solution. Once I start rounding like this, a user who enters something like 1e-20 gets zero back (because of the rounding). The same thing happens when the user enters 1/10 and hits the = key repeatedly: as soon as they cross the rounding threshold, they get zero.
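For illustration, here is a minimal sketch of the naive fixed-threshold rounding I mean (the cutoff 1e-15 and the helper name snap_to_zero are just illustrative choices, not an actual implementation):

#include <math.h>
#include <stdio.h>

/* Naive fix: snap anything closer to zero than a fixed epsilon to zero. */
static double snap_to_zero(double x) {
    const double eps = 1e-15; /* arbitrary example cutoff */
    return (fabs(x) < eps) ? 0.0 : x;
}

int main(void) {
    printf("%g\n", snap_to_zero(sin(M_PI))); /* 0, as desired */
    printf("%g\n", snap_to_zero(1e-20));     /* 0, but the user meant 1e-20 */
    return 0;
}

The second line shows the problem exactly: a fixed absolute cutoff cannot tell a rounding artifact from a legitimately tiny value.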
And yet some calculators return a plain zero for sin(π) while still handling expressions such as (1e-20)/10 comfortably. Where's the trick?
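For contrast, plain double arithmetic itself handles that expression fine; it's only the zero-snapping hack above that would destroy it:

#include <stdio.h>

int main(void) {
    printf("%g\n", 1e-20 / 10); /* prints 1e-21, no precision problem here */
    return 0;
}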