I did this just for kicks (so, not exactly a question, I can see the downmodding happening already), but in light of Google's newfound inability to do math correctly (check it! according to Google, 500,000,000,000,002 - 500,000,000,000,001 = 0), I figured I'd try the following in C to test a little theory.
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* atof returns a double; storing it in a float drops precision */
    const char *a = "399999999999999";
    const char *b = "399999999999998";
    float da = atof(a);
    float db = atof(b);
    printf("%s - %s = %f\n", a, b, da - db);

    a = "500000000000002";
    b = "500000000000001";
    da = atof(a);
    db = atof(b);
    printf("%s - %s = %f\n", a, b, da - db);
    return 0;
}
When you run this program, you get the following output:
399999999999999 - 399999999999998 = 0.000000
500000000000002 - 500000000000001 = 0.000000
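Those zeros make sense once you look at how coarse single-precision floats are at this magnitude: a float carries a 24-bit significand, so around 4-5 × 10^14 the gap between adjacent representable values is 2^25 = 33,554,432, and consecutive integers collapse to the same float. Here's a quick sketch to see the gap for yourself, using nextafterf from math.h (you may need to link with -lm):

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* nearest float to 5e14; the exact integer is not representable */
    float x = 500000000000001.0f;
    float next = nextafterf(x, INFINITY);  /* the very next float up */
    printf("x    = %.0f\n", (double)x);
    printf("next = %.0f\n", (double)next);
    printf("gap  = %.0f\n", (double)(next - x));  /* prints 33554432 */
    return 0;
}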
It would seem Google is using plain 32-bit (single-precision) floating point, which is exactly the error here: if you switch float to double in the above code, the problem goes away! Could this be it?
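For the record, here's roughly what the fixed version looks like. Since atof already returns a double, simply not truncating it to float is enough; a double's 53-bit significand represents every integer up to 2^53 (about 9 × 10^15) exactly:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *a = "500000000000002";
    const char *b = "500000000000001";
    double da = atof(a);  /* keep the full double precision this time */
    double db = atof(b);
    printf("%s - %s = %f\n", a, b, da - db);  /* prints ... = 1.000000 */
    return 0;
}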
/mp