0.1 + 0.2 == 0.3
// returns false
0.1 + 0.2
// returns 0.30000000000000004
Any ideas why this happens?
All floating point math is like this; it is based on the IEEE 754 standard.
You should never compare floating-point values with ==. Instead, compare the absolute value of their difference and check that it is smaller than an epsilon value, a very small tolerance:
x = 0.1 + 0.2;
y = 0.3;
equal = (Math.abs(x - y) < 0.000001) // true
For the exact reason why, please read the paper What Every Computer Scientist Should Know About Floating-Point Arithmetic (also linked in other answers).
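If you do this comparison in several places, it may help to wrap it in a small helper. A minimal sketch, assuming a fixed tolerance is acceptable for your data (the name nearlyEqual and the default epsilon are illustrative, not a standard API):
function nearlyEqual(a, b, epsilon) {
  // Illustrative helper: treat a and b as equal when they
  // differ by less than epsilon.
  epsilon = epsilon || 0.000001;
  return Math.abs(a - b) < epsilon;
}
nearlyEqual(0.1 + 0.2, 0.3); // true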
Floating point rounding errors. 0.1 cannot be represented exactly in base-2, because 10 has a prime factor of 5 that 2 lacks. Just as 1/3 takes an infinite number of digits to represent in decimal but is "0.1" in base-3, 0.1 takes an infinite number of digits in base-2 where it does not in base-10. And computers don't have an infinite amount of memory.
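You can see the repeating binary expansion directly in JavaScript; toString(2) prints the exact bits of the stored double, which is the nearest representable value to 0.1:
(0.1).toString(2)
// "0.0001100110011001100110011001100110011001100110011001101"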
Floating point variables typically have this behaviour. It's caused by how they are stored in hardware: in binary, with a fixed number of bits.
For more info check out the Wikipedia article on floating point numbers.
JavaScript stores all numbers as double-precision floating point, which means operations like addition can be subject to rounding error.
You might want to take a look at this article: What Every Computer Scientist Should Know About Floating-Point Arithmetic
Floating point rounding error. From http://docs.sun.com/source/806-3568/ncg_goldberg.html:
Squeezing infinitely many real numbers into a finite number of bits requires an approximate representation. Although there are infinitely many integers, in most programs the result of integer computations can be stored in 32 bits. In contrast, given any fixed number of bits, most calculations with real numbers will produce quantities that cannot be exactly represented using that many bits. Therefore the result of a floating-point calculation must often be rounded in order to fit back into its finite representation. This rounding error is the characteristic feature of floating-point computation.
When you convert .1 or 1/10 to base 2 (binary) you get a repeating pattern after the decimal point, just like trying to represent 1/3 in base 10. The value is not exact, and therefore you can't do exact math with it using normal floating point methods.
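You can observe the inexact stored value by asking for more digits than the default display shows; here toFixed(20) reveals the nearest double to 0.1:
(0.1).toFixed(20)
// "0.10000000000000000555"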
Try rounding it off using toFixed(). Note that toFixed() returns a string, so the loose == comparison coerces it back to a number:
(0.1 + 0.2).toFixed(1) == 0.3 // true
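If you need a strict === comparison or a numeric result, parse the string back first (a sketch assuming one decimal place of precision is enough for your values):
parseFloat((0.1 + 0.2).toFixed(1)) === 0.3 // true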
Don't forget the comp.lang.javascript FAQ which covers this and many other questions.
Yes, it's 'broken', and there is a proposal to fix it in a future version of the language by adding support for decimal numeric values.
In addition to the other correct answers, you may want to consider scaling your values to avoid problems with floating-point arithmetic.
For example:
var result = 1.0 + 2.0; // result === 3.0 returns true
... instead of:
var result = 0.1 + 0.2; // result === 0.3 returns false
The expression 0.1 + 0.2 === 0.3 returns false in JavaScript, but fortunately integer arithmetic in floating point is exact, so decimal representation errors can be avoided by scaling.
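For these particular values, scaling both operands by ten happens to yield exact integers (a sketch; in general, construct the integers directly rather than relying on multiplication to land on them):
(0.1 * 10 + 0.2 * 10) / 10 === 0.3
// true: 1 + 2 = 3 is exact, and 3 / 10 rounds to the same double as the literal 0.3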
As a practical example, to avoid floating-point problems where accuracy is paramount, it is recommended[1] to handle money as an integer representing the number of cents: 2550 cents instead of 25.50 dollars.
[1] Douglas Crockford: JavaScript: The Good Parts, Appendix A, Awful Parts (page 105).
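A small sketch of the cents approach (the variable names and amounts are illustrative):
var priceCents = 2550;                       // $25.50 held as an integer
var taxCents = 204;                          // $2.04 (illustrative amount)
var totalCents = priceCents + taxCents;      // 2754, exact integer arithmetic
var display = (totalCents / 100).toFixed(2); // "27.54", format only at output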
All numbers in JavaScript are represented in binary as IEEE 754 doubles, which provide about 15 to 17 significant decimal digits of precision. Because they are floating point numbers, they do not always exactly represent real numbers, including many decimal fractions.
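One way to see the precision limit is with integers: a double has a 53-bit significand, so 2^53 is the first integer whose successor cannot be represented:
Math.pow(2, 53)     // 9007199254740992
Math.pow(2, 53) + 1 // 9007199254740992 — the +1 is lost beyond 53 bits
Math.pow(2, 53) + 2 // 9007199254740994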
I asked this question myself with different wording because I didn't know to search for "floating-point". I would suggest tagging it with "decimals", "fractions", and "addition"/"subtraction". If I search for "javascript adding decimals inaccurate", the search mechanism isn't smart enough to equate "decimals" with "floating-point" or "adding" with "math".
A solution to tidy up the unsightly trailing digits:
function strip(number) {
  // Round to 12 significant digits, then drop any trailing zeros
  // by converting the string back to a number.
  return parseFloat(number.toPrecision(12));
}
Using toPrecision(12) leaves trailing zeros, which parseFloat() removes. Assume the result is accurate to plus or minus one in the least significant digit.
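For example:
strip(0.1 + 0.2) // 0.3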