Floating point values are inexact.
This is pretty much the answer to the question: there is only finite precision, which means that some numbers cannot be represented exactly.
Some languages support arbitrary-precision numeric types, rationals, complex numbers, and so on at the language level, but JavaScript does not. Neither do C and Java.
An IEEE 754 standard floating point value cannot represent e.g. 0.1 exactly. This is why numerical calculations involving cents etc. must be done very carefully. Sometimes the solution is to store values in cents as integers instead of in dollars as floating point values.
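For example, in JavaScript (the 8.25% tax rate below is just a made-up value for illustration):

```javascript
// Classic symptom of binary floating point:
console.log(0.1 + 0.2);          // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);  // false

// Safer approach for money: keep amounts as integer cents.
const priceCents = 1999;                           // $19.99 stored as 1999 cents
const taxCents = Math.round(priceCents * 0.0825);  // round explicitly, once
const totalCents = priceCents + taxCents;          // 1999 + 165 = 2164
console.log((totalCents / 100).toFixed(2));        // "21.64" (convert to dollars only for display)
```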
"Floating" point concept, analog in base 10
To see why floating point values are imprecise, consider the following analogy:
- You only have enough memory to remember 5 digits
- You want to be able to represent values in as wide a range as practically possible
In representing integers, you can represent values in the range of -99999 to +99999. Values outside that range would require you to remember more than 5 digits, which (for the sake of this example) you can't do.
Now you may consider a fixed-point representation, something like abc.de. Now you can represent values in the range of -999.99 to +999.99, with up to 2 digits after the decimal point, e.g. 3.14, -456.78, etc.
Now consider a floating point version. In your resourcefulness, you came up with the following scheme:
n = abc × 10^de
Now you can still remember only 5 digits, a, b, c, d, and e, but you can now represent a much wider range of numbers, even non-integers. For example:
- 123 × 10^0 = 123.0
- 123 × 10^3 = 123,000.0
- 123 × 10^6 = 123,000,000.0
- 123 × 10^-3 = 0.123
- 123 × 10^-6 = 0.000123
This is how the name "floating point" came into being: the decimal point "floats around" in the above examples.
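Incidentally, JavaScript's exponent notation for number literals spells out this same significand-times-power-of-ten idea:

```javascript
// Number literals written as significand and base-10 exponent:
console.log(123e0);   // 123
console.log(123e3);   // 123000
console.log(123e-3);  // 0.123
// Internally, though, the engine stores these as base-2 floating point values (see below).
```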
Now you can represent a wide range of numbers, but note that you can't represent 0.1234. Neither can you represent 123,001.0. In fact, there are a lot of values that you can't represent.
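If it helps to see that in code, here is a rough JavaScript sketch of the 5-digit scheme above (the function name toToyFloat and the rounding behaviour are my own choices for illustration, not part of any standard):

```javascript
// Toy base-10 float: a 3-digit significand (abc) and a 2-digit exponent (de).
function toToyFloat(x) {
  if (x === 0) return { significand: 0, exponent: 0, value: 0 };
  // Pick the exponent so the significand rounds to 3 digits (100..999).
  let exponent = Math.floor(Math.log10(Math.abs(x))) - 2;
  let significand = Math.round(x / Math.pow(10, exponent));
  // Rounding may overflow the significand to 1000; renormalize if so.
  if (Math.abs(significand) >= 1000) {
    significand = Math.round(significand / 10);
    exponent += 1;
  }
  return { significand, exponent, value: significand * Math.pow(10, exponent) };
}

console.log(toToyFloat(0.1234)); // { significand: 123, exponent: -3, value: ≈0.123 } -> the 4 is rounded away
console.log(toToyFloat(123001)); // { significand: 123, exponent: 3, value: 123000 }  -> 123,001 becomes 123,000
```

The real thing works the same way, just in base 2 and with more digits: whatever doesn't fit in the significand gets rounded away.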
This is pretty much why floating point values are inexact. They can represent a wide range of values, but since you are limited to a fixed amount of memory, you must sacrifice precision for magnitude.
More technicalities
The abc is called the significand, a.k.a. coefficient or mantissa. The de is the exponent, a.k.a. scale or characteristic. As usual, the computer uses base 2 instead of 10. In addition to remembering the "digits" (bits, really), it must also remember the signs of the significand and the exponent.
A single-precision floating point type usually uses 32 bits; a double-precision type usually uses 64 bits.
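Since a JavaScript number is a 64-bit IEEE 754 double (1 sign bit, 11 exponent bits, 52 significand bits), you can peek at those fields with a DataView. The helper name doubleBits below is made up for this sketch:

```javascript
// Split a 64-bit double into its sign, exponent, and significand fields.
function doubleBits(x) {
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x);
  const bits = view.getBigUint64(0);
  return {
    sign: Number(bits >> 63n),
    exponent: Number((bits >> 52n) & 0x7ffn),  // stored with a bias of 1023
    significand: (bits & 0xfffffffffffffn).toString(2).padStart(52, "0"),
  };
}

console.log(doubleBits(0.1));
// { sign: 0,
//   exponent: 1019,   (1019 - 1023 = -4, i.e. a factor of 2^-4)
//   significand: "1001100110011001100110011001100110011001100110011010" }
// The endlessly repeating "1001" pattern is why 0.1 cannot be stored exactly in base 2.
```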