Floating-point arithmetic arose because it is the only way to operate on a large range of non-integer numbers at a reasonable hardware cost. Arbitrary-precision arithmetic is built into several languages (Python, Lisp, etc.) and libraries (Java's BigInteger/BigDecimal, GMP, etc.), and is an alternative for those who need more accuracy (e.g. the finance industry). For most of the rest of us, who deal with medium-size numbers, floats (or certainly doubles) are more than accurate enough. The two floating-point datatypes exist (corresponding to IEEE 754 single and double precision, respectively) because a single-precision floating-point unit has much better area, power, and speed properties than a double-precision unit, so hardware designers and programmers should make appropriate tradeoffs to exploit these different properties.
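As a rough illustration of the accuracy gap between the two types (a minimal standard-C sketch, not tied to any particular library; the output comments assume a typical IEEE 754 platform), the following stores the same constant in both types and accumulates it in a loop:

```c
#include <stdio.h>

int main(void) {
    /* The same decimal constant stored in single and double precision. */
    float  f = 0.1f;   /* roughly 7 significant decimal digits   */
    double d = 0.1;    /* roughly 15-16 significant decimal digits */

    /* Printing many digits exposes the different rounding errors. */
    printf("float : %.20f\n", f);   /* about 0.10000000149011611938 */
    printf("double: %.20f\n", d);   /* about 0.10000000000000000555 */

    /* Accumulated error grows much faster in single precision. */
    float  fsum = 0.0f;
    double dsum = 0.0;
    for (int i = 0; i < 1000000; i++) {
        fsum += 0.1f;
        dsum += 0.1;
    }
    printf("float  sum: %f\n", fsum);  /* drifts noticeably from 100000 */
    printf("double sum: %f\n", dsum);  /* stays very close to 100000   */
    return 0;
}
```

The single-precision sum visibly drifts away from 100000 while the double-precision sum stays within rounding distance of it, which is the accuracy side of the accuracy-versus-hardware-cost tradeoff described above.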