System.out.println((26.55f/3f));
or
System.out.println((float)( (float)26.55 / (float)3.0 ));
etc.
returns the result 8.849999, not 8.85 as it should.
Can anyone explain this or should we all avoid using floats?
Well, we should all avoid using floats wherever realistic, but that's a story for another day.
The issue is that floating-point numbers cannot exactly represent most values we think of as trivial to write down. 8.85 cannot be represented exactly by a float, nor by a double, because these types don't store decimal digits at all; they use a binary representation.
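You can see the value a float actually stores by handing it to the BigDecimal(double) constructor, which preserves the exact binary value instead of a rounded string; a minimal sketch:

import java.math.BigDecimal;

// BigDecimal(double) keeps the exact binary value of its argument,
// so it reveals what the float really holds.
System.out.println(new BigDecimal(26.55f)); // 26.549999237060546875
System.out.println(new BigDecimal(8.85f));  // 8.8500003814697265625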
Take a look at Wikipedia's article on Floating Point, specifically the Accuracy Problems section.
The fact that floating-point numbers cannot precisely represent all real numbers, and that floating-point operations cannot precisely represent true arithmetic operations, leads to many surprising situations. This is related to the finite precision with which computers generally represent numbers.
The article features a couple of examples that should provide more clarity.
Explaining is easy: floating point is a binary format, so it can only exactly represent values that are an integer multiple of 1.0 / 2^N for some natural integer N. 26.55 does not have this property, therefore it cannot be represented exactly.
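You can check this property directly: 0.75 is 3 * (1.0 / 2^2) and so is stored exactly, while 26.55 is not a multiple of any 1.0 / 2^N; a minimal sketch:

// 0.75 is an integer multiple of 1.0 / 2^2, so the float and double values agree exactly
System.out.println(0.75f == 0.75);   // true
// 26.55 is not an integer multiple of any 1.0 / 2^N, so the float and
// double approximations are two different nearby values
System.out.println(26.55f == 26.55); // false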
If you need exact representation (e.g. your code is about accounting and money, where every fraction of a cent matters), then you must indeed avoid floats in favor of other types that do guarantee exact representation of the values you need. Depending on your application, just doing all accounting in terms of integer numbers of cents might suffice. Floats (when used appropriately and advisedly!) are perfectly fine for engineering and scientific computations, where the input values are never "infinitely precise" in any case, so the computationally cumbersome burden of exact representation is not worth carrying.
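A minimal sketch of both alternatives, using Java's standard java.math.BigDecimal (constructed from a String so the decimal value is captured exactly, never from a float or double):

import java.math.BigDecimal;
import java.math.RoundingMode;

// Exact decimal arithmetic: build values from Strings
BigDecimal amount = new BigDecimal("26.55");
System.out.println(amount.divide(new BigDecimal("3"), 2, RoundingMode.HALF_UP)); // 8.85

// Alternative: keep all accounting in integer cents
long cents = 2655;
System.out.println(cents / 3); // 885 cents, i.e. 8.85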
What Every Programmer Should Know About Floating-Point Arithmetic:
Q: Why don’t my numbers, like 0.1 + 0.2 add up to a nice round 0.3, and instead I get a weird result like 0.30000000000000004?
A: Because internally, computers use a format (binary floating-point) that cannot accurately represent a number like 0.1, 0.2 or 0.3 at all.
In-depth explanations are available at the linked site.
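The same behavior is easy to reproduce with Java doubles; a minimal sketch:

System.out.println(0.1 + 0.2);        // 0.30000000000000004
System.out.println(0.1 + 0.2 == 0.3); // false
// comparisons should therefore use a tolerance rather than ==
System.out.println(Math.abs((0.1 + 0.2) - 0.3) < 1e-9); // true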