Why is this?
Because floating-point numbers are stored in binary, and in binary 0.3 is the repeating fraction 0.01001100110011001..., just as 1/3 is the repeating 0.333333... in decimal. When you write 0.3, what you actually get is 0.299999999999999988897769753748434595763683319091796875, the closest value a double can represent.
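You can see the exact stored value for yourself. For example, in Python (assuming that's your language; most languages have some equivalent trick), converting the float to a `Decimal` shows every digit of the binary value:

```python
from decimal import Decimal

# Decimal(float) captures the exact binary value the float stores,
# with no further rounding.
print(Decimal(0.3))
# 0.299999999999999988897769753748434595763683319091796875
```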
Keep in mind that, for the applications floating-point is designed for, it's not a problem that you can't represent 0.3 exactly. Floating-point was designed to be used with:
- Physical measurements, which are often measured to only 4 sig figs and never to more than 15.
- Transcendental functions like logarithms and the trig functions, which are only approximated anyway.
In both cases, binary-decimal conversion error is pretty much irrelevant compared to the other sources of error.
Now, if you're writing financial software, for which $0.30 means exactly $0.30, it's different. There are decimal arithmetic classes designed for this situation.
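For example, Python ships one such class in its `decimal` module (other languages have equivalents, such as Java's `BigDecimal`). A minimal sketch, assuming Python:

```python
from decimal import Decimal

# Construct Decimals from strings so the values are taken exactly as
# written, with no binary rounding at any point.
total = Decimal('0.10') + Decimal('0.20')
print(total)                      # 0.30, exactly
print(total == Decimal('0.30'))   # True

# The same sum in binary floating point is only approximately 0.3:
print(0.1 + 0.2 == 0.3)           # False
```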
And how do you get a correct result in this case?
Limiting the precision to 15 significant digits is usually enough to hide the "noise" digits. Unless you actually need an exact answer, this is usually the best approach.
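As a rough illustration, again assuming Python, formatting the result to 15 significant digits rounds the noise away:

```python
x = 0.1 + 0.2
print(repr(x))            # 0.30000000000000004 -- the noise digit shows
print(format(x, '.15g'))  # 0.3 -- rounded to 15 significant digits
```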