I just cannot figure out the following code.
int d = 5;
float f = 3.8f;
int ret = d*f;
ret is 18, not 19 as I expected. Why?
There are two issues here. The first is that floating-point values are binary, not decimal, so 3.8f is really the nearest representable float, 3.79999995231628417968750, which is just under 3.8. The second is that when converting to an int, the conversion always truncates the fractional part rather than rounding.
So if you could peer into the processor, you'd see it doing
3.79999995231628417968750 * 5 = 18.99999976158142089843750
Then you remove the non-integer portion:
18
You're running into IEEE 754 limitations: f is not exactly 3.8, it's just ever so slightly less.
When multiplying an integer by a floating-point number, if you don't cast explicitly, you are relying on the implicit conversion rules of the language you are using. Some languages truncate the fractional part when converting to an int, whereas others round it. If you don't know what to expect, it's always better to cast explicitly and round yourself, so you know exactly what you are getting.