I have a situation where I need to find out how many times an int goes into a decimal, but in certain cases, I'm losing precision. Here is the method:

public int test(double decimalAmount, int divisor) {
  // e.g. decimalAmount = 1.2, divisor = 5: expect 6, but get 5
  return (int) (decimalAmount / (1d / divisor));
}

The problem is that if I pass in 1.2 as the decimal amount and 5 as the divisor, I get 5 instead of 6. How can I restructure this so I know how many times 5 goes into the decimal amount as an int?

+3  A: 
public int test(double decimalAmount, int divisor) {
  return (int) (divisor * decimalAmount);
}

Obviously, this is just multiplying and then truncating. Why do you think you need this method?
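
For the question's inputs this version does return 6: the single multiplication involves one floating-point rounding, whereas decimalAmount / (1d / divisor) rounds twice (once for 1d / divisor, once for the division). A quick check:

System.out.println((int) (5 * 1.2));        // 6: the product rounds to exactly 6.0
System.out.println((int) (1.2 / (1d / 5))); // 5: the quotient comes out just under 6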

Matthew Flaschen
+3  A: 

The result of the computation is likely something like 5.999999999994, as the floating-point result is not necessarily exactly an integer. When you cast to an int, you truncate, and the result is 5. For another similar set of arguments, you might see a result of 6.0000000000002, and it would give you 6.
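
You can see this by printing the intermediate value before the cast:

double raw = 1.2 / (1d / 5);
System.out.println(raw);       // prints something like 5.999999999999999
System.out.println((int) raw); // truncation gives 5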

But the fact that this is so sensitive to floating-point representation calls into question why you want this in the first place -- maybe there's a better way to do what you're trying to do. For example, as the answer above shows, the expression simplifies to divisor * decimalAmount, and then it's even less clear what the point of the method is.

To really do what I think you imagine you want to do, you need to not use doubles or floats. You need java.math.BigDecimal -- look up how it works. You can use it to exactly multiply 1.2 by 5 and get 6.
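
A minimal sketch of that approach (note that BigDecimal.valueOf(double) goes through the double's canonical string form, here "1.2", whereas new BigDecimal(1.2) would preserve the inexact binary value):

import java.math.BigDecimal;

public int test(double decimalAmount, int divisor) {
  // valueOf(1.2) yields exactly 1.2; multiplying by 5 gives exactly 6.0
  BigDecimal amount = BigDecimal.valueOf(decimalAmount);
  return amount.multiply(BigDecimal.valueOf(divisor)).intValue();
}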

Sean Owen
+1  A: 

Floating-point operations are not exact; they are approximations. This operation could give you, for instance, 5.999999999999 as the result. Casting it to int results in 5 (instead of 6). Try rounding instead of truncating:

public int test(double decimalAmount, int divisor) {
  // Math.round(double) returns a long, so cast back to int
  return (int) Math.round(divisor * decimalAmount);
}
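
With this change, test(1.2, 5) returns 6, because the near-6 intermediate result is rounded to the nearest whole number instead of being truncated.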
Mnementh