Floating-point numbers cannot precisely represent all decimal numbers within their range. For example, 0.9 is not exactly 0.9: it's a number really close to 0.9 that winds up being printed as 0.9 in most cases. As you do floating-point calculations, these errors can accumulate, and you wind up with something very close to the right number but not exactly equal to it. For example, 0.3 * 3 == 0.9 will return false. This is the case in every computer language you will ever use; it's just how binary floating-point math works. See, for example, this question about Haskell.
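You can see this for yourself in irb (the digits below are what IEEE 754 doubles actually produce):

0.3 * 3          # => 0.8999999999999999
"%.20f" % 0.9    # => "0.90000000000000002220"
0.3 * 3 == 0.9   # => false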
To test for floating-point equality, you generally want to check whether the number is within some tiny tolerance of the target. So, for example:
def float_equal(a, b)
  # True when a and b differ by less than the tolerance
  (a - b).abs < 0.00001
end
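With that helper, the comparison from above behaves the way you'd expect:

float_equal(0.3 * 3, 0.9)   # => true
0.3 * 3 == 0.9              # => false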
You can also use the BigDecimal class in Ruby to represent decimal numbers exactly, with arbitrary precision.
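For example (BigDecimal ships with Ruby's standard library; construct from strings so no float rounding sneaks in):

require 'bigdecimal'

# Decimal values are represented exactly, so == works as expected
BigDecimal("0.3") * 3 == BigDecimal("0.9")   # => true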
If this is a test case, you can use assert_in_delta:
def test_some_object_total_is_calculated_correctly
  assert_in_delta 22.23, some_object.total, 0.01
end
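assert_in_delta passes whenever the expected and actual values differ by no more than the given delta (0.01 here). A minimal self-contained sketch, assuming Minitest and stubbing out some_object (both the Struct and the 7.41 * 3 value are made up for illustration):

require 'minitest/autorun'

class TotalTest < Minitest::Test
  # Hypothetical stand-in for the real object under test
  SomeObject = Struct.new(:total)

  def test_some_object_total_is_calculated_correctly
    some_object = SomeObject.new(7.41 * 3) # accumulates floating-point error
    assert_in_delta 22.23, some_object.total, 0.01
  end
end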