Do we need to be careful when comparing a double value against zero?
if ( someAmount <= 0){
.....
}
The short answer is yes. For the longer answer, Google for "double comparison".
Depending on how your someAmount is computed, you may see some odd behaviour with float/double.
Basically, converting decimal values to their binary float/double representation is error prone, because some numbers cannot be represented exactly with a mantissa/exponent.
For some details about this you can read this small article
You should consider using java.lang.Math.signum or java.math.BigDecimal, especially for currency and tax computations.
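For instance, here is a minimal sketch of the BigDecimal approach (the amount "19.99" is made up purely for illustration):
import java.math.BigDecimal;

BigDecimal someAmount = new BigDecimal("19.99"); // construct from a String, not from a double
if (someAmount.compareTo(BigDecimal.ZERO) <= 0) {
    // zero or negative -- the comparison is exact, with no binary rounding involved
}
// signum() is the BigDecimal analogue of Math.signum: it returns -1, 0 or 1
if (someAmount.signum() <= 0) {
    // same test, written with signum
}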
If you want to be really careful, you can test whether it is within some epsilon of zero with something like
double epsilon = 0.0000001;
if (f <= -epsilon) { /* f is definitely negative */ }
else if (f >= epsilon) { /* f is definitely positive */ }
else { /* f "equals" zero */ }
Or you can simply round your doubles to some specified precision before branching on them.
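For example, a rough sketch of the rounding approach (rounding to two decimal places here is an arbitrary choice):
double rounded = Math.round(someAmount * 100.0) / 100.0; // round to 2 decimal places
if (rounded <= 0) {
    // anything that rounds to 0.00 or below is treated as non-positive
}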
For some interesting details about the errors involved in comparing floating point numbers, here is an article by Bruce Dawson.
Watch out for auto-unboxing:
Double someAmount = null;
if ( someAmount <= 0){
Boom, NullPointerException.
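If someAmount can legitimately be null, one way to avoid the unboxing is a null check first:
if (someAmount != null && someAmount <= 0) {
    // the unboxing comparison only runs when someAmount is non-null
}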
For equality (i.e. == or !=): yes. For the other comparative operators (<, >, <=, >=), it depends on whether you care about the edge cases, e.g. whether < is equivalent to <=, which is another case of equality. If you don't care about the edge cases, it usually doesn't matter, though it depends on where your input numbers come from and how they are used.
If you are expecting (3.0/10.0) <= 0.3 to evaluate as true (it may not, if floating point error causes 3.0/10.0 to evaluate to a number slightly greater than 0.3, like 0.300000000001), and your program will behave badly if it evaluates as false -- that's an edge case, and you need to be careful.
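A concrete illustration of that kind of edge case, using 0.1 + 0.2 (neither 0.1 nor 0.2 has an exact binary representation):
double sum = 0.1 + 0.2;
System.out.println(sum);        // prints 0.30000000000000004
System.out.println(sum <= 0.3); // prints false, although it looks like it should obviously be true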
Good numerical algorithms should almost never depend on equality and edge cases. If I have an algorithm which takes as an input x, where x is any number between 0 and 1, in general it shouldn't matter whether 0 < x < 1 or 0 <= x <= 1. There are exceptions, though: you have to be careful when evaluating functions with branch points or singularities.
If I have an intermediate quantity y and I am expecting y >= 0, and I evaluate sqrt(y), then I have to be certain that floating-point errors do not cause y to be a very small negative number and the sqrt() function to throw an error. (This assumes a situation where complex numbers are not involved.) If I'm not sure about the numerical error, I would probably evaluate sqrt(max(y, 0)) instead.
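In Java that defensive version might look like this (note that Math.sqrt returns NaN rather than throwing for negative inputs, but the clamping idea is the same):
// clamp tiny negative values caused by rounding error to zero before taking the root
double root = Math.sqrt(Math.max(y, 0.0));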
For expressions like 1/y or log(y), in a practical sense it doesn't matter whether y is zero (in which case you get a singularity error) or y is a number very near zero (in which case you'll get a very large number out, whose magnitude is very sensitive to the value of y) -- both cases are "bad" from a numerical standpoint, and I need to reevaluate what it is I'm trying to do and what behavior I'm looking for when y values are in the neighborhood of zero.
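A quick illustration of why a y very near zero is just as awkward as y == 0 (the value 1e-12 is picked arbitrarily):
double y = 1e-12;
System.out.println(1.0 / y);     // about 1.0E12, and it grows without bound as y shrinks
System.out.println(Math.log(y)); // about -27.6, diverging to -infinity as y -> 0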
If you don't care about the edge cases, then just test for someAmount <= 0. It makes the intent of the code clear. If you do care, well... it depends on how you calculate someAmount and why you're testing for the inequality.