views: 3235 · answers: 7

Do we need to be careful when comparing a double value against zero?

if (someAmount <= 0) {
    .....
}
A: 

The short answer is yes. For the longer answer, Google for double comparison.

Hemal Pandya
A: 

See BigDecimal.

KG
This is not completely true. There are definitely more operations that can cause the value not to be exact when using floating point calculations. For instance, 0.3-0.2-0.1 will not evaluate to exactly 0, but rather to -0.000000000000000027755575615628914, which in turn is != 0.
norheim.se
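A two-line check in plain Java reproduces this:

System.out.println(0.3 - 0.2 - 0.1);        // prints -2.7755575615628914E-17
System.out.println(0.3 - 0.2 - 0.1 == 0.0); // prints false
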
Pretty much *any* operation in floating point can give confusing results. Addition and subtraction can certainly do it - particularly if the numbers being added (or subtracted) are at very different extremes, e.g. 1e10 + 1e-10.
Jon Skeet
@norheim: In your particular examples, the problem isn't in the subtraction operation - it's in the conversion of "0.3" (etc) to doubles to start with. If you look at the *exact* values being subtracted, I think you'll see exact subtraction.
Jon Skeet
Then what do you recommend? Say someAmount was given as a string and was stored in a double using Double.parseDouble?
I believe the typical approach is to use some tolerance value and test for the difference being smaller than that.
Hemal Pandya
Use BigDecimal, if it's appropriate.
Adeel Ansari
Agreed. BigDecimal would definitely be necessary for any computations.
KG
@KG I disagree. http://quantdev.blog.co.uk/2009/09/08/the-myth-of-bigdecimal-6919916/
quant_dev
+3  A: 

Depending on how your someAmount is computed, you may run into some odd behaviour with floats/doubles.

Basically, converting numeric data to a binary floating point representation using float/double is error-prone, because many decimal numbers cannot be represented exactly with a mantissa/exponent.
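
For instance, the literal 0.1 already carries rounding error before any arithmetic happens; the BigDecimal(double) constructor exposes the exact value that gets stored:

import java.math.BigDecimal;

// the exact binary value stored for the double literal 0.1
System.out.println(new BigDecimal(0.1));
// prints 0.1000000000000000055511151231257827021181583404541015625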

For some details about this, you can read this short article.

You should consider using java.lang.Math.signum or java.math.BigDecimal, especially for currency and tax computations.
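
A sketch of the BigDecimal route, assuming the amounts arrive as strings (construct from String, not from double, so no binary rounding creeps in):

import java.math.BigDecimal;

BigDecimal someAmount = new BigDecimal("0.30")
        .subtract(new BigDecimal("0.20"))
        .subtract(new BigDecimal("0.10"));

// use compareTo(), not equals(): equals() also compares scale, so 0.0 != 0.00
if (someAmount.compareTo(BigDecimal.ZERO) <= 0) {
    // reached: the result is exactly zero, unlike the plain double version
}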

WiseTechi
Interesting article, but seemed a bit dated. I liked this quote: "In most computers, floating point arithmetic is usually much slower than integer arithmetic, though on the Intel Pentium it is usually faster because the integer unit was not given the same care as the floating point unit." a lot. :)
unwind
@unwind: but that's pretty much the only part that's no longer relevant. Everything except the performance is unchanged since then.
Joachim Sauer
+4  A: 

If you want to be really careful, you can test whether it is within some epsilon of zero with something like:

double epsilon = 0.0000001;
if (f <= -epsilon) {
    // f is negative
} else if (f >= epsilon) {
    // f is positive
} else {
    // f "equals" zero
}

Or you can simply round your doubles to some specified precision before branching on them.
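
For example, a minimal sketch of the rounding approach, with six decimal places as an arbitrary choice of precision:

double rounded = Math.round(f * 1e6) / 1e6; // Math.round returns a long
if (rounded <= 0) {
    .....
}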

For some interesting details about error when comparing floating point numbers, here is an article by Bruce Dawson.

Crashworks
+1  A: 

Watch out for auto-unboxing:

Double someAmount = null;
if (someAmount <= 0) { // auto-unboxing calls someAmount.doubleValue() on null

Boom, NullPointerException.
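
A guard, assuming a null amount should simply fail the test:

if (someAmount != null && someAmount <= 0) {
    // unboxing only happens after the null check succeeds
}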

Thilo
+2  A: 

For equality (i.e. == or !=): yes.

For the other comparative operators (<, >, <=, >=), it depends on whether you care about the edge cases, e.g. whether < is equivalent to <=, which is another case of equality. If you don't care about the edge cases, it usually doesn't matter, though it depends where your input numbers come from and how they are used.

If you are expecting (3.0/10.0) <= 0.3 to evaluate as true (it may not if floating point error causes 3.0/10.0 to evaluate to a number slightly greater than 0.3 like 0.300000000001), and your program will behave badly if it evaluates as false -- that's an edge case, and you need to be careful.
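
This particular edge is easy to demonstrate with a sum whose rounding error is well known; 0.1 + 0.2 lands just above 0.3:

System.out.println(0.1 + 0.2);        // prints 0.30000000000000004
System.out.println(0.1 + 0.2 <= 0.3); // prints false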

Good numerical algorithms should almost never depend on equality and edge cases. If I have an algorithm which takes as an input 'x' which is any number between 0 and 1, in general it shouldn't matter whether 0 < x < 1 or 0 <= x <= 1. There are exceptions, though: you have to be careful when evaluating functions with branch points or singularities.

If I have an intermediate quantity y and I am expecting y >= 0, and I evaluate sqrt(y), then I have to be certain that floating-point errors do not cause y to be a very small negative number and the sqrt() function to throw an error. (Assuming this is a situation where complex numbers are not involved.) If I'm not sure about the numerical error, I would probably evaluate sqrt(max(y,0)) instead.
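
In Java terms (where Math.sqrt of a negative argument returns NaN rather than throwing), the tiny negative result from the comments above illustrates both the problem and the guard:

double y = 0.3 - 0.2 - 0.1;                      // -2.7755575615628914E-17, not 0.0
System.out.println(Math.sqrt(y));                // prints NaN
System.out.println(Math.sqrt(Math.max(y, 0.0))); // prints 0.0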

For expressions like 1/y or log(y), in a practical sense it doesn't matter whether y is zero (in which case you get a singularity error) or y is a number very near zero (in which case you'll get a very large number out, whose magnitude is very sensitive to the value of y) -- both cases are "bad" from a numerical standpoint, and I need to reevaluate what it is I'm trying to do, and what behavior I'm looking for when y values are in the neighborhood of zero.

Jason S
A: 

If you don't care about the edge cases, then just test for someAmount <= 0. It makes the intent of the code clear. If you do care, well... it depends on how you calculate someAmount and why you're testing for the inequality.

quant_dev