If I compare two floating-point numbers, are there cases where a>=b is not equivalent to b<=a and !(a<b), or where a==b is not equivalent to b==a and !(a!=b)?

A: 

Nope, not for any sane floating-point implementation: basic symmetry and boolean logic apply. However, equality of floating-point numbers is tricky in other ways. There are very few cases where testing a==b for floats is the reasonable thing to do.
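
For example, one classic illustration (in Python, purely for demonstration) of why exact equality is usually the wrong test:

>>> 0.1 + 0.2 == 0.3
False
>>> 0.1 + 0.2
0.30000000000000004

None of 0.1, 0.2, or 0.3 is exactly representable in binary floating point, and the rounding errors on the two sides of the comparison do not cancel.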

Pontus Gagge
+2  A: 

In Python, at least, a>=b is not equivalent to !(a<b) when a NaN is involved:

>>> a = float('nan')
>>> b = 0
>>> a >= b
False
>>> not (a < b)
True

I would imagine that this is also the case in most other languages.

Another thing that might surprise you is that NaN doesn't even compare equal to itself:

>>> a == a
False
Mark Byers
+1: Any system that uses IEEE floating point will have this odd behavior (or it will throw exceptions at you for producing such an erroneous value; after all, it indicates where arithmetic has gone badly wrong, such as the result of dividing 0.0 by 0.0).
Donal Fellows
+2  A: 

The set of IEEE-754 floating-point numbers is not totally ordered, so some of the relational and boolean algebra you are familiar with no longer holds. The anomaly is caused by NaN, which has no ordering with respect to any other value in the set, including itself, so every relational comparison against it returns false. This is exactly what Mark Byers has shown.

If you exclude NaN, you have a totally ordered set, and the expressions you provided will always be equivalent. This includes the infinities and negative zero.
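
A quick Python check of that claim (nan here is just a locally constructed NaN):

>>> nan = float('nan')
>>> nan < 0, nan <= 0, nan > 0, nan >= 0, nan == nan
(False, False, False, False, False)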

deadc0de
+1  A: 

Aside from the NaN issue, which is somewhat analogous to NULL in SQL and to missing values in SAS and other statistical packages, there is always the problem of floating-point accuracy. Values with a repeating fractional expansion (1/3, for example) and irrational numbers cannot be represented exactly. Floating-point arithmetic often rounds or truncates results because of its finite precision, and the more arithmetic you do with a floating-point value, the larger the error that creeps in.
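
To see the accumulation in action (again in Python; the particular values are only an illustration):

>>> sum(0.1 for _ in range(10))
0.9999999999999999
>>> sum(0.1 for _ in range(10)) == 1.0
False

Ten additions of the nearest double to 0.1 drift just far enough from 1.0 for exact equality to fail.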

Probably the most useful way to compare floating-point values is with an algorithm like the following:

  1. If either value is NaN, all comparisons are false, unless you are explicitly checking for NaN.
  2. If the difference between two numbers is within a certain "fuzz factor", consider them equal. The fuzz factor is your tolerance for accumulated mathematical imprecision.
  3. After the fuzzy equality comparison, compare for less than or greater than.

Note that comparing for "<=" or ">=" carries the same risk as comparing for precise equality.
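
A minimal Python sketch of those three steps (the name fuzzy_compare and the absolute tolerance are illustrative assumptions; a relative tolerance is often more appropriate when values can be far from 1):

import math

def fuzzy_compare(a, b, tolerance=1e-9):
    # Step 1: NaN makes every comparison fail.
    if math.isnan(a) or math.isnan(b):
        return 'unordered'
    # Step 2: treat values within the fuzz factor as equal.
    if abs(a - b) <= tolerance:
        return 'equal'
    # Step 3: only then compare for less than or greater than.
    return 'less' if a < b else 'greater'

>>> fuzzy_compare(0.1 + 0.2, 0.3)
'equal'
>>> fuzzy_compare(float('nan'), 1.0)
'unordered'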

Cylon Cat
+1  A: 

Assuming IEEE-754 floating-point:

  • a >= b is always equivalent to b <= a.*
  • a >= b is equivalent to !(a < b), unless one or both of a or b is NaN.
  • a == b is always equivalent to b == a.*
  • a == b is equivalent to !(a != b), unless one or both of a or b is NaN.
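
The two symmetry claims are easy to spot-check in Python (the sample values below are arbitrary; the NaN exception to the !(a < b) equivalence is the one shown in Mark Byers's REPL session above):

nan = float('nan')
samples = [0.0, -0.0, 1.5, float('inf'), float('-inf'), nan]
for a in samples:
    for b in samples:
        assert (a >= b) == (b <= a)  # a >= b is always equivalent to b <= a
        assert (a == b) == (b == a)  # a == b is always equivalent to b == a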

More generally: trichotomy does not hold for floating-point numbers. Instead, a related property holds [IEEE-754 (1985) §5.7]:

Four mutually exclusive relations are possible: less than, equal, greater than, and unordered. The last case arises when at least one operand is NaN. Every NaN shall compare unordered with everything, including itself.

Note that this is not really an "anomaly" so much as a consequence of extending the arithmetic to be closed in a way that attempts to maintain consistency with real arithmetic when possible.
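
The quoted rule maps directly onto a tiny classifier (the function name fp_relation is just for illustration; math.isnan is the standard Python test for NaN):

import math

def fp_relation(a, b):
    # Exactly one of the four IEEE-754 relations applies to any pair.
    if math.isnan(a) or math.isnan(b):
        return 'unordered'
    if a < b:
        return 'less than'
    if a > b:
        return 'greater than'
    return 'equal'

>>> fp_relation(float('nan'), float('nan'))
'unordered'
>>> fp_relation(-0.0, 0.0)
'equal'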

[*] true in abstract IEEE-754 arithmetic. In real usage, some compilers might cause this to be violated in rare cases as a result of doing computations with extended precision (MSVC, I'm looking at you). Now that most floating-point computation on the Intel architecture is done on SSE instead of x87, this is less of a concern (and it was always a bug from the standpoint of IEEE-754, anyway).

Stephen Canon