views: 181

answers: 5
Hello everyone,

I've stumbled onto a very strange bug. Read the comments in the code to see exactly what it is, but essentially a variable modulo 1 is returning 1 (even though it doesn't compare equal to 1!). I'm assuming there is a display problem where the float is extremely close to one but not exactly one. But in that case the modulo should wrap it around to zero, not one. I can't test for this case easily because (last % 1) != 1.0! When I plug the same numbers into another Python terminal, everything behaves correctly. What's going on?

def r(k,i,p):
    first = i*p
    last = first + p

    steps = int((i+1)*p) - int(i*p)
    if steps < 1:
        return p
    elif steps >= 1:
        if k == 0:
            return 1 - (first % 1)
        elif k == steps:
            if i == 189:
                print last, 1, type(last), last % 1, last - int(last)
                # Prints: 73.0 1 <type 'float'> 1.0 1.0
                print last % 1 == 1 # Returns False
            if last % 1 == 1.0:
                return 0
            return (last % 1)
        else:
            return 1
+5  A: 

Welcome to IEEE754, enjoy your stay.
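A minimal illustration of the IEEE 754 behavior in play (my own example, not the OP's data): adding 0.1 ten times does not give exactly 1.0, because 0.1 has no exact binary representation and the error accumulates.

```python
# Repeated addition of 0.1 accumulates representation error:
total = 0.0
for _ in range(10):
    total += 0.1

print(total == 1.0)   # False
print(repr(total))    # 0.9999999999999999
```

The same kind of accumulation is what leaves `last` just below 73 in the question.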

Ignacio Vazquez-Abrams
Do you think subtracting a machine epsilon would help? The fact that python is displaying this value as 1.0 is very very unhelpful. I can deal with imprecision, but I need to know exactly which way it's ending up.
SapphireSun
It's less than one, but close enough to round up. Are you wanting to modulo it so that it's less than {the largest float which rounds up to give 1.0}? Why is your app choking on floats-close-to-1 elsewhere?
Anon.
The problem is that it really should be overflowing to something close to zero rather than one. I can't round it unless I can identify it. Making an exception for line 189 of a bitmap seems really flaky (especially if I resize it in the future).
SapphireSun
Why should it really be overflowing? It's *less than* 1. Why is the float being close enough to 1 to round up when displayed at the default precision causing your problems?
Anon.
The floating point errors accumulated such that it's less than one, but by rights it should be slightly above or equal to one. Having one row set to maximum when it should be zero is a big problem.
SapphireSun
I fixed it by adding an epsilon = 10**-10 before I took the modulo. Thank you everyone!
SapphireSun
+6  A: 

Print doesn't show the full precision of the number as stored; you can use repr() to see it:

>>> last=72.99999999999999
>>> print last, 1, type(last), last % 1, last - int(last)
73.0 1 <type 'float'> 1.0 1.0
>>> print last % 1 == 1
False
>>> print repr(last), 1, type(last), repr(last%1), repr(last - int(last))
72.999999999999986 1 <type 'float'> 0.99999999999998579 0.99999999999998579
>>> 
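Formatting with 17 significant digits also reveals the stored value, since 17 digits are enough to round-trip any IEEE 754 double (an extra illustration of the same point, in Python 3 syntax):

```python
last = 72.99999999999999
s = "%.17g" % last       # format with full round-trip precision
print(s)                 # the stored value, not the rounded display
print(float(s) == last)  # True: 17 digits round-trip exactly
```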
gnibbler
A: 

If you need arbitrary precision, there are some projects out there that do just that: gmpy handles multi-precision integers, mpmath looks quite good, and bigfloat wraps MPFR. What you have might be enough via gnibbler's answer, but just in case.
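If a third-party dependency is unwanted, the standard library's decimal module is another option. A small sketch (the precision value here is arbitrary, chosen for illustration):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # user-chosen working precision

# Construct from a string so no binary rounding ever happens:
last = Decimal("72.99999999999999")
print(last % 1)         # 0.99999999999999, exactly
```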

Ninefingers
A: 

You could try the math.fmod function instead of last % 1; maybe it is better suited to your problem. Or you could reformulate your problem in integer space.

Anyway, it is not good practice to compare float values with the equality operator ==, due to imprecise results even from seemingly trivial operations like 0.1 + 0.2 == 0.3
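To make that concrete, a short sketch (math.isclose is available from Python 3.5 onward):

```python
import math

print(0.1 + 0.2 == 0.3)              # False: neither side is exact in binary
print(repr(0.1 + 0.2))               # 0.30000000000000004
print(math.isclose(0.1 + 0.2, 0.3))  # True: compare with a tolerance instead
```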

Ber
+1  A: 

You should use math.fmod(x, y). Here's an excerpt from http://docs.python.org/library/math.html:

"Note that the Python expression x % y may not return the same result. The intent of the C standard is that fmod(x, y) be exactly (mathematically; to infinite precision) equal to x - n*y for some integer n such that the result has the same sign as x and magnitude less than abs(y). Python’s x % y returns a result with the sign of y instead, and may not be exactly computable for float arguments. For example, fmod(-1e-100, 1e100) is -1e-100, but the result of Python’s -1e-100 % 1e100 is 1e100-1e-100, which cannot be represented exactly as a float, and rounds to the surprising 1e100. For this reason, function fmod() is generally preferred when working with floats, while Python’s x % y is preferred when working with integers."
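The documentation's own example can be reproduced directly (shown here in Python 3 syntax):

```python
import math

# fmod keeps the sign of x; Python's % takes the sign of y.
print(math.fmod(-1e-100, 1e100))  # -1e-100
print(-1e-100 % 1e100)            # 1e+100: the exact result is unrepresentable and rounds up
```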

Rick Regan
+1 for the general point. In this case, though, I don't think it matters: `x % y` will give identical results to `fmod(x, y)` if both `x` and `y` are finite (I make no claims about infinities and nans!) and positive. (Or, indeed, if `x` and `y` are both negative.) `x // y` will also give exact results in that case, provided only that the quotient doesn't exceed `2**53`.
Mark Dickinson
Ignore the bit about `x // y` above: it's not true. Python computes `x // y` internally as `(x - x % y) / y`, rounded to the nearest integer, and the result of `(x - x % y) / y` can be in error by as much as 1.5 ulps. So the result of `x // y` can only be guaranteed exact for quotients up to about `2**51`.
Mark Dickinson
Yes, of course you're right Mark (thanks for being generous). (I think I misread the question anyhow, not that it makes my answer any more correct. I see now it's a classic "floating-point display vs internal" issue.)
Rick Regan