I'm a fairly green programmer, and I'm learning Python right now. I'm up to chapter 17 in "Learn to Think Like a Computer Scientist" (Classes and Methods), and I just wrote my first doctest that failed in a way I truly do not fully understand:
class Point(object):
    '''
    represents a point object.
    attributes: x, y
    '''
    def __init__(self, x=0, y=0):
        '''
        >>> point = Point()
        >>> point.y
        0
        >>> point = Point(4.7, 8.2)
        >>> point.x
        4.7
        '''
        self.x = x
        self.y = y
The second doctest for __init__ fails, returning 4.7000000000000002 instead of 4.7. However, if I rewrite the doctest with a print statement, like so:
>>> point = Point(4.7, 8.2)
>>> print point.x
4.7
It runs correctly.
So I read up on how Python stores floats, and I now understand that, because decimal numbers have to be represented in binary, Python stores 4.7 as a binary fraction that is almost, but not exactly, equal to 4.7.
But what I don't understand is why a call to "point.x" returns 4.7000000000000002 and a call to "print point.x" returns 4.7. Under what other circumstances will Python choose to round like it does with "print"? How does this rounding work? Can these trailing significant figures lead to errors in programming (aside from, obviously, failed doctests)? Can a failure to pay attention to rounding create dangerous ambiguity?
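To make the difference concrete, here is roughly what I see at the interactive prompt (this is my own minimal experiment; if I understand correctly, in 2.6 repr() shows up to 17 significant digits while str() rounds to 12):

>>> a = 4.7
>>> a            # the interactive prompt (and doctest) compares against repr(a)
4.7000000000000002
>>> repr(a)
'4.7000000000000002'
>>> print a      # print uses str(a), which rounds the value off
4.7
>>> str(a)
'4.7'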
Since this has to do with binary representation of decimal numbers, I'm sure that this is in fact a general CS issue and not one specific to Python, but what I really need to know right now is what I can do, specifically as a Python programmer, to avoid any related issues and/or bug infestations.
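The kind of bug I'm worried about is, for example, testing floats for exact equality; as far as I can tell, the usual advice is to compare within a tolerance instead. A quick sketch of what I mean (my own example, not from the book):

>>> 0.1 + 0.2 == 0.3                  # exact comparison fails because of the binary representation
False
>>> abs((0.1 + 0.2) - 0.3) < 1e-9     # comparing within a small tolerance works
True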
Also, for bonus points, is there some other way that Python can store floating-point numbers aside from the default you get from a line like "a = 4.7"? I know there's the decimal module, but I'm not totally sure how it works. Honestly, all of this dynamic typing stuff confuses me sometimes.
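From skimming the docs, the decimal module seems to keep exact decimal values if you build them from strings; here's a minimal sketch of what I think the usage looks like (please correct me if I've got it wrong):

>>> from decimal import Decimal
>>> repr(0.1 + 0.2)                          # plain binary floats carry the rounding error
'0.30000000000000004'
>>> print Decimal('0.1') + Decimal('0.2')    # Decimal values built from strings stay exact
0.3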
Edit: I should specify that I'm using Python 2.6 (at some point I want to use NumPy and Biopython).