In ActionScript 3 (and, if I recall correctly, ActionScript 2), the x and y properties of a display object are always stored as multiples of 0.05.
So something like obj.x = 66.6666 ends up the same as obj.x = 66.65.
Most of the time this doesn't matter. But sometimes I end up with really slow-moving objects, e.g. 1 pixel per second. At 60 fps that's 1/60 ≈ 0.017 pixels per frame, so obj.x += 0.017 never actually changes the x value: since 0.017 is less than half of 0.05, the sum rounds straight back to the old position every frame.
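The effect is easy to reproduce outside Flash. Here's a plain JavaScript sketch (JavaScript rather than AS3 just so it's runnable anywhere) that simulates the snapping with Math.round(v * 20) / 20 — note that 0.05 is exactly 1/20 of a pixel:

```javascript
// Simulates the snapping described above: coordinates are kept
// to the nearest 0.05, i.e. a twentieth of a pixel.
function snap(v) {
  return Math.round(v * 20) / 20;
}

let x = snap(66.6666);
console.log(x); // 66.65

// A slow mover: 1 px/sec at 60 fps.
for (let frame = 0; frame < 60; frame++) {
  // 1/60 ≈ 0.017 is less than half of 0.05, so every
  // addition rounds back to the previous value.
  x = snap(x + 1 / 60);
}
console.log(x); // still 66.65 -- after a full second, the object never moved
```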
This forces me to override the x & y properties of a DisplayObject so that they are not rounded.
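For what it's worth, the workaround can be sketched like this (again in JavaScript since that's runnable here; SnappingSprite is a hypothetical stand-in for a snapping Flash DisplayObject, not a real API): keep the unrounded position in your own field, and let the snapped property be a mere projection of it.

```javascript
// Hypothetical stand-in for a Flash display object that snaps x to 0.05.
class SnappingSprite {
  #x = 0;
  get x() { return this.#x; }
  set x(v) { this.#x = Math.round(v * 20) / 20; }
}

// The workaround: track the full-precision coordinate yourself and
// only forward the (snapped) value to the underlying display object.
class SmoothSprite extends SnappingSprite {
  #trueX = 0;
  get x() { return this.#trueX; }             // report full precision
  set x(v) { this.#trueX = v; super.x = v; }  // display still snaps
}

const s = new SmoothSprite();
for (let frame = 0; frame < 60; frame++) {
  s.x += 1 / 60; // accumulates in #trueX, so the motion isn't lost
}
console.log(s.x); // ≈ 1.0 after one second of motion, instead of 0
```

Because the sub-pixel remainder accumulates in #trueX, the object eventually crosses each 0.05 boundary and visibly moves, even at 1 px/sec.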
I can understand rounding coordinates to the nearest integer for rendering. With a more advanced renderer, I can even understand rounding to some fraction representable in binary (e.g. 0.25). But 0.05 cannot be represented exactly in binary.
So why might the creators of Flash have decided to round to the nearest 0.05? It just seems like such an arbitrary number to me.