As programmers, most of us (if not all) know that floating point numbers have limited precision and can produce inexact results. I know this problem can't be avoided entirely, but I am wondering if there are any practices, patterns, etc. that can be used to at least reduce floating point errors. A minimal example of the kind of error I mean is below.
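Here is a small Python snippet illustrating the issue (the specifics aren't important, just the general phenomenon):

```python
import math

# 0.1 has no exact binary representation, so repeated
# addition drifts away from the mathematically true value.
total = 0.0
for _ in range(10):
    total += 0.1

print(total)          # 0.9999999999999999, not 1.0
print(total == 1.0)   # False

# Comparing with a tolerance instead of == avoids the symptom,
# but I'm asking about practices that reduce the error itself.
print(math.isclose(total, 1.0))  # True
```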
Thanks in advance.