views: 32

answers: 1

We're currently using System.Decimal to represent numbers in a .NET application we're developing. I know that decimals are designed to minimize errors due to rounding, but I also know that certain numbers, 1/3 for example, cannot be represented exactly as a decimal, so some calculations will have small rounding errors. I believe the magnitude of this error will be very small and insignificant; however, a colleague disagrees. I would therefore like to be able to estimate the order of magnitude of the error due to rounding in our app.

Say, for example, we are calculating a running total of “deals”, we do about 10,000 “deals” per day, and there are about 5-10 decimal operations (add, sub, div, mul etc.) to calculate the new running total for each deal received. What would be the order of magnitude of the rounding error? An answer with a procedure for calculating this would also be nice, so I can learn how to do it myself in the future.
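
To make the scenario concrete, each deal update looks roughly like the sketch below (the names and exact sequence of operations are made up for illustration, not our real code):

    // Illustrative only: roughly the shape of our per-deal update.
    // Each deal triggers a handful of decimal operations on the running total.
    decimal ApplyDeal(decimal runningTotal, decimal quantity, decimal price, decimal feeRate)
    {
        decimal gross = quantity * price;      // mul
        decimal fee = gross * feeRate;         // mul
        decimal net = gross - fee;             // sub
        decimal perUnit = net / quantity;      // div -- this is where 1/3-style rounding creeps in
        decimal rounded = perUnit * quantity;  // mul -- generally not exactly equal to net again
        return runningTotal + rounded;         // add
    }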

+2  A: 

What Every Computer Scientist Should Know About Floating-Point Arithmetic goes into detail on estimating the error in the result of a sequence of floating point operations, given the precision of the floating point type. I haven't tried this on any practical program, though, so I'd be interested to know if it's feasible.
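
To apply that idea to the figures in the question: System.Decimal stores a 96-bit mantissa, roughly 28-29 significant decimal digits, so each individual operation contributes a relative error of at most about 1e-28. A crude worst-case bound is then operations per day × unit roundoff × the magnitude of the running total. Here is a minimal back-of-envelope sketch of that estimate (the 1e9 total and the per-op roundoff are assumed figures, not measurements):

    using System;

    class RoundingErrorEstimate
    {
        static void Main()
        {
            // All inputs are assumptions taken from the question, not measured values.
            double dealsPerDay = 10000;
            double opsPerDeal = 10;          // upper end of the 5-10 estimate
            double unitRoundoff = 1e-28;     // ~max relative error per decimal op (28-29 significant digits)
            double totalMagnitude = 1e9;     // hypothetical size of the running total

            double opsPerDay = dealsPerDay * opsPerDeal;
            // Worst case assumes every rounding error has the same sign and full size.
            double worstCaseError = opsPerDay * unitRoundoff * totalMagnitude;

            Console.WriteLine("Operations per day: {0:N0}", opsPerDay);
            Console.WriteLine("Worst-case accumulated error: ~{0:E1}", worstCaseError);
            // With these numbers: 1e5 * 1e-28 * 1e9 = 1e-14.
        }
    }

Even with every rounding error pushing in the same direction, the accumulated error over a day stays around 1e-14 of a currency unit; in practice the errors partly cancel, so the real drift is smaller still.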

Tim Robinson
Would have been nice if someone could have translated it into words of one syllable for me, but I guess this is the correct answer.
Robert