Here is my test code for some maths I am doing:

Why is C# treating this differently?

EXCEL:

    Numerator   =-0.161361101510599*10000000
    Denominator =(-1*(100-81.26)) * (10000000/100)

+4  A: 

Your problem is that you are feeding Excel different initial values than you are feeding your C# code. How do you expect the results to be the same?

IOW: 0.161361101510599 != 0.161361102
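A quick sketch (variable names are mine, not from the question) of how the two seed values diverge when run through the same formula:

```csharp
using System;

class Precision
{
    static void Main()
    {
        // Value used in the Excel formula (full precision)
        double excelSeed = -0.161361101510599;
        // Truncated value reportedly used in the C# test
        double csharpSeed = -0.161361102;

        double denominator = (-1.0 * (100.0 - 81.26)) * (10000000.0 / 100.0);

        double excelResult = excelSeed * 10000000.0 / denominator;
        double csharpResult = csharpSeed * 10000000.0 / denominator;

        // The two results agree only to about 8 significant digits
        Console.WriteLine("{0:R}", excelResult);
        Console.WriteLine("{0:R}", csharpResult);
    }
}
```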

leppie
Wow. I missed that one!
xan
Ok - With the same number I now get 0.8610517689999946638207043757M and not 0.861051769000
ChloeRadshaw
In this context IOW means "in other words", not "Isle of Wight" or "I Own Wool"
bobobobo
@ChloeRadshaw: notice the other answer; when you round the longer one to the length of the shorter one, they are equal. You're dealing with varying precision and floating point problems.
DevelopingChris
@ChloeRadshaw: Those are now the same number, just printed differently; Excel may use a different display precision than .NET's `double`.
leppie
+4  A: 

This is most likely due to how the numbers are represented in Excel vs. C#. Rounding errors/differences are common when doing arithmetic to a high degree of accuracy on different platforms or with different software.

EDIT: It could of course be due to different numbers being fed in in the first place. And there I was, looking for the complex answer!

Programmers are notorious for missing the big picture and overlooking the obvious (well, I am...). Case in point!

xan
Hehe +1 for honesty :)
leppie
Well, it's widely known that writing algorithm error analysis software is easier than comparing numbers by hand. We are all programmers here. Number crunching is for computer operators. (just to be safe: this was meant to be sarcastic :)
zendar
+4  A: 

Following your comment on another answer, you say you are obtaining the result 0.8610517689999946638207043757m. If you round it like so:

    Math.Round(0.8610517689999946638207043757m, 12);

it will output: 0.861051769000
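As a self-contained sketch (the class and variable names are mine), using an invariant-culture format so the decimal separator is a dot regardless of locale:

```csharp
using System;
using System.Globalization;

class RoundDemo
{
    static void Main()
    {
        decimal raw = 0.8610517689999946638207043757m;

        // Math.Round on a decimal keeps the requested scale,
        // so the trailing zeros are preserved when printed
        decimal rounded = Math.Round(raw, 12);

        Console.WriteLine(rounded.ToString(CultureInfo.InvariantCulture));
        // prints 0.861051769000
    }
}
```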

João Angelo
+2  A: 

Hi

Not many of us who work in numeric computing trust Excel to add two one-digit numbers correctly. I ran your calculation in Mathematica, giving each fractional number 64 digits by extending it to the right with zeros. This is the result:

0.861051771611526147278548559231590181430096051227321237993596585

In this case, go with C# rather than Excel. And, on second and third thoughts, in every case go with C# rather than Excel, whose inadequacies for numeric computing are widely known and well documented.

Regards

Mark

High Performance Mark
Excel normally has quite a high level of precision, IIRC it is better than `double`, maybe 96/128-bit?
leppie
The real problem with Excel, from a numeric p-o-v, is that it doesn't seem to implement IEEE arithmetic, nor any reasonable superset of it. It seems designed to hide the nastiness of trying to do real (real-number and genuine) arithmetic on a computer. This might suit its target audience very well, but for proper number crunching it can be frustratingly difficult to get a straight answer out of it.
High Performance Mark
+1  A: 

Excel stores 15 significant digits of precision. Read the article "Why does Excel Give Me Seemingly Wrong Answers?".
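A sketch of what that 15-digit limit looks like from the C# side (the expression reuses the question's Excel formulas; the format choices are mine): `"G15"` rounds to 15 significant digits, roughly what Excel displays, while `"R"` shows every digit needed to round-trip the `double`.

```csharp
using System;
using System.Globalization;

class ExcelPrecision
{
    static void Main()
    {
        double result = -0.161361101510599 * 10000000.0
                      / ((-1.0 * (100.0 - 81.26)) * (10000000.0 / 100.0));

        // 15 significant digits, matching Excel's display limit
        Console.WriteLine(result.ToString("G15", CultureInfo.InvariantCulture));
        // prints 0.861051768999995

        // Round-trip format: all digits needed to reconstruct the double
        Console.WriteLine(result.ToString("R", CultureInfo.InvariantCulture));
    }
}
```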

gd047
+1  A: 

On another note: you should probably simplify your arithmetic a bit to reduce the number of operations. Generally (but not always), fewer operations give a better result in f.p. arithmetic.

    val = noiseTerm / (81.26/100 - 1)

is mathematically equivalent to your equation and contains 3 operations as opposed to your 8. In particular, the `scalingFactor` divides out completely, so it is not necessary at all.
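A quick check (names are mine; `noiseTerm` is taken to be the question's -0.161361101510599) that the simplified form agrees with the original expression to well beyond Excel's 15-digit display:

```csharp
using System;

class Simplified
{
    static void Main()
    {
        double noiseTerm = -0.161361101510599;

        // Original form: 8 floating-point operations
        double original = noiseTerm * 10000000.0
                        / ((-1.0 * (100.0 - 81.26)) * (10000000.0 / 100.0));

        // Simplified form: 3 operations (the 10000000 scaling factor cancels)
        double simplified = noiseTerm / (81.26 / 100.0 - 1.0);

        // The two may differ by a few ulps but agree to ~15 digits here
        Console.WriteLine("{0:R}", original);
        Console.WriteLine("{0:R}", simplified);
    }
}
```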

chris
+1  A: 

First of all, in most cases Excel uses double precision floating point arithmetic for basic operations just as C# does.

As for your specific case, your C# code does not match your Excel formulas. Try this C# code - which uses your Excel formulas:

    static void Calc()
    {
        double numerator = -0.161361101510599 * 10000000.0;
        double denominator = (-1.0 * (100.0 - 81.26)) * (10000000.0 / 100.0);
        double result = numerator / denominator;
        Console.WriteLine("result={0}", result);
    }

Run this and note that the output is 0.861051768999995.

Now, format the result in Excel with the custom number format "0.00000000000000000" and you will see that Excel is giving you the same result as C#. By default, Excel uses the "General" format which rounds this number to ~12 significant digits of precision. By changing to the format above, you force Excel to show 15 digits of precision - which is the maximum number of significant digits Excel will use to display a number (internally, they have 15+ digits of precision just as the C# double type does).

You can force C# to display 15+ significant digits (instead of rounding to 15 significant digits) by running the following code:

    static void Calc()
    {
        double numerator = -0.161361101510599 * 10000000.0;
        double denominator = (-1.0 * (100.0 - 81.26)) * (10000000.0 / 100.0);
        double result = numerator / denominator;
        Console.WriteLine("result={0:R}", result);
    }

This code will output 0.8610517689999948...but there is no way AFAIK to get Excel to display 15+ digits.

Joe Erickson