views:

162

answers:

4

Similar problem to http://stackoverflow.com/questions/1957801/math-atan2-or-class-instance-problem-in-c and http://stackoverflow.com/questions/1193630/add-two-double-given-wrong-result

It is something as simple as these lines:

public static String DegreeToRadianStr(Double degree)
{
    Double piBy180 = (Math.PI / 180);

    Double Val = (piBy180 * degree);  // Here it is giving the wrong value

    return Val.ToString();
}

piBy180 * degree = -3.1415926535897931 but Val = -3.1415927410125732

I really have no clue what to do. Please ask me questions about this, so that I can point out where it is going wrong.

It is amazing that piBy180 keeps the correct value.

The same thing is happening in other functions too, but somehow I managed to work around it and get correct values.

I am using C# in MS Visual Studio .NET SP1.

A: 

It could just be a rounding error, but I can't be sure.

Check out this article on floating point arithmetic. It's written for Fortran, but it's still useful: http://www.lahey.com/float.htm
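A classic illustration of this kind of rounding error (my own sketch, not from the article): many decimal fractions have no exact binary representation, so even trivial sums come out slightly off.

```csharp
using System;

class RoundingDemo
{
    static void Main()
    {
        // 0.1 and 0.2 cannot be represented exactly in binary,
        // so their sum is not exactly 0.3.
        double sum = 0.1 + 0.2;
        Console.WriteLine(sum == 0.3);        // False
        Console.WriteLine(sum.ToString("R")); // 0.30000000000000004
    }
}
```

The "R" (round-trip) format shows enough digits to reveal the stored value, whereas the default ToString() rounds to 15 significant digits and hides the error.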

zipcodeman
+2  A: 

It could just be a string formatting issue. Try adding the line:

Debug.Assert(Val == (piBy180 * degree), "WTF?");

after the assignment. It shouldn't fail, since they are both doubles, and arithmetic operations on them should produce the same binary value.
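To rule out string formatting entirely, you could also compare the raw 64-bit patterns instead of relying on the debugger's display (a sketch, assuming degree is -180 as in the question):

```csharp
using System;

class BitCompareDemo
{
    static void Main()
    {
        double piBy180 = Math.PI / 180;
        double degree = -180;
        double val = piBy180 * degree;

        // Compare the exact bit patterns, bypassing any string formatting.
        long bitsVal  = BitConverter.DoubleToInt64Bits(val);
        long bitsExpr = BitConverter.DoubleToInt64Bits(piBy180 * degree);

        // Should print True when both are computed at the same precision.
        Console.WriteLine(bitsVal == bitsExpr);
        Console.WriteLine(val.ToString("R"));
    }
}
```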

codekaizen
I agree, it might just be a display issue.
redjackwong
You should avoid comparing doubles with equality. It might be the case that `value != (value / constant) * constant` (for some 'value's and 'constant's due to rounding errors).
David Rodríguez - dribeas
Ok, learned new thing here.
Rahul2047
@dribeas - yea, but that's not what he's doing. He's just performing 2 identical multiplies.
codekaizen
+2  A: 

It would help if you told us the value of degree; then we could try to reproduce the problem...

Three things:

  • You should expect a certain amount of error in floating point arithmetic. I wouldn't expect this much, but don't expect to see an exact value.
  • String conversions generally won't show you the exact value. I have a file - DoubleConverter.cs - with a ToExactString method which can help diagnose this sort of thing.
  • Is your app using DirectX at all? That can change the processor mode to effectively use 32 bit arithmetic even when you've got doubles in your code. That seems to me to be the most likely cause of the issue, but it's still just a guess really.
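For what it's worth, the "wrong" value in the question is exactly what you get when the product is rounded to 32-bit float precision, which fits the DirectX theory. A quick sketch, simulating the precision loss with an explicit float cast:

```csharp
using System;

class SinglePrecisionDemo
{
    static void Main()
    {
        double exact = -Math.PI;            // full double precision
        double truncated = (float)exact;    // simulate 32-bit FPU precision

        Console.WriteLine(exact.ToString("R"));     // -3.1415926535897931
        Console.WriteLine(truncated.ToString("R")); // -3.1415927410125732
    }
}
```

Note that the second line matches the bad Val from the question digit for digit.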

EDIT: Okay, so it sounds like I guessed right, and DirectX is probably the cause. You can pass CreateFlags.FpuPreserve to the Device constructor to avoid it doing this. That will reduce the performance of DirectX, admittedly - but that's a tradeoff you'll need to consider for yourself.
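In Managed DirectX that would look something like this (a sketch from memory; the adapter, form, and other constructor arguments are placeholders that will differ in your application):

```csharp
// Hypothetical device setup; renderForm is your render target.
PresentParameters presentParams = new PresentParameters();
presentParams.Windowed = true;
presentParams.SwapEffect = SwapEffect.Discard;

Device device = new Device(
    0,                                   // adapter
    DeviceType.Hardware,
    renderForm,
    CreateFlags.SoftwareVertexProcessing
        | CreateFlags.FpuPreserve,       // keep the FPU in double precision
    presentParams);
```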

Jon Skeet
It is -180 from what I can tell :)
leppie
@leppie: That was my guess as well, and running the given code gives the right answer for -180... but it would be nice to actually know for sure.
Jon Skeet
It doesn't really have anything to do with floating point math though, since he's comparing the right side to the left side in the debugger. Floating point math has error, but it's deterministic error...
Isaac Cambron
@icambron: It absolutely has to do with floating point math. How do you know what precision is being used within the debugger? For example, there can be cases where a local variable is stored in a register to 80 bits of precision, whereas a member variable is stored with 64 bits... and goodness only knows what the debugger would do for watch evaluation. Add to that the DirectX oddities (which the debugger may not suffer from) and you've got a load of possibilities. I agree that the error is deterministic - but only when you know everything about the operation being performed.
Jon Skeet
Ok, fair enough
Isaac Cambron
Yes it is -180 degrees. I am also using DirectX for simulation in my application.
Rahul2047
@Rahul2047: I've edited my answer to tell you how to change DirectX's behavior.
Jon Skeet
+1 on the DirectX psychic debugging!
codekaizen
A: 

This doesn't seem possible.

The value of piBy180 * degree is computed almost correctly, off by less than 2 units in the 17th significant digit. The CPU has to perform a multiplication to do this, and it did that in double precision, correctly, even if someone tried to mess up the precision by abusing DirectX calls.

The value of Val is wrong already at the 8th significant digit. Val is declared as double, and it gets assigned the double value that was computed a microsecond ago, with no further computations in the way.

Is this a release build? I wonder if C# might have omitted storing the latest value into Val, the way a C++ compiler can. What happens in a debug build?

Windows programmer
I am also using DirectX for simulation in my application. Can this be the reason for it?
Rahul2047