+13  A: 

It is a floating-point precision problem.

The second statement works because the compiler evaluates the expression 1e-3 * 1e3 at compile time, before emitting the .exe.

Look it up in ILDasm/Reflector; it will emit something like

    if (1.0 < 1.0)
        Console.WriteLine("Wrong");
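
The first statement presumably multiplies a run-time variable, so nothing can be folded there. A minimal sketch of the distinction (the variable d is a hypothetical stand-in for the code in the question):

    double d = 1e-3;                // hypothetical: a run-time value the
                                    // compiler cannot fold

    if (1.0 < d * 1e3)              // multiplied at run time by the JIT,
        Console.WriteLine("Wrong"); // possibly in extended x87 precision

    if (1.0 < 1e-3 * 1e3)           // both operands are literals, so this is
        Console.WriteLine("Wrong"); // folded to (1.0 < 1.0) at compile time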
Yossarian
Strange. The second statement gets completely optimized out by the compiler for me. I am not convinced yet that this is a floating point precision issue anyway. See my answer.
Brian Gideon
+1. Agreed, I can't reproduce the problem; no "Wrong"s are written to the console.
AnthonyWJones
VS even indicates that the content of the second if is unreachable.
AnthonyWJones
+2  A: 

See the answers here

AnthonyWJones
+2  A: 

Umm... strange. I am not able to reproduce your problem. I am using .NET 3.5 and Visual Studio 2008 as well. I have typed in your example exactly as it was posted and I am not seeing either Console.WriteLine statement execute.

Also, the second if statement is getting optimized out by the compiler. When I examine both the debug and release builds in ILDASM/Reflector I see no evidence of it. That makes sense because I get a compiler warning saying unreachable code was detected.

Finally, I do not see how this could be a floating-point precision issue anyway. Why would the C# compiler statically evaluate the product of two doubles differently than the CLR would at runtime? If that were really the case, then one could argue that the C# compiler has a bug.

Edit: After giving this a little more thought I am even more convinced that this is not a floating-point precision issue. Either you have stumbled across a bug in the compiler or the debugger, or the code you posted is not exactly representative of the code that is actually running. I am highly skeptical of a bug in the compiler, but a bug in the debugger seems more likely. Try rebuilding the project and running it again. Maybe the debugging information compiled along with the exe got out of sync or something.
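
If you want to take the debugger out of the picture entirely, print the values instead of breaking on them. A sketch, assuming the code shape from the question (the "R" round-trip format prints enough digits to uniquely identify a double):

    double d = 1e-3;   // hypothetical, standing in for the question's code
    // Passing the product to ToString forces it into a double first, so this
    // should print "1" even if the comparison below behaves unexpectedly.
    Console.WriteLine((d * 1e3).ToString("R"));
    Console.WriteLine(1.0 < d * 1e3);   // the suspect comparison, sans debugger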

Brian Gideon
Thanks very much for the input, Brian; my colleague down the hall sees the same thing on his machine. Windows XP on a Dell OptiPlex 320, Pentium D 3.4 GHz.
Richard Morgan
What version of VS do you have? The about dialog box says 9.0.30729.1 for me. I am also using .NET Framework 3.5 with SP1. Also, double-check that you are targeting framework 3.5 when building the application. Something unusual is certainly happening.
Brian Gideon
+4  A: 

The problem here is quite subtle. The C# compiler doesn't (always) emit code that does the computation in double, even when that's the type you've specified. In particular, it can emit code that does the computation in "extended" precision using x87 instructions, without rounding intermediate results to double.

Depending on whether 1e-3 is evaluated as a double or long double, and whether the multiplication is computed in double or long double, it is possible to get any of the following three results:

  • (long double)1e-3 * 1e3 computed in long double is 1.0 - epsilon
  • (double)1e-3 * 1e3 computed in double is exactly 1.0
  • (double)1e-3 * 1e3 computed in long double is 1.0 + epsilon

Clearly the first comparison, the one that is failing to meet your expectations, is being evaluated in the manner described in the third scenario I listed. 1e-3 is being rounded to double either because you are storing it and loading it again, which forces the rounding, or because C# recognizes 1e-3 as a double-precision literal and treats it that way. The multiplication is being evaluated in long double simply because that's the way the compiler generates the code.
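
To make the rounding points concrete, here is a sketch, assuming x87 codegen (whether a stored local is actually spilled to memory and rounded can depend on the JIT and on debug vs. release builds):

    double d = 1e-3;           // rounding forced: d holds the double nearest
                               // 1/1000, which is slightly greater than 1/1000

    double p = d * 1e3;        // storing is supposed to round the product back
                               // to double, i.e. to exactly 1.0 here
    bool stored = 1.0 < p;     // false if p was really rounded

    bool direct = 1.0 < d * 1e3;  // the comparison can see the unrounded
                                  // extended-precision result, 1.0 + epsilon,
                                  // so this can be true (the third scenario)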

The multiplication in the second comparison is either being evaluated using one of the other two methods (you can figure out which by trying "1 > 1e-3 * 1e3"), or the compiler is rounding the result of the multiplication before comparing it with 1.0 when it evaluates the expression at compile time.

It is likely possible to tell the compiler not to use extended precision via some build setting; enabling SSE2 codegen may also work.
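
One way to test the SSE2 angle: the 64-bit JIT of this era uses SSE2 for floating point, so the same binary can behave differently in 32-bit and 64-bit runs. A sketch to confirm which kind of process you are in (Environment.Is64BitProcess does not exist before .NET 4, hence IntPtr.Size):

    // IntPtr.Size is 8 in a 64-bit process (SSE2 floating point on the x64 JIT)
    // and 4 in a 32-bit process (x87 floating point on the x86 JIT of this era).
    Console.WriteLine("64-bit process: " + (IntPtr.Size == 8));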

Stephen Canon
+1 for a full explanation of the precision problem
Richard Dunlap
Interesting perspective. I do not see any compilation parameters that pertain to this issue, nor did I discover any that I could change to reproduce the problem. If this really is the issue then it would seem to lie in the JIT compiler, as that is what emits machine instructions. But if that is the case then surely it is a bug. After all, having a program whose execution is nondeterministic by design is terribly disturbing.
Brian Gideon
It's (sadly) one of the allowed evaluation modes for floating point per the C99 standard. I don't know if the C# spec pins the semantics down more carefully, but I would be surprised. On an x86 machine that doesn't implement the SSE2 instructions, it's the only performant way to do double-precision computations (the alternative is to store the results after every step of the computation, which absolutely destroys performance). I don't know if the OP has such a machine, or if C# supports such machines, but it's certainly one possibility.
Stephen Canon
Yup, it's allowed in C# as well; MS's reference says: "Floating-point operations may be performed with higher precision than the result type of the operation. For example, some hardware architectures support an "extended" or "long double" floating-point type with greater range and precision than the double type, and implicitly perform all floating-point operations using this higher precision type."
Stephen Canon
Very interesting indeed! This is certainly a good lead. I have to admit, though, if this turns out to be the cause of the OP's sample executing differently on different CPU architectures then I am going to be shocked. But, I have seen stranger things so who knows!
Brian Gideon
+1. Excellent answer. This might explain why others have been unable to reproduce the problem.
AnthonyWJones