The problem here is quite subtle. The C# compiler doesn't (always) emit code that does the computation in double, even when that's the type you've specified. Instead, it may emit code that does the computation in "extended" precision using x87 instructions, without rounding intermediate results to double.
Depending on whether 1e-3 is evaluated as a double or long double, and whether the multiplication is computed in double or long double, it is possible to get any of the following three results:
- (long double)1e-3 * 1e3 computed in long double is 1.0 - epsilon
- (double)1e-3 * 1e3 computed in double is exactly 1.0
- (double)1e-3 * 1e3 computed in long double is 1.0 + epsilon
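The three scenarios can be checked numerically. A minimal sketch in Python (not C#, so this illustrates the arithmetic rather than the compiler's behavior): `Fraction(1e-3)` gives the exact rational value of the double nearest 1e-3, so multiplying it by 1000 with infinite precision stands in for what a wide x87 register would hold.

```python
from fractions import Fraction

# Exact rational value of the double nearest to 1e-3
# (i.e. what storing 1e-3 in a double actually holds).
d = Fraction(1e-3)

# Infinitely precise product, standing in for an extended-precision
# register that never rounds the intermediate result to double.
exact_product = d * 1000

# Scenario 2: computed and rounded in double, the product is exactly 1.0.
assert 1e-3 * 1e3 == 1.0

# Scenario 3: the un-rounded product is slightly GREATER than 1, which is
# what an extended-precision comparison sees -- hence "1.0 + epsilon".
assert exact_product > 1
print(float(exact_product - 1))  # a tiny positive quantity, ~2.1e-17
```

The excess is about one part in 2^53, far below a double's half-ulp near 1.0, which is why rounding the product back to double yields exactly 1.0.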
Clearly, the first comparison, the one that is failing to meet your expectations, is being evaluated in the manner described in the third scenario I listed. 1e-3 is being rounded to double either because you are storing it and loading it again, which forces the rounding, or because C# recognizes 1e-3 as a double-precision literal and treats it that way. The multiplication is being evaluated in long double because C#'s loose numerics model permits it; that is simply how the compiler is generating the code.
The multiplication in the second comparison is either being evaluated by one of the other two methods (you can figure out which by trying `1 > 1e-3 * 1e3`), or the compiler is rounding the result of the multiplication to double before comparing it with 1.0 when it evaluates the expression at compile time.
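The "round before comparing" possibility is easy to model: take the exact (wide) product and round it back to the nearest double before the comparison. A short sketch, again using Python's `Fraction` as a stand-in for an extended-precision intermediate:

```python
from fractions import Fraction

# Stand-in for the extended-precision result of (double)1e-3 * 1e3.
wide = Fraction(1e-3) * 1000

# Rounding the wide result back to the nearest double...
rounded = float(wide)

# ...restores exact equality, which would explain why the second
# comparison behaves differently from the first.
assert rounded == 1.0
```

So whether the compiler evaluates in pure double from the start or merely rounds the wide intermediate before comparing, both paths produce exactly 1.0; only the unrounded extended-precision comparison sees a value above 1.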
It may be possible to tell the compiler not to use extended precision via some build setting; enabling SSE2 code generation may also work.