My application generates different floating-point values when I compile it in release mode and in debug mode. The only reason I found out is that I save a binary trace log, and the one from the release build is ever so slightly off from the one from the debug build; it looks like the bottom two bits of the 32-bit float values differ in about half of the cases.

Would you consider this "difference" to be a bug, or would this type of difference be expected? Would this be a compiler bug or an internal library bug?

For example:

LEFTPOS and SPACING are defined floating-point values.
float def_x;
int xpos;

def_x = LEFTPOS + (xpos * (SPACING / 2));

The issue concerns the X360 compiler.

+2  A: 

It's not a bug. Any floating-point operation has a certain imprecision, and in release mode, optimization will change the order of the operations, so you'll get a slightly different result. The difference should be small, though; if it's big, you might have other problems.
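
For illustration, here is a minimal sketch (my own example, not the poster's code) of how reassociation alone can flip a result:

#include <cstdio>

int main()
{
    float a = 1e8f, b = -1e8f, c = 0.001f;

    // Evaluated as written: (a + b) cancels to 0, so the result is 0.001f.
    float as_written = (a + b) + c;

    // Reassociated, the kind of reordering an optimizer may apply when allowed:
    // (b + c) rounds back to -1e8f, so c is lost and the result is 0.0f.
    float reassociated = a + (b + c);

    std::printf("%g vs %g\n", as_written, reassociated);
    return 0;
}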

Branan
+7  A: 

Release mode may have a different floating-point strategy set. There are different floating-point arithmetic modes depending on the level of optimization you'd like. MSVC, for example, has strict, fast, and precise modes (/fp:strict, /fp:fast, /fp:precise).

Nick
A: 

Does your compiler set uninitialised values differently according to build type?
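
As a rough sketch of what that would look like (my own illustration; reading an uninitialised value is undefined behaviour in any build):

#include <cstdio>

float forgot_to_initialise()
{
    float f;   // never assigned; reading it is undefined behaviour
    return f;  // MSVC debug builds with runtime checks typically fill locals with
               // the 0xCC byte pattern, while a release build returns whatever
               // happened to be in that stack slot or register
}

int main()
{
    std::printf("%g\n", forgot_to_initialise());
    return 0;
}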

Meff
+3  A: 

I helped a co-worker find a compiler switch that was different in release vs. debug builds that was causing his differences.

Take a look at /fp (Specify Floating-Point Behavior).
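
As a quick sketch (my own example, assuming the MSVC /fp switch rather than the X360 toolchain's exact spelling), the same expression can legitimately come out differently depending on the mode:

// cl /O2 /fp:precise example.cpp   (the default)
// cl /O2 /fp:fast    example.cpp   (permits reassociation, contraction, etc.)
#include <cstdio>

int main()
{
    float a = 0.1f, b = 0.2f, c = 0.3f;

    // Under /fp:fast the compiler may factor this to a * (b + c) or fuse the
    // multiply-adds, either of which can change the low bits of the result.
    float r = a * b + a * c;
    std::printf("%.9g\n", r);
    return 0;
}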

crashmstr
+1  A: 

Not a bug. This type of difference is to be expected.

For example, some platforms have float registers that use more bits than are stored in memory, so keeping a value in the register can yield a slightly different result compared to storing to memory and re-loading from memory.

Jamie
A: 

This discrepancy may very well be caused by compiler optimization, which is typically done in release mode but not in debug mode. For example, the compiler may reorder some of the operations to speed up execution, which can conceivably cause a slight difference in the floating-point result.

So I would say it is most likely not a bug. If you are really worried about this, try turning on optimization in debug mode.

Dima
+3  A: 

I know that on PC, the x87 floating-point registers are 80 bits wide. So if a calculation is done entirely within the FPU, you get the benefit of 80 bits of precision. On the other hand, if an intermediate result is spilled out to a 32-bit memory slot and loaded back, it gets truncated to 32 bits, which gives different results.

Now consider that a release build will have optimisations which keep intermediate results in FPU registers, whereas a debug build will probably naively copy intermediate results back and forth between memory and registers - and there you have your difference in behaviour.
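
Here is a portable sketch of the effect (my own illustration): accumulate in a wider type and round once, versus rounding every intermediate to 32 bits:

#include <cstdio>

int main()
{
    float a = 1.0f;
    float b = 4e-8f;   // smaller than half an ulp of 1.0f

    // Rounded to 32 bits after every step (like spilling to memory): on a
    // typical SSE/x64 target each addition rounds back to 1.0f.
    float narrow = ((a + b) + b) + b;

    // Kept at wider precision and rounded only once at the end (like staying
    // in a wide register): the three small additions survive, giving about
    // 1.00000012f.
    float wide = (float)((((double)a + b) + b) + b);

    std::printf("narrow = %.9g, wide = %.9g\n", narrow, wide);
    return 0;
}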

I don't know whether this happens on X360 too or not.

Luke Halliwell
A: 

Like others mentioned, floating point registers have higher precision than floats, so the accuracy of the final result depends on the register allocation.

If you need consistent results, you can make the intermediate variables volatile, which gives slower and less precise, but consistent, results.
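
A sketch of that idea, with placeholder values standing in for the question's LEFTPOS and SPACING:

#include <cstdio>

#define LEFTPOS 10.0f   /* placeholder constants, not the poster's actual values */
#define SPACING 0.1f

float position(int xpos)
{
    // Each volatile intermediate is stored to memory and re-loaded, so it is
    // rounded to 32 bits at every step in both debug and release builds.
    volatile float half   = SPACING / 2;
    volatile float scaled = xpos * half;
    return LEFTPOS + scaled;
}

int main()
{
    std::printf("%.9g\n", position(7));
    return 0;
}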

tfinniga
+1  A: 

In addition to the different floating-point modes others have pointed out, SSE or similar vector optimizations may be turned on for release. Moving floating-point arithmetic from the standard floating-point registers to vector registers can have an effect on the lower bits of your results, as the vector registers generally hold narrower values (fewer bits) than the standard floating-point registers.
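
For example (my own sketch, using desktop MSVC switches rather than the X360 toolchain), moving the same arithmetic from the 80-bit x87 registers to 32-bit SSE lanes can change the low bits:

//   cl /O2            sum.cpp   (classic x87 code in a 32-bit build)
//   cl /O2 /arch:SSE2 sum.cpp   (SSE math, 32-bit rounding throughout)
#include <cstdio>

int main()
{
    float sum = 0.0f;
    for (int i = 1; i <= 1000; ++i)
        sum += 1.0f / i;   // x87 may evaluate the divide and add at extended
                           // precision before rounding to float; SSE rounds
                           // each operation to 32 bits
    std::printf("%.9g\n", sum);
    return 0;
}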

Derek Park
A: 

If you set a compiler switch that allows the compiler to reorder floating-point operations (e.g. /fp:fast), then obviously it's not a bug.

If you didn't set any such switch, then it's a bug: the C and C++ standards don't allow compilers to reorder floating-point operations without your permission.

Die in Sente