Hi,

After hunting a performance bug for a long time, I read about denormal floating-point values. I have an Intel Core 2 Duo, and I am compiling with gcc using "-O2".

So what do I do? Can I somehow instruct g++ to avoid denormal values? If not, can I somehow test if a float is denormal?

Thanks! Nathan

A: 

You apparently want the CPU modes called FTZ (Flush To Zero) and DAZ (Denormals Are Zero).

I found the information on an audio web site, but their link to the Intel documentation was missing. They are apparently controlled through the SSE2 MXCSR register, so they should also work on AMD CPUs that support SSE2.

I don't know what you can do in GCC to force that on in a portable way. You can always write inline assembly code to use them though. You may have to force GCC to use only SSE2 for floating point math.
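
For illustration, a minimal sketch assuming an x86 target where GCC's SSE intrinsic headers are available (these set the MXCSR bits without hand-written assembly; DAZ requires SSE2-class hardware):

// Minimal sketch: enable FTZ and DAZ for the current thread via SSE intrinsics.
// _MM_SET_FLUSH_ZERO_MODE lives in <xmmintrin.h>,
// _MM_SET_DENORMALS_ZERO_MODE in <pmmintrin.h>.
#include <xmmintrin.h>
#include <pmmintrin.h>

void enable_ftz_daz()
{
    _MM_SET_FLUSH_ZERO_MODE( _MM_FLUSH_ZERO_ON );         // results that would be denormal become 0
    _MM_SET_DENORMALS_ZERO_MODE( _MM_DENORMALS_ZERO_ON ); // denormal inputs are treated as 0
}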

Zan Lynx
+6  A: 

Wait. Before you do anything, do you actually know that your code is encountering denormal values, and that they're having a measurable performance impact?

Assuming you know that, do you know whether the algorithms you're using are stable if denormal support is turned off? Getting the wrong answer 10x faster is not usually a good performance optimization.

Those issues aside:

  • If you want to detect denormal values to confirm their presence, you have a few options. If you have a C99 standard library or Boost, you can use the fpclassify macro (a sketch follows this list). Alternatively, you can compare the absolute value of your data to the smallest positive normal number.

  • You can set the hardware to flush denormal values to zero (FTZ), or treat denormal inputs as zero (DAZ). The easiest way, if it is properly supported on your platform, is probably to use the fesetenv() function in the C header fenv.h. However, this is one of the least widely supported features of the C standard, and is inherently platform-specific anyway. You may want to just use some inline assembly to set the DAZ/FTZ bits in the FPU state directly.
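
A minimal detection sketch, assuming C++11's std::fpclassify (on older toolchains the C99 macro or boost::math::fpclassify works the same way; the function and buffer names here are just placeholders):

#include <cmath>    // std::fpclassify, FP_SUBNORMAL
#include <cstddef>  // std::size_t

// Count the denormal (subnormal) values in a buffer.
std::size_t count_denormals( const float *data, std::size_t n )
{
    std::size_t count = 0;
    for ( std::size_t i = 0; i < n; ++i )
        if ( std::fpclassify( data[i] ) == FP_SUBNORMAL )
            ++count;
    return count;
}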

Stephen Canon
+1  A: 

You can test whether a float is denormal using

#include <cmath>   // std::fabs
#include <limits>  // std::numeric_limits

// A denormal is a nonzero value whose magnitude is below the smallest normal float.
if ( flt != 0.0f && std::fabs( flt ) < std::numeric_limits<float>::min() ) {
    // it's denormalized
}

What you want is a sampling profiler like Shark, VTune, or Zoom. Micro-optimization, even more than other kinds of optimization, is totally hopeless without measuring both before and after.

Potatoswatter
What do you mean, no performance penalty? Are you sure? I wrote a little test program showing that adding floating points with a value of exp(-100) is 10 times slower than when the value is 0.1. Am I completely wrong here?
Nathan
@Nathan: No, you would be right then. Sorry.
Potatoswatter
@Nathan: only in that the penalty is actually substantially more than 10x =)
Stephen Canon
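
A minimal sketch of the kind of micro-benchmark described in the comments above (a hypothetical reconstruction, not Nathan's actual program; the loop count is arbitrary):

#include <cmath>
#include <cstdio>
#include <ctime>

// Time one accumulation loop; the volatile parameter keeps the compiler
// from folding the loop into a single multiplication under -O2.
static double time_loop( volatile float value )
{
    float sum = 0.0f;
    std::clock_t start = std::clock();
    for ( int i = 0; i < 100000000; ++i )
        sum += value;
    double seconds = double( std::clock() - start ) / CLOCKS_PER_SEC;
    std::printf( "sum = %g, time = %.2f s\n", sum, seconds );
    return seconds;
}

int main()
{
    time_loop( std::exp( -100.0f ) );  // ~3.7e-44: denormal as a float
    time_loop( 0.1f );                 // normal value for comparison
    return 0;
}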
+1  A: 

Most math coprocessors have an option to truncate denormal values to zero. On x86 it is the FZ (Flush to Zero) flag in the MXCSR control register. Check your CRT implementation for a support function to set the control register. It ought to be in <float.h>, something resembling _controlfp(). The option bit usually has "FLUSH" in the #defined symbol.

Double-check your math results after you set this, which is something you ought to do anyway; getting denormals is a sign of numerical health problems.
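
For instance, with the Microsoft CRT it might look like this (a sketch assuming MSVC; _controlfp, _DN_FLUSH and _MCW_DN are MSVC names, and other runtimes use different ones):

#include <float.h>

// Sketch: set the denormal-control bits of the control register to "flush to zero".
void enable_flush_to_zero()
{
    _controlfp( _DN_FLUSH, _MCW_DN );
}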

Hans Passant
+2  A: 

Just to add to the other answers: if you actually have a problem with denormal floating-point values, you probably have a precision problem in addition to your performance issue.

It may be a good idea to check if you can restructure your computations to keep the numbers larger to avoid losing precision and performance.

Laserallan