Related to my other question, I have now modified the sparse matrix solver to use the SOR (Successive Over-Relaxation) method. The code is now as follows:

void SORSolver::step() {
    float const omega = 1.0f; // relaxation factor; 1.0 reduces to Gauss-Seidel
    // Pointers into the interior of the grids. The matrix rows are scaled so
    // the diagonal is 1; only the four off-diagonal coefficients are stored.
    float const
        *b = &d_b(1, 1),
        *w = &d_w(1, 1), *e = &d_e(1, 1), *s = &d_s(1, 1), *n = &d_n(1, 1),
        *xw = &d_x(0, 1), *xe = &d_x(2, 1), *xs = &d_x(1, 0), *xn = &d_x(1, 2);
    float *xc = &d_x(1, 1);
    for (size_t y = 1; y < d_ny - 1; ++y) {
        for (size_t x = 1; x < d_nx - 1; ++x) {
            // Residual at the current cell: b - (A x), with unit diagonal.
            float diff = *b
                - *xc
                - *e * *xe
                - *s * *xs - *n * *xn
                - *w * *xw;
            *xc += omega * diff;
            // Advance all pointers to the next cell in the row.
            ++b;
            ++w; ++e; ++s; ++n;
            ++xw; ++xe; ++xs; ++xn;
            ++xc;
        }
        // Skip the boundary cell at the end of this row and at the start of the next.
        b += 2;
        w += 2; e += 2; s += 2; n += 2;
        xw += 2; xe += 2; xs += 2; xn += 2;
        xc += 2;
    }
}

Now the weird thing is: if I increase omega (the relaxation factor), the execution speed starts to depend dramatically on the values inside the various arrays!

For omega = 1.0f, the execution time is more or less constant. For omega = 1.8f, it typically takes about 5 milliseconds to run step() 10 times at first, but this gradually increases to nearly 100 ms over the course of the simulation. With omega = 1.0001f, I see a correspondingly slight increase in execution time; the higher omega goes, the faster the execution time grows during the simulation.

Since all this is embedded inside the fluid solver, it's hard to come up with a standalone example. But I saved the initial state and, at every time step, reran the solver on that saved state in addition to solving the actual time step. Solving the saved initial state was consistently fast, while the actual time steps grew incrementally slower. Since all else is equal, this shows that the execution speed of this code depends on the values in those six arrays.

This is reproducible on Ubuntu with g++, as well as on 64-bit Windows 7 when compiling for 32 bits with VS2008.

I heard that NaN and Inf values can be slower for floating point calculations, but no NaNs or Infs are present. Is it possible that the speed of float computations otherwise depends on the values of the input numbers?

A: 

Does the sparsity of the matrix at each iteration remain the same?

High Performance Mark
Yes. (Also, that is not an answer.) (I only said that to pad to 15.)
Thomas
+2  A: 

The short answer to your last question is "yes" - denormalized (very close to zero) numbers require special handling and can be much slower. My guess is that they're creeping into the simulation as time goes on. See this related SO post: http://stackoverflow.com/questions/2051534/floating-point-math-execution-time

Setting the floating-point control to flush denormals to zero should take care of things, with a negligible impact on the simulation quality.

celion
Denormalized values are indeed getting into it; I'm solving for the fluid pressure to get the divergence (`d_b` above) as close to zero as possible. So this explains it, and you once more have taught me something. Many thanks! (Your confusion about `1-omega` is probably because the matrix rows are scaled so there are `1`s on the diagonal. The `- *xc` does not come from there, but is actually the term `-omega * *xc` in your line of code. I'm getting reasonable answers.)
Thomas
Glad that did the trick. I heard a horror story from a coworker recently - right before a big trade show a few years ago, the demo program was slowing to a crawl after running for a few minutes. It turned out a bug in the graphics driver wasn't resetting the floating-point control flag properly, and they were getting denormals from repeated multiplication of a damping factor in the rigid body simulation. Simple enough to fix once they knew the problem, but took them a day or two of debugging to track it down. Anyway, I immediately thought of that when I read your description.
celion
I also removed my (incorrect) edits about 1-omega to make the answer clearer...
celion
+1, wow you really know your stuff!
Eamon Nerbonne
A: 

celion's answer turns out to be the correct one. The post he links to says to turn on flush-to-zero of denormalized values:

#include <float.h>
_controlfp(_DN_FLUSH, _MCW_DN); // new control value first, then the mask

However, this is Windows-only. Using gcc on Linux, I did the same thing with a whiff of inline assembly:

unsigned int mxcsr;
__asm__("stmxcsr %0" : "=m"(mxcsr));
mxcsr |= (1 << 15); // set bit 15: flush-to-zero (FTZ) mode
__asm__("ldmxcsr %0" : : "m"(mxcsr));

This is my first x86 assembly ever, so it can probably be improved. But it does the trick for me.

Thomas
You can set the FP environment portably with fenv.h (C99).
Jed
This is C++, so that won't work directly. But it might be possible to steal that implementation. Thanks!
Thomas