views: 417
answers: 6

I need some ideas on how to write a cross-platform C++ implementation of a few parallelizable problems in a way that lets me take advantage of SIMD (SSE, SPU, etc.) where available. I also want to be able to switch between SIMD and non-SIMD at run time.

How would you suggest I approach this problem? (Of course, I don't want to implement the problem multiple times for all possible options.)

I can see how this might not be an easy task in C++, but I believe I'm missing something. So far my idea looks like this... A class cStream will be an array of a single field. Using multiple cStreams I can achieve SoA (Structure of Arrays). Then, using a few functors, I can fake the lambda function that I need to execute over the whole cStream.
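For what it's worth, the SoA layout described here could be sketched roughly like this (cParticlesSoA and the field names are invented for illustration):

```cpp
#include <cstddef>
#include <vector>

// One contiguous stream per field (SoA), instead of an array of structs
// (AoS). Contiguous per-field storage is what makes SIMD loads natural.
struct cStream {
    std::vector<float> data;
    explicit cStream(std::size_t n) : data(n, 0.0f) {}
};

// Hypothetical particle set stored as three independent streams.
struct cParticlesSoA {
    cStream x, y, z;
    explicit cParticlesSoA(std::size_t n) : x(n), y(n), z(n) {}
};
```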

// just for example I'm not expecting this code to compile
cStream a; // something like float[1024]
cStream b;
cStream c;

void Foo()
{
    for_each(
        AssignSIMD(c, MulSIMD(AddSIMD(a, b), a)));
}

Where for_each will be responsible for incrementing the current pointer of each stream, as well as inlining the functor's body both with and without SIMD.

Something like so:

// just for example I'm not expecting this code to compile
for_each(functor<T> f)
{
#ifdef USE_SIMD
    if (simdEnabled)
        real_for_each(f<true>()); // true means use SIMD
    else
#endif
        real_for_each(f<false>());
}

Notice that whether SIMD is enabled is checked only once, and that the loop wraps the main functor, so the branch stays outside the hot path.
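A compilable sketch of that check-once pattern might look like this (the functor and names below are stand-ins, not the actual cStream machinery):

```cpp
#include <cstddef>

// The loop body is instantiated twice from a single source, and the
// runtime SIMD check happens exactly once, outside the loop.
template <bool UseSimd, typename F>
void real_for_each(F f, std::size_t n) {
    // A real SIMD instantiation would step 4 floats per iteration;
    // here we just forward the compile-time flag to the functor.
    for (std::size_t i = 0; i < n; ++i)
        f.template exec<UseSimd>(i);
}

template <typename F>
void for_each_dispatch(F f, std::size_t n, bool simdEnabled) {
    if (simdEnabled)
        real_for_each<true>(f, n);   // SIMD instantiation of the same body
    else
        real_for_each<false>(f, n);  // scalar instantiation
}

// Tiny probe functor that records which instantiation actually ran.
struct Probe {
    bool* ranSimd;
    template <bool UseSimd>
    void exec(std::size_t) { *ranSimd = UseSimd; }
};
```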

+1  A: 

Notice that the given example decides what to execute at compile time (since you're using the preprocessor). In that case you can use more complex techniques to decide what you actually want to execute; for example, tag dispatch: http://cplusplus.co.il/2010/01/03/tag-dispatching/ Following the example shown there, you could have the fast implementation use SIMD and the slow one go without.
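A minimal sketch of the tag-dispatch idea (names invented for illustration; the "SIMD" body is kept scalar so the sketch stays portable):

```cpp
// Two empty tag types select an overload; the choice is fixed at compile
// time, which is the main limitation of this technique.
struct simd_tag {};
struct scalar_tag {};

inline float sum(const float* p, int n, scalar_tag) {
    float s = 0.0f;
    for (int i = 0; i < n; ++i) s += p[i];
    return s;
}

inline float sum(const float* p, int n, simd_tag) {
    // A real version would use _mm_add_ps here.
    float s = 0.0f;
    for (int i = 0; i < n; ++i) s += p[i];
    return s;
}

// The caller picks the implementation by naming a tag.
template <typename Tag>
float sum_dispatched(const float* p, int n) { return sum(p, n, Tag()); }
```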

rmn
There are 3 problems: 1. It makes the decision at compile time. 2. It requires multiple implementations. 3. The decision is based on the input data, whereas I want the same data to be usable with both SIMD and non-SIMD.
Aleks
+2  A: 

You might want to look at the source for the MacSTL library for some ideas in this area: www.pixelglow.com/macstl/

Paul R
There are a lot of templates in MacSTL. I'll need time to figure out how it is implemented. But while reading about it I came up with an idea which seems to work. I'll post some dirty code as a new answer if someone else is curious...
Aleks
I'd be interested to see any code that you come up with. I already have a partial solution for this kind of problem but unfortunately it's proprietary (the IP belongs to my employer).
Paul R
+2  A: 

If anyone is interested, this is the dirty code I came up with to test a new idea I had while reading about the library that Paul posted.

Thanks Paul!

// This is just a conceptual test
// I haven't profiled the code and I haven't verified that the result is correct
#include <xmmintrin.h>


// This class is doing all the math
template <bool SIMD>
class cStreamF32
{
private:
    void*       m_data;
    void*       m_dataEnd;
    __m128*     m_current128;
    float*      m_current32;

public:
    cStreamF32(int size)
    {
        if (SIMD)
            m_data = _mm_malloc(sizeof(float) * size, 16);
        else
            m_data = new float[size];
        // Next() compares against m_dataEnd, so it must be initialized here
        m_dataEnd = (char*)m_data + sizeof(float) * size;
    }
    ~cStreamF32()
    {
        if (SIMD)
            _mm_free(m_data);
        else
            delete[] (float*)m_data;
    }

    inline void Begin()
    {
        if (SIMD)
            m_current128 = (__m128*)m_data;
        else
            m_current32 = (float*)m_data;
    }

    inline bool Next()
    {
        if (SIMD)
        {
            m_current128++;
            return m_current128 < m_dataEnd;
        }
        else
        {
            m_current32++;
            return m_current32 < m_dataEnd;
        }
    }

    inline void operator=(const __m128 x)
    {
        *m_current128 = x;
    }
    inline void operator=(const float x)
    {
        *m_current32 = x;
    }

    // Note: _mm_add_ps/_mm_mul_ps operate on all four lanes; the _ss
    // variants would only touch the lowest lane, which is not what a
    // whole-stream operation needs.
    inline __m128 operator+(const cStreamF32<true>& x)
    {
        return _mm_add_ps(*m_current128, *x.m_current128);
    }
    inline float operator+(const cStreamF32<false>& x)
    {
        return *m_current32 + *x.m_current32;
    }

    inline __m128 operator+(const __m128 x)
    {
        return _mm_add_ps(*m_current128, x);
    }
    inline float operator+(const float x)
    {
        return *m_current32 + x;
    }

    inline __m128 operator*(const cStreamF32<true>& x)
    {
        return _mm_mul_ps(*m_current128, *x.m_current128);
    }
    inline float operator*(const cStreamF32<false>& x)
    {
        return *m_current32 * *x.m_current32;
    }

    inline __m128 operator*(const __m128 x)
    {
        return _mm_mul_ps(*m_current128, x);
    }
    inline float operator*(const float x)
    {
        return *m_current32 * x;
    }
};

// Executes both functors
template<class T1, class T2>
void Execute(T1& functor1, T2& functor2)
{
    functor1.Begin();
    do
    {
        functor1.Exec();
    }
    while (functor1.Next());

    functor2.Begin();
    do
    {
        functor2.Exec();
    }
    while (functor2.Next());
}

// This is the implementation of the problem
template <bool SIMD>
class cTestFunctor
{
private:
    cStreamF32<SIMD> a;
    cStreamF32<SIMD> b;
    cStreamF32<SIMD> c;

public:
    cTestFunctor() : a(1024), b(1024), c(1024) { }

    inline void Exec()
    {
        c = a + b * a;
    }

    inline void Begin()
    {
        a.Begin();
        b.Begin();
        c.Begin();
    }

    inline bool Next()
    {
        a.Next();
        b.Next();
        return c.Next();
    }
};


int main (int argc, char * const argv[]) 
{
    cTestFunctor<true> functor1;
    cTestFunctor<false> functor2;

    Execute(functor1, functor2);

    return 0;
}
Aleks
+2  A: 

You might want to take a glance at my attempt at SIMD/non-SIMD:

  • vrep, a templated base class with specializations for SIMD (note how it distinguishes between floats-only SSE and SSE2, which introduced integer vectors).

  • More useful v4f, v4i etc classes (subclassed via intermediate v4).

Of course it's far more geared towards 4-element vectors for rgba/xyz type calculations than SoA, so it will completely run out of steam when 8-way AVX comes along, but the general principles might be useful.
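The primary-template-plus-specialization pattern behind vrep could be sketched like this (type name invented; the SIMD specialization itself is only described, not written out):

```cpp
// Primary template: portable scalar fallback for any element type/count.
template <typename T, int N>
struct vec {
    T v[N];
    vec operator+(const vec& o) const {
        vec r;
        for (int i = 0; i < N; ++i) r.v[i] = v[i] + o.v[i];
        return r;
    }
};

// A SIMD build would add, e.g., a vec<float,4> specialization wrapping
// __m128 and using _mm_add_ps; omitted here to keep the sketch portable.
```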

timday
It's an interesting approach, but I really need SoA in my case. I might try template specialization as well, though.
Aleks
A: 

The most impressive approach to SIMD-scaling I've seen is the RTFact ray-tracing framework: slides, paper. Well worth a look. The researchers are closely associated with Intel (Saarbrucken now hosts the Intel Visual Computing Institute) so you can be sure forward scaling onto AVX and Larrabee was on their minds.

Intel's Ct "data parallelism" template library looks quite promising too.

timday
A: 

Have you thought about using existing solutions like liboil? It implements lots of common SIMD operations and can decide at runtime whether to use SIMD/non-SIMD code (using function pointers assigned by an initialization function).
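The function-pointer technique could be sketched like this (the CPU-feature check is stubbed out; a real library would query CPUID at initialization):

```cpp
#include <cstddef>

// Two implementations sharing one signature.
void add_f32_scalar(float* dst, const float* a, const float* b,
                    std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) dst[i] = a[i] + b[i];
}

void add_f32_simd(float* dst, const float* a, const float* b,
                  std::size_t n) {
    // Stand-in body; a real version would use _mm_add_ps on 4-float blocks.
    for (std::size_t i = 0; i < n; ++i) dst[i] = a[i] + b[i];
}

// Global function pointer, assigned once at startup.
void (*add_f32)(float*, const float*, const float*, std::size_t) = 0;

// Hypothetical init routine; haveSse would come from a CPUID query.
void init_dispatch(bool haveSse) {
    add_f32 = haveSse ? add_f32_simd : add_f32_scalar;
}
```

The trade-off is that every call then goes through a pointer, so the compiler cannot inline or fuse consecutive operations.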

AndiDog
I still need to check how this library uses function pointers to switch between SIMD/non-SIMD, but I don't see how those function pointers can be inlined. The other problem I noticed is that all the arithmetic functions are implemented with their own loop: for(i=0;i<n;i+=4){ ... } for(;i<n;i++){ ... } And for something like A + B + C it needs to loop through all the elements twice.
Aleks