
Case One

Say you have a little class:

class Point3D
{
private:
  float x,y,z;
public:
  Point3D &operator+=(const Point3D &other);

  // ...etc
};

Point3D &Point3D::operator+=(const Point3D &other)
{
  this->x += other.x;
  this->y += other.y;
  this->z += other.z;
  return *this;
}

A naive use of SSE would be to simply replace these function bodies with a few intrinsics. But would we expect this to make much difference? MMX used to involve costly state changes, IIRC; does SSE, or are its instructions just like any others? And even if there's no direct "use SSE" overhead, would moving the values into SSE registers and back out again really make it any faster?
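
Concretely, the sort of replacement I have in mind looks something like this (just a sketch; it assumes the point is padded out to four floats and 16-byte aligned, which the class above doesn't guarantee):

#include <xmmintrin.h>

// Hypothetical padded, aligned variant of Point3D, for illustration only.
struct alignas(16) Point3DSSE
{
  float v[4]; // v[0..2] hold x,y,z; v[3] is padding

  Point3DSSE &operator+=(const Point3DSSE &other)
  {
    __m128 a = _mm_load_ps(v);          // move this point into a register
    __m128 b = _mm_load_ps(other.v);    // move the other point into a register
    _mm_store_ps(v, _mm_add_ps(a, b));  // one SIMD add, then store back
    return *this;
  }
};

Even then, each call is still two loads, one add and one store, which is what makes me doubt it's a win.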

Case Two

Instead, you're working with a less OO-based code base. Rather than an array/vector of Point3D objects, you simply have a big array of floats:

float coordinateData[NUM_POINTS*3];

void add(int i,int j) //yes it's unsafe, no overlap check... example only
{
  for (int x=0;x<3;++x)
  {
    coordinateData[i*3+x] += coordinateData[j*3+x];
  }
}

What about use of SSE here? Any better?
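
A direct intrinsic translation would presumably look something like this (again just a sketch, and note how awkward the three-float case is: the loads are unaligned, it needs SSE2 for the mask, and it reads and writes one float past point i, so the last point in the array would need special handling):

#include <emmintrin.h>  // SSE2; also pulls in the SSE1 intrinsics

void addSSE(int i,int j) // hypothetical intrinsic version of add()
{
  // i*3 and j*3 aren't generally multiples of 4, so the loads must be unaligned
  __m128 a = _mm_loadu_ps(coordinateData + i*3); // x,y,z of point i plus one extra float
  __m128 b = _mm_loadu_ps(coordinateData + j*3); // x,y,z of point j plus one extra float

  // zero the 4th lane of b so the extra float next to point i is written back unchanged
  const __m128 mask = _mm_castsi128_ps(_mm_set_epi32(0, -1, -1, -1));
  b = _mm_and_ps(b, mask);

  _mm_storeu_ps(coordinateData + i*3, _mm_add_ps(a, b));
}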

In conclusion

Is trying to optimise single vector operations using SSE actually worthwhile, or is it really only valuable when doing bulk operations?

+1  A: 

It is valuable if your case is that you do a lot of the same calculation over a range of data. For example, if you calculate square roots for many, many values, you can load 4 values into SSE registers and perform the operation once. This will increase performance roughly fourfold.
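
For example, something along these lines (a rough sketch; the function name is illustrative and it assumes the array is 16-byte aligned with a length that is a multiple of 4):

#include <xmmintrin.h>

// take the square root of n floats, four at a time
void sqrtAll(float *values, int n)
{
  for (int i = 0; i < n; i += 4)
  {
    __m128 v = _mm_load_ps(values + i);       // load 4 values at once
    _mm_store_ps(values + i, _mm_sqrt_ps(v)); // 4 square roots in one instruction
  }
}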

And there are libraries that already have all the SSE optimization inside them. Don't reinvent the wheel.

Andrey
+3  A: 

In general you will need to take additional steps to get the best out of SSE (or any other SIMD architecture):

  • data needs to be 16-byte aligned (ideally)

  • data needs to be contiguous

  • you need enough data to make the SIMD operation worthwhile

  • you need to coalesce as many operations as you can to mitigate the costs of loads/stores (see the sketch after this list)

  • you need to be aware of the cache/memory hierarchy and its performance impact (e.g. use strip-mining/tiling)
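
Put together, a minimal sketch of the kind of loop this leads to (illustrative only; the function name is made up, and it assumes the buffers are 16-byte aligned, contiguous, non-overlapping, and that n is a multiple of 4):

#include <xmmintrin.h>

// out[i] = (a[i] + b[i]) * scale -- two arithmetic ops per load/store pair
void addAndScale(float *out, const float *a, const float *b, float scale, int n)
{
  const __m128 s = _mm_set1_ps(scale);
  for (int i = 0; i < n; i += 4)
  {
    __m128 va = _mm_load_ps(a + i);
    __m128 vb = _mm_load_ps(b + i);
    _mm_store_ps(out + i, _mm_mul_ps(_mm_add_ps(va, vb), s));
  }
}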

Paul R
If we align the data structures, do we no longer need to load values into registers? Or do we still, and aligning just speeds that part up?
John
Your data needs to be 16-byte aligned in order to get the most efficient loads/stores between memory and SSE registers - SSE does support misaligned loads/stores but there is a significant performance penalty for using these on anything other than Core i7.
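
In intrinsic terms (a trivial sketch; the wrapper functions are just for illustration):

#include <xmmintrin.h>

// same result, different requirements and cost
__m128 loadAligned(const float *p)   { return _mm_load_ps(p); }  // p must be 16-byte aligned
__m128 loadUnaligned(const float *p) { return _mm_loadu_ps(p); } // any alignment, slower on older CPUs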
Paul R
A: 

I tried Case One at work a couple of years ago and the performance gain was barely measurable. In the end I decided to skip it, since the hassle of aligning every Point3D on a 16-byte boundary made it not worthwhile.

As you've correctly guessed, SSE is best suited to bulk operations, where it can give a pretty good speed-up. Before you go ahead and use the SSE intrinsics, check what code the compiler is already generating. I know from experience that Visual Studio, for instance, is pretty good at using SSE optimizations.

Andreas Brinck
If you want help from the compiler then Intel's ICC will do a lot more auto-vectorization than Visual Studio.
Paul R
A: 

This Gamasutra article shows what it takes to make fast SSE-based code. It covers your "Case 1" in detail.

The source code is available from the author's homepage.

nsanders