Say you have a typical game loop running about 30 times a second. One particular function takes about 50% of the time and looks like a prime candidate for parallelization - say it's a big loop, or there are 4 distinct and independent strands of work going on. Assume we already checked that the function itself parallelizes well in isolation across 2-4 cores.

Is OpenMP likely to give a speed-up in such a case? I'd expect that naively creating 1-3 threads each frame to split the work would not be great, but I don't really know what overhead thread creation/destruction brings, whether it's 10 ms or 100. And I don't know whether OpenMP is efficient at this kind of thing, or is only really suited to longer-running pieces of code.

Thoughts?

A: 

Not much. MP = message passing. Those algorithms are optimized for highly parallel cluster systems (2000 computers working on the same thing), NOT for "in one process, small fragments many times per second". Naturally this only works efficiently if the problem requires significant calculation.

Examples:

  • 3d rendering for movies, where a machine may calculate a frame in some minutes, and you need many tens of thousands of frames calculated.
TomTom
-1: While I can't categorically state that the MP in OpenMP does not stand for 'message-passing' I can categorically state that OpenMP provides the shared-memory abstraction for parallelisation and does not require the programmer to explicitly code for passing messages. It's also extremely unusual for OpenMP codes to operate well on machinery with 1000 processor cores, and very few shared memory computers with that many cores have ever been built.
High Performance Mark
-1: OpenMP is not a multi-process or multi-PC system, you're confusing it with MPI (or OpenMPI http://en.wikipedia.org/wiki/Openmpi). MP in OpenMP stands for "Multi-processing"
John
A: 

Many OpenMP implementations start up a gang of threads at program start-up and only close it down at finalisation -- i.e. they don't do a lot of destruction/construction during execution. However, I think this is implementation-dependent, so you need to check your situation and documentation carefully.

No arguing from first principles on this issue -- test!

EDIT: If you find that your implementation does start and stop threads during execution, you can probably wrap the whole program in an omp parallel construct and use master clauses to ensure that the single-threaded parts of the program are not parallelised. This is probably easier if you have an implementation of OpenMP 3.0 than an implementation of the earlier specifications.

High Performance Mark
A: 

Creating and destroying threads every 1/30th of a second is probably not going to be that performant. People will say profile, but others with any significant multithreading experience will say reduce the number of system calls. In this case, it would be easier to create those threads once and figure out a way for them to execute requests from the main thread.

If that is all you are doing, you can probably just use #pragma omp task and #pragma omp taskwait.

MSN
task is an OpenMP 3.0 feature, isn't it? And according to Wikipedia it's not formalised or widely supported yet. So, given that my compiler is VS2008, I think only FOR and SECTION are available.
John
You can also set up a for loop over an array of function pointers.
MSN