views: 1783
answers: 3

Almost everywhere I read about programming with CUDA, there is a mention of how important it is that all of the threads in a warp do the same thing.
In my code I have a situation where I can't avoid a certain condition. It looks like this:

// some math code, calculating d1, d2
if (d1 < 0.5)
{
    buffer[x1] += 1;  // buffer is in the global memory
}
if (d2 < 0.5)
{
    buffer[x2] += 1;
}
// some more math code.

Some of the threads might enter one of the conditions, some might enter both, and others might not enter either.

Now, in order to make all the threads get back to "doing the same thing" again after the conditions, should I synchronize them after the conditions using __syncthreads()? Or does this somehow happen automagically?
Can two threads end up not doing the same thing because one of them is one operation behind, thus ruining it for everyone? Or is there some behind-the-scenes effort to get them doing the same thing again after a branch?

A: 

From section 6.1 of the CUDA Best Practices Guide:

Any flow control instruction (if, switch, do, for, while) can significantly affect the instruction throughput by causing threads of the same warp to diverge; that is, to follow different execution paths. If this happens, the different execution paths must be serialized, increasing the total number of instructions executed for this warp. When all the different execution paths have completed, the threads converge back to the same execution path.

So, you don't need to do anything special.
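
As an illustration, here is just the code from the question with comments added; nothing about the structure needs to change:

// some math code, calculating d1, d2
if (d1 < 0.5)
{
    buffer[x1] += 1;  // threads that fail the test simply idle while the others execute this
}
if (d2 < 0.5)
{
    buffer[x2] += 1;  // same here; no thread in the warp gets permanently ahead
}
// by this point the warp has already reconverged; no __syncthreads() is needed for that
// some more math code.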

Edric
+10  A: 

Within a warp, no threads will "get ahead" of any others. If there is a conditional branch and it is taken by some threads in the warp but not others (a.k.a. warp "divergence"), the other threads will just idle until the branch is complete and they all "converge" back together on a common instruction. So if you only need within-warp synchronization of threads, that happens "automagically."

But different warps are not synchronized this way. So if your algorithm requires that certain operations be complete across many warps then you'll need to use explicit synchronization calls (see the CUDA Programming Guide, Section 5.4).
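
For example, here is a minimal sketch (hypothetical kernel and names, assuming 256 threads per block and one output slot per block) of the common write-then-read pattern on shared memory, where __syncthreads() guarantees that every warp in the block has finished writing before any warp reads:

__global__ void blockSumSketch(const float *in, float *out)
{
    __shared__ float partial[256];   // assumes blockDim.x == 256

    int tid = threadIdx.x;
    partial[tid] = in[blockIdx.x * blockDim.x + tid];

    __syncthreads();                 // every warp in the block has written its value

    if (tid == 0)                    // one thread can now safely read all of them
    {
        float sum = 0.0f;
        for (int i = 0; i < blockDim.x; ++i)
            sum += partial[i];
        out[blockIdx.x] = sum;       // one result per block
    }
}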


EDIT: reorganized the next few paragraphs to clarify some things.

There are really two different issues here: Instruction synchronization and memory visibility.

  • __syncthreads() enforces instruction synchronization and ensures memory visibility, but only within a block, not across blocks (CUDA Programming Guide, Appendix B.6). It is useful for write-then-read on shared memory, but is not appropriate for synchronizing global memory access.

  • __threadfence() ensures global memory visibility but doesn't do any instruction synchronization, so in my experience it is of limited use (but see sample code in Appendix B.5).

  • Global instruction synchronization is not possible within a kernel. If you need f() done on all threads before calling g() on any thread, split f() and g() into two different kernels and call them serially from the host.

  • If you just need to increment shared or global counters, consider using the atomic increment function atomicInc() (Appendix B.10). In the case of your code above, if x1 and x2 are not globally unique (across all threads in your grid), non-atomic increments will result in a race condition, similar to the last paragraph of Appendix B.2.4 (a sketch follows this list).
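
Applied to your code, a minimal sketch of the atomic version could look like this (assuming buffer is declared as unsigned int * in global memory; atomicAdd() is shown for simplicity, since atomicInc() also takes a wrap-around bound as its second argument):

// some math code, calculating d1, d2
if (d1 < 0.5f)
{
    atomicAdd(&buffer[x1], 1u);   // safe even if several threads share the same x1
}
if (d2 < 0.5f)
{
    atomicAdd(&buffer[x2], 1u);   // likewise for x2
}
// some more math code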

Finally, keep in mind that operations on global memory, and synchronization functions in particular (including atomics), are bad for performance.

Without knowing the problem you're solving it is hard to speculate, but perhaps you can redesign your algorithm to use shared memory instead of global memory in some places. This will reduce the need for synchronization and give you a performance boost.
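
As one sketch of the kind of redesign meant here (a generic per-block counting pattern, not your exact code; NBINS and countSketch are made-up names, the device must support shared-memory atomics, and the % NBINS folding is only there to keep the sketch self-contained, so this only pays off if x1 and x2 fall into a reasonably small range), counts are first accumulated in fast shared memory and each block touches global memory only once per counter:

#define NBINS 256   // hypothetical number of counters; buffer must hold at least this many

__global__ void countSketch(const float *d1s, const float *d2s,
                            const int *x1s, const int *x2s,
                            unsigned int *buffer, int n)
{
    __shared__ unsigned int local[NBINS];

    // zero the block-local counters
    for (int i = threadIdx.x; i < NBINS; i += blockDim.x)
        local[i] = 0u;
    __syncthreads();

    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < n)
    {
        if (d1s[idx] < 0.5f)
            atomicAdd(&local[x1s[idx] % NBINS], 1u);   // cheap shared-memory atomic
        if (d2s[idx] < 0.5f)
            atomicAdd(&local[x2s[idx] % NBINS], 1u);
    }
    __syncthreads();

    // one round of (more expensive) global atomics per block
    for (int i = threadIdx.x; i < NBINS; i += blockDim.x)
        atomicAdd(&buffer[i], local[i]);
}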

Gabriel
A: 

The answer to your question is no; you don't need to do anything special. That said, you can avoid the branches altogether: instead of your code, you can do something like this:

buffer[x1] += (d1 < 0.5);
buffer[x2] += (d2 < 0.5);

You should check whether you can use shared memory and access global memory in a coalesced pattern. Also make sure that you DON'T write to the same index from more than one thread.
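
For completeness, a minimal sketch of the branchless version in kernel form (hypothetical kernel and parameter names; as noted above, this still assumes that no two threads end up with the same x1 or x2, otherwise you are back to needing atomics):

__global__ void updateSketch(float *buffer, const float *d1s, const float *d2s,
                             const int *x1s, const int *x2s, int n)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < n)
    {
        // the comparison evaluates to 1 or 0, so no branch is required
        buffer[x1s[idx]] += (d1s[idx] < 0.5f);
        buffer[x2s[idx]] += (d2s[idx] < 0.5f);
    }
}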

crick3r