I have been reading the programming guides for CUDA and OpenCL, and I cannot figure out what a bank conflict is. They just sort of dive into how to solve the problem without elaborating on the subject itself. I tried googling for "bank conflict" and "bank conflict computer science" but I couldn't find much. Can anybody help me understand or point me to a good link? I have no preference whether the help is in the context of CUDA/OpenCL or just bank conflicts in general in computer science, thanks :)

+2  A: 

The shared memory that can be accessed in parallel is divided into modules (also called banks). If two threads access memory locations (addresses) that fall in the same bank, you get a bank conflict, and the accesses are done serially, losing the advantages of parallel access.

belwood
So is this related to when a half-warp wants to store or load memory? 16 threads will be trying to do a memory transaction, and thus accessing the same bank with more than one thread causes serialized processing? Also, how does one make sure you're not storing/loading data in the same bank?
+7  A: 

For NVIDIA (and AMD, for that matter) GPUs the local memory is divided into memory banks. Each bank can only serve one address at a time, so if two threads of a half-warp load/store data from/to addresses in the same bank, the accesses have to be serialized (this is a bank conflict). For GT200 GPUs there are 16 banks (32 banks for Fermi; 16 or 32 banks for AMD GPUs: 32 for 57xx or higher, 16 for everything below), which are interleaved with a granularity of 32 bits (so bytes 0-3 are in bank 1, 4-7 in bank 2, ..., 60-63 in bank 16, then 64-67 are in bank 1 again, and so on). For a better visualization it basically looks like this:

Bank    |      1      |      2      |      3      |...
Address |  0  1  2  3 |  4  5  6  7 |  8  9 10 11 |...
Address | 64 65 66 67 | 68 69 70 71 | 72 73 74 75 |...
...

So if each thread in a half-warp accesses successive 32-bit values, there are no bank conflicts. An exception to this rule (every thread must access its own bank) is the broadcast: if all threads access the same address, the value is only read once and broadcast to all threads (for GT200 it has to be all threads in the half-warp accessing the same address; IIRC Fermi and AMD GPUs can do this for any number of threads accessing the same value).

Grizzly
Sweet, thanks for the visual and the explanation. I didn't know about broadcasts, and that seems like an important bit of information :) How would I go about verifying that my loads and stores don't cause bank conflicts in shared memory? Do I have to get at the assembly code somehow, or are there other ways?
Since the occurrence of a bank conflict is something determined at runtime (the compiler doesn't know about it; after all, most addresses are generated at runtime), looking at the compiled version wouldn't help much. I typically do this the old-fashioned way, meaning I take pen and paper and start thinking about what my code stores where. After all, the rules governing the occurrence of bank conflicts aren't that complex. Otherwise you can use the NVIDIA OpenCL profiler (should be bundled with the SDK, IIRC). I think it has a counter for warp serializes.
Grizzly
Thanks for pointing out warp serializes. One of the readme text files that comes with the compute profiler said this,
Ack, excuse the comment above; for some reason I can't re-edit it. Anyway, I found this in the compute profiler's readme: "warp_serialize: Number of thread warps that serialize on address conflicts to either shared or constant memory." It's great that I can easily see whether there are conflicts just by looking at the profiler output. How do you figure out if there are bank conflicts on pen and paper? Did you learn from any examples or tutorials?
As I said, the mapping from addresses to banks is relatively simple, so it isn't that hard to figure out which accesses go to which bank and therefore whether there are bank conflicts. Pen and paper is only needed for the more complex access patterns, where I can't do it without.
Grizzly