Hey all,

I am using Compute Prof 3.2 and a GeForce GTX 280, so I believe I have compute capability 1.3.

This file, http://developer.download.nvidia.com/compute/cuda/3_0/toolkit/docs/visual_profiler_cuda/CUDA_Profiler_3.0.txt, seems to show that I should be able to see these counters since I am using a 1.x compute device. However, I don't see them, and the User Guide for the 3.2 toolkit says I can't see them, though it calls them gst_uncoalesced and gst_coalesced.

To sum up, I am confused about how to tell from the profiler whether I am making non-coalesced reads from global memory. It doesn't look like Fermi cards will report this either, but I am not worried about them for now. If anybody can elaborate on the situation, I would appreciate it.

Also, I've been told to look at the assembly of my kernels to figure this out, so any elaboration on how to do that is appreciated too. I am just starting to learn that side of things :)

+1  A: 

I had similar problems with the profiler output. On an 8600 (compute capability 1.0) it showed both coalesced and uncoalesced reads/writes, but on the GTX 280 it showed only coalesced. I assumed that was due to the improved coalescing on the GTX 280 making the distinction less clear (is a memory read in which all but one word is unneeded uncoalesced?). However, you can just look at the summary table. There you find a load and a store efficiency for each kernel. If all accesses are coalesced, that efficiency should be 1; otherwise it is less than one (0.5 meaning that only half of the loaded bytes are actually used).

Of course that doesn't help you much in figuring out where exactly the uncoalesced accesses are inside your kernel, so the best approach is still to know how coalescing works (the addresses of each half-warp are gathered into 32-, 64-, and 128-byte transactions, and values inside that region that are not accessed are transferred anyway) and to analyse your access patterns.
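To make that concrete, here is a rough sketch (the kernel names are made up for illustration): in the first kernel the 16 threads of a half-warp read 16 consecutive floats, which the hardware can combine into a single 64-byte transaction, while in the second the strided reads scatter the half-warp's addresses over many segments, so each access becomes its own transaction and the load efficiency drops well below 1.

// Hypothetical kernels: coalesced vs. strided global loads on a compute 1.x device.
__global__ void copyCoalesced(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i];                // thread k reads element k: contiguous addresses,
                                       // one 64-byte transaction per half-warp
}

// Same amount of useful data, but each thread reads with a stride, so the
// half-warp's addresses span many segments and the loads cannot be coalesced.
__global__ void copyStrided(const float *in, float *out, int n, int stride)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[(i * stride) % n]; // scattered reads: load efficiency well below 1
}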

Grizzly
Thanks for the response. I think you must be right about gld_efficiency and gst_efficiency. I am still looking for concrete examples of what kind of CUDA or OpenCL code generates non-coalesced reads/writes, and the same for bank conflicts. NVIDIA's docs show nice diagrams, but not the code that would go along with them. Are there any concrete examples out there?
@user464095: The NVIDIA OpenCL Best Practices Guide has some examples (matrix multiply in several variants with massively different memory performance). I would post some of my own code with the corresponding reasoning, but since most of it is work related I can't share it as is. So maybe later.
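In the meantime, here is a generic sketch of the bank-conflict part (a made-up 16x16 shared-memory tile transpose, not my work code): on 1.x hardware shared memory has 16 banks of 4 bytes each, so when a half-warp reads the tile transposed every thread lands in the same bank, a 16-way conflict, while padding each row by one extra float shifts consecutive rows into different banks and removes the conflict.

#define TILE 16

// Assumes a square matrix with width a multiple of TILE; launch with
// dim3(TILE, TILE) threads per block and (width/TILE, width/TILE) blocks.
__global__ void transposeTile(const float *in, float *out, int width)
{
    __shared__ float tile[TILE][TILE + 1];   // drop the "+ 1" to get 16-way bank conflicts

    int x = blockIdx.x * TILE + threadIdx.x;
    int y = blockIdx.y * TILE + threadIdx.y;
    tile[threadIdx.y][threadIdx.x] = in[y * width + x];    // coalesced global load
    __syncthreads();

    // Transposed read: within a half-warp threadIdx.y is fixed and threadIdx.x varies,
    // so without padding thread x touches word x*16 + ty and (x*16 + ty) % 16 is the
    // same bank for all 16 threads; with the pad it is x*17 + ty, a different bank each.
    int ox = blockIdx.y * TILE + threadIdx.x;
    int oy = blockIdx.x * TILE + threadIdx.y;
    out[oy * width + ox] = tile[threadIdx.x][threadIdx.y]; // coalesced global store
}

Dropping the + 1 and profiling both versions is an easy way to see the difference.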
Grizzly