Hi folks,

I've been messing around with this for a while now, but can't seem to get it right. I'm trying to copy objects that contain arrays into CUDA device memory (and back again, but I'll cross that bridge when I come to it):

struct MyData {
  float *data;
  int dataLen;
};

void copyToGPU() {
  // Create dummy objects to copy
  int N = 10;
  MyData *h_items = new MyData[N];
  for (int i=0; i<N; i++) {
    h_items[i].dataLen = 100;
    h_items[i].data = new float[100];
  }

  // Copy objects to GPU
  MyData *d_items;
  int memSize = N * sizeof(MyData);
  cudaMalloc((void**)&d_items, memSize);
  cudaMemcpy(d_items, h_items, memSize, cudaMemcpyHostToDevice);

  // Run the kernel
  MyFunc<<<100,100>>>(d_items);
}

__global__
static void MyFunc(MyData *data) {
  int idx = blockIdx.x * blockDim.x + threadIdx.x;
  for (int i=0; i<data[idx].dataLen; i++) {
    // Do something with data[idx].data[i]
  }
}

When I launch MyFunc on d_items, I can access data[idx].dataLen just fine. However, data[idx].data still points at host memory, because the float arrays themselves were never copied to the device.

I can't use d_items[i].data in copyToGPU as a destination for cudaMalloc/cudaMemcpy operations, since the host code cannot dereference a device pointer.

What to do?

A: 
  • Allocate device memory for the data of all structures as one contiguous array.
  • Copy the contiguous data from host to GPU.
  • Adjust the pointers in the host-side structs to point into that device array before copying the structs over.

example:

float *d_data;
cudaMalloc((void**)&d_data, N*100*sizeof(float));
for (int i = 0; i < N; i++) {
    // point each struct's data at its 100-float slice of the device array
    h_items[i].data = d_data + i*100;
}