Heya,
When I started programming in OpenCL I used the following approach for providing data to my kernels:
cl_mem buff = clCreateBuffer(cl_ctx, CL_MEM_READ_WRITE, object_size, NULL, NULL);
clEnqueueWriteBuffer(cl_queue, buff, CL_TRUE, 0, object_size, (void *) object, 0, NULL, NULL);
This obviously required me to partition my data into chunks, making sure each chunk would fit into device memory. After the computation, I'd read the results back with clEnqueueReadBuffer(). However, at some point I realised I could just use the following line:
cl_mem buff = clCreateBuffer(cl_ctx, CL_MEM_READ_WRITE | CL_MEM_USE_HOST_PTR, object_size, (void*) object, NULL);
When doing this, partitioning the data became unnecessary. And to my surprise, I saw a huge boost in performance. That is what I don't understand: from what I gathered, when using a host pointer the device memory works as a cache, but all the data still has to be copied over for processing and then copied back to main memory once it's finished. How come the explicit copies (clEnqueueRead/WriteBuffer) are an order of magnitude slower, when in my mind the two approaches should be basically the same? Am I missing something?
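In case it helps, the chunked version looked roughly like this (simplified: kernel creation, error handling and exact sizes are omitted, and kernel, global_size and num_chunks are just placeholder names here):

// Sketch of the explicit-copy version: one reusable device buffer,
// write a chunk, run the kernel, read the chunk back.
size_t chunk_size = object_size / num_chunks;
cl_mem buff = clCreateBuffer(cl_ctx, CL_MEM_READ_WRITE, chunk_size, NULL, NULL);
for (size_t i = 0; i < num_chunks; ++i) {
    char *chunk = (char *) object + i * chunk_size;
    // blocking copy of one chunk to the device
    clEnqueueWriteBuffer(cl_queue, buff, CL_TRUE, 0, chunk_size, chunk, 0, NULL, NULL);
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &buff);
    clEnqueueNDRangeKernel(cl_queue, kernel, 1, NULL, &global_size, NULL, 0, NULL, NULL);
    // blocking copy of the results back into the same host chunk
    clEnqueueReadBuffer(cl_queue, buff, CL_TRUE, 0, chunk_size, chunk, 0, NULL, NULL);
}
clReleaseMemObject(buff);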
Thanks.