Hello,

I'm starting to use CUDA at the moment and have to admit that I'm a bit disappointed with the C API. I understand the reasons for choosing C but had the language been based on C++ instead, several aspects would have been a lot simpler, e.g. device memory allocation (via cudaMalloc).

My plan was to do this myself, using an overloaded placement operator new and RAII (two alternatives). I'm wondering whether there are any caveats that I haven't noticed so far. The code seems to work, but I'm still wondering about potential memory leaks.

The usage of the RAII code would be as follows:

CudaArray<float> device_data(SIZE);
// Use `device_data` as if it were a raw pointer.

Perhaps a class is overkill in this context (especially since you'd still have to use cudaMemcpy, the class only encapsulating RAII), so the other approach would be placement new:

float* device_data = new (cudaDevice) float[SIZE];
// Use `device_data` …
operator delete [](device_data, cudaDevice);

Here, cudaDevice merely acts as a tag to trigger the overload. However, since in normal placement new this would indicate the placement, I find the syntax oddly consistent and perhaps even preferable to using a class.

I'd appreciate criticism of every kind. Does somebody perhaps know if something in this direction is planned for the next version of CUDA (which, as I've heard, will improve its C++ support, whatever they mean by that)?

So, my question is actually threefold:

  1. Is my placement new overload semantically correct? Does it leak memory?
  2. Does anybody have information about future CUDA developments that go in this general direction (let's face it: C interfaces in C++ s*ck)?
  3. How can I take this further in a consistent manner? There are other APIs to consider: there's not only device memory but also a constant memory store and texture memory. (See the sketch at the end of this question.)


#include <cstddef>         // std::size_t
#include <new>             // declarations of the standard operator new/delete
#include <cuda_runtime.h>  // cudaMalloc, cudaFree

// Singleton tag for CUDA device memory placement.
struct CudaDevice {
    static CudaDevice const& get() { return instance; }
private:
    static CudaDevice const instance;
    CudaDevice() { }
    CudaDevice(CudaDevice const&);
    CudaDevice& operator =(CudaDevice const&);
} const& cudaDevice = CudaDevice::get();

CudaDevice const CudaDevice::instance;

inline void* operator new [](std::size_t nbytes, CudaDevice const&) {
    void* ret;
    cudaMalloc(&ret, nbytes); // NB: the returned cudaError_t goes unchecked
    return ret;
}

inline void operator delete [](void* p, CudaDevice const&) throw() {
    // Called implicitly only if the matching placement new throws;
    // otherwise it must be invoked explicitly (as CudaArray's dtor does).
    cudaFree(p);
}

template <typename T>
class CudaArray {
public:
    explicit
    CudaArray(std::size_t size) : size(size), data(new (cudaDevice) T[size]) { }

    operator T* () { return data; }

    ~CudaArray() {
        // Explicitly invokes the matching operator delete[]; element
        // destructors never run, so T should be a POD type.
        operator delete [](data, cudaDevice);
    }

private:
    std::size_t const size;
    T* const data;

    CudaArray(CudaArray const&);
    CudaArray& operator =(CudaArray const&);
};

About the singleton employed here: Yes, I'm aware of its drawbacks. However, these aren't relevant in this context. All I needed here was a small type tag that wasn't copyable. Everything else (i.e. multithreading considerations, time of initialization) doesn't apply.
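To illustrate what I mean by point 3, here's an untested sketch of the same tag idiom applied to page-locked host memory; cudaMallocHost and cudaFreeHost are the real runtime calls, while the CudaPinnedHost tag is made up for this example. Constant and texture memory wouldn't fit this pattern, since they aren't malloc-style allocations.

// Untested sketch: the same idiom for a different memory space.
struct CudaPinnedHost { };
CudaPinnedHost const cudaPinnedHost = CudaPinnedHost();

inline void* operator new [](std::size_t nbytes, CudaPinnedHost const&) {
    void* ret;
    cudaMallocHost(&ret, nbytes); // page-locked host allocation
    return ret;
}

inline void operator delete [](void* p, CudaPinnedHost const&) throw() {
    cudaFreeHost(p);
}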

+2  A: 

You seem to have charged in without telling us what you are planning to do or explaining your use cases for the data (this is probably because it is obvious to you).

Right, sorry. CUDA is a GPGPU programming language from NVIDIA, built on top of C/C++ by providing a frontend for GCC. My question is primarily directed at people who already know its ins and outs. My usage is pretty arbitrary; the question is really more concerned with CUDA, because CUDA only offers a C interface and thus forces you to forgo a lot of useful C++ features even if you work in C++ anyway.

The only C APIs I see so far are cudaMalloc and cudaFree. … Can't you just wrap these inside the constructor/destructor of your CUDA class?

Yes … and no. That's more or less what I'm doing at the moment, but I'm not satisfied with it. My question is actually threefold; I've updated the question above with the three numbered points.

Apart from Malloc and Free, what other APIs are there? I presume they allocate memory and copy data from the device into the newly allocated memory?

Yes … in fact, I've just thought of a way of encapsulating the cudaMemcpy functionality as well. ;-)
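Roughly something like this (untested sketch; the wrapper names are made up, only the cudaMemcpy calls and copy kinds are actual CUDA API):

// Untested sketch; relies on CudaArray's implicit conversion to T*.
template <typename T>
void copy_to_device(CudaArray<T>& dst, T const* src, std::size_t n) {
    cudaMemcpy(dst, src, n * sizeof(T), cudaMemcpyHostToDevice);
}

template <typename T>
void copy_to_host(T* dst, CudaArray<T>& src, std::size_t n) {
    cudaMemcpy(dst, src, n * sizeof(T), cudaMemcpyDeviceToHost);
}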

Do you just want to see the raw data as arrays of some specific type? Or are there other operations you want to perform?

Actually, once the memory is initialized and some data copied into it (see cudaMemcpy above), I'm pretty much done. The rest of the action takes place on the GPU, where I only need some basic array accesses. The very basic workflow here is:

  1. Allocate device memory,
  2. Copy your data to device memory,
  3. Invoke the (parallel) GPU action that processes the memory,
  4. Copy data back to RAM.

Step 3 is pretty much set in stone.
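In code, the whole workflow looks roughly like this (untested sketch; `process` and the <<<64, 256>>> launch configuration are placeholders for the real kernel):

#include <vector>

__global__ void process(float* data, std::size_t n); // placeholder kernel

void run(std::vector<float> const& input, std::vector<float>& output) {
    CudaArray<float> device_data(input.size());         // 1. allocate device memory
    cudaMemcpy(device_data, &input[0], input.size() * sizeof(float),
               cudaMemcpyHostToDevice);                 // 2. copy data to the device
    process<<<64, 256>>>(device_data, input.size());    // 3. parallel GPU action
    output.resize(input.size());
    cudaMemcpy(&output[0], device_data, output.size() * sizeof(float),
               cudaMemcpyDeviceToHost);                 // 4. copy data back to RAM
}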

Konrad Rudolph
+4  A: 

I would go with the placement new approach. Then I would define a class that conforms to the std::allocator<> interface. In theory, you could pass this class as a template parameter into std::vector<> and std::map<> and so forth.

Beware: I have heard that doing such things is fraught with difficulty, but at least you will learn a lot more about the STL this way, and you do not need to reinvent your containers and algorithms.
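A minimal sketch of what such an allocator might look like (untested; the no-op construct/destroy are a deliberate simplification for POD types, and host code must never dereference the returned pointers, which is part of why this is fraught):

template <typename T>
class cuda_allocator {
public:
    typedef T              value_type;
    typedef T*             pointer;
    typedef T const*       const_pointer;
    typedef T&             reference;
    typedef T const&       const_reference;
    typedef std::size_t    size_type;
    typedef std::ptrdiff_t difference_type;

    template <typename U> struct rebind { typedef cuda_allocator<U> other; };

    pointer allocate(size_type n, void const* = 0) {
        void* p = 0;
        cudaMalloc(&p, n * sizeof(T)); // error handling omitted
        return static_cast<pointer>(p);
    }

    void deallocate(pointer p, size_type) { cudaFree(p); }

    size_type max_size() const { return size_type(-1) / sizeof(T); }

    // Device memory cannot be touched from the host, so construct/destroy
    // are no-ops; this restricts the allocator to POD element types.
    void construct(pointer, T const&) { }
    void destroy(pointer) { }
};

template <typename T, typename U>
bool operator ==(cuda_allocator<T> const&, cuda_allocator<U> const&) { return true; }
template <typename T, typename U>
bool operator !=(cuda_allocator<T> const&, cuda_allocator<U> const&) { return false; }

In theory, std::vector<float, cuda_allocator<float> > would then manage device memory, though most of vector's member functions dereference elements and thus won't work from host code.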

coryan
Hadn't thought about an allocator. I've actually done this before so it shouldn't be too difficult.
Konrad Rudolph
+1  A: 

There are already two projects that attempt something similar:

In the meantime, however, I've implemented my allocator; it works really well and was completely straightforward (> 95% boilerplate code).

Konrad Rudolph
+1  A: 

In the meantime there have been some further developments (not so much in terms of the CUDA API, but at least in terms of projects attempting an STL-like approach to CUDA data management).

Most notably, there is a project from NVIDIA Research: Thrust.
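With Thrust, the workflow from the question shrinks to something like this (sketch; the kernel step is elided):

#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/copy.h>

thrust::host_vector<float> host(SIZE);
// … fill `host` …
thrust::device_vector<float> device = host;               // allocate + copy to device
// … pass thrust::raw_pointer_cast(&device[0]) to a kernel …
thrust::copy(device.begin(), device.end(), host.begin()); // copy results back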

kynan