views: 1454
answers: 2

I have a neural network written in Erlang, and I just bought a GeForce GTX 260 card with a 240 core GPU on it. Is it trivial to use CUDA as glue to run this on the graphics card?

+13  A: 

No, using CUDA is not a trivial matter.

The CUDA programming model is basically C (with some extensions), but in order to get the most out of the GPU's capabilities you have to ensure that your algorithms follow the CUDA guidelines (see the NVIDIA CUDA Programming Guide).

For example, in order to get the best memory performance (somewhere around 70 GB/s) you need to access memory in streaming mode with coalescing. Branches are also very costly on GPUs, so you should avoid conditionals as much as possible. Check out the guide and the samples provided with the SDK; they'll give you an excellent starting point.
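To make the coalescing point concrete, here is a minimal sketch of the access pattern the answer describes; the kernel name and the scaling example are illustrative, not from the answer:

```cuda
#include <cuda_runtime.h>

// Thread i reads in[i]: consecutive threads in a warp touch consecutive
// floats, so the hardware coalesces their loads into a few wide memory
// transactions instead of one transaction per thread.
__global__ void scale(const float *in, float *out, int n, float k)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                // bounds check is uniform across the warp,
        out[i] = k * in[i];   // so it causes no branch divergence
}

// By contrast, a strided pattern like in[i * stride] scatters the warp's
// loads across memory and each load becomes its own transaction.
```

The same idea applies to stores: arrange your data so that neighbouring threads write neighbouring addresses.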

SpaceghostAli
Yup, that's a lot of work. To get a significant speedup you'll have to understand how to arrange the data, plus the concepts of half-warps and coalescing as mentioned. Also, I believe the target machine code changes according to the GPU series...
Sushant
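On that last point: nvcc does let you target a specific GPU generation with the `-arch` flag (the file name below is a placeholder):

```shell
# Compile for compute capability 1.3, the GTX 260 era; the generated
# machine code differs between GPU generations, so pick the -arch that
# matches your card.
nvcc -arch=sm_13 kernel.cu -o kernel
```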
+1  A: 

I wish I could tell you how to do this with Erlang... ;-), but at least, Satnam Singh at MS Research has done some very interesting work with Haskell (Lava) and F#. Perhaps this paper can give you some intuition for how it could be done:

http://research.microsoft.com/en-us/people/satnams/

Broken link. Satnam's homepage is here: http://research.microsoft.com/en-us/people/satnams/
Ade Miller