I have a neural network written in Erlang, and I just bought a GeForce GTX 260 card with a 240-core GPU on it. Is it trivial to use CUDA as glue to run this on the graphics card?
No, using CUDA is not a trivial matter.
The CUDA programming model is basically C (with some additions), but to get the most out of the GPGPU's capabilities you have to make sure your algorithms follow the CUDA guidelines (see the NVidia CUDA Programming Guide).
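To give a feel for what "C with some additions" means, here is a minimal vector-add sketch; it has nothing to do with your network, and the names are illustrative. The `__global__` qualifier, the built-in thread indices, and the `<<<grid, block>>>` launch syntax are the main extensions over plain C:

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

// Kernel: each thread computes one element of c = a + b.
__global__ void vec_add(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float *a = (float *)malloc(bytes), *b = (float *)malloc(bytes), *c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Allocate device memory and copy the inputs over the PCIe bus.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, b, bytes, cudaMemcpyHostToDevice);

    // Launch a grid of 256-thread blocks covering all n elements.
    vec_add<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(c, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", c[0]);  // expect 3.0
    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(a); free(b); free(c);
    return 0;
}
```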
For example, to get the best memory performance (somewhere around 70 GB/s) you need to access memory in a streaming fashion with coalescing; branches are also very costly on GPUs, so you should avoid conditionals as much as possible. Check out the guide and the samples provided with the SDK; they make an excellent starting point.
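The two guidelines look like this in kernel code. This is a sketch for illustration, not a benchmark; the kernel names and the stride parameter are made up:

```cuda
// Coalesced: consecutive threads touch consecutive addresses, so the
// hardware can combine a warp's loads into a few wide transactions.
__global__ void scale_coalesced(float *data, float k, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= k;
}

// Strided: thread i touches data[i * stride]; on G80/GT200-era hardware
// this breaks coalescing and can cut effective bandwidth dramatically.
__global__ void scale_strided(float *data, float k, int n, int stride)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i * stride < n)
        data[i * stride] *= k;
}

// Divergent branch: threads in the same warp take different paths, and the
// warp executes both paths serially.
__global__ void relu_branchy(float *x, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        if (x[i] < 0.0f)   // warps straddling the sign boundary diverge
            x[i] = 0.0f;
    }
}

// Branch-free version: replacing the conditional with arithmetic
// (here fmaxf) keeps every thread in the warp on one path.
__global__ void relu_branchless(float *x, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        x[i] = fmaxf(x[i], 0.0f);
}
```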
I wish I could tell you how to do this with Erlang... ;-), but Satnam Singh at MS Research has done some very interesting work with Haskell (Lava) and F#. Perhaps this paper can give you some intuition for how it could be done: