views: 2729 · answers: 8

Hello,

I've recently read a lot about software (mostly scientific/math and encryption related) that moves part of its calculations onto the GPU, which can yield a 100-1000x (!) speed-up for supported operations.

Is there a library, API or other way to run something on the GPU via C#? I'm thinking of a simple Pi calculation. I have a GeForce 8800 GTX, if that's relevant at all (though I'd prefer a card-independent solution).
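To be concrete, the kind of "simple Pi calculation" I have in mind is a Monte Carlo estimate like the CPU-only sketch below; every sample is independent, which is exactly what should map well onto a GPU (the class and method names are just mine for illustration):

```csharp
// Monte Carlo Pi: sample random points in the unit square and count
// how many land inside the quarter circle. Each sample is independent,
// so a GPU version would simply split the loop across many threads.
using System;

class PiEstimate
{
    public static double Estimate(int samples, int seed)
    {
        var rng = new Random(seed); // fixed seed for repeatability
        int inside = 0;
        for (int i = 0; i < samples; i++)
        {
            double x = rng.NextDouble();
            double y = rng.NextDouble();
            if (x * x + y * y <= 1.0) inside++;
        }
        // Area of quarter circle / area of square = pi/4
        return 4.0 * inside / samples;
    }

    static void Main()
    {
        Console.WriteLine(Estimate(1000000, 42)); // prints roughly 3.14
    }
}
```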

Any hints are appreciated!

+14  A: 

It's a very new technology, but you might investigate CUDA. Since your question is tagged with C#, here is a .NET wrapper.

As a bonus, it appears that your 8800 GTX supports CUDA.

Charlie Salts
+6  A: 

You might want to look at this question.

You're probably looking for Accelerator, but if you're interested in game development in general, I'd suggest you take a look at XNA.

Tchami
I had not heard of this one - interesting! I worry that it's still a research project though. Is it ready for commercial applications?
Charlie Salts
To be perfectly honest I haven't tried it out and I don't know how stable it is. I do some graphics programming and had it in my bookmarks for future reference, and it seemed to be easier to approach than CUDA for this problem.
Tchami
I've got an ATI card so maybe I'll give Accelerator a try.
Charlie Salts
+3  A: 

CUDA.NET should be exactly what you're looking for, and it seems to support your specific graphics card.

Michael Borgwardt
+2  A: 

You can access the latest Direct3D APIs from .NET using the Windows API Code Pack. Direct3D 11 comes with Compute Shaders. These are roughly comparable to CUDA, but also work on non-NVIDIA GPUs.

Note that Managed DirectX and XNA are limited to the Direct3D 9 feature set, which is somewhat difficult to use for GPGPU.

Malte Clasen
+1  A: 

There is a set of .NET bindings for Nvidia's CUDA API called CUDA.net. You can refer to the reference guide to look at some sample C# code.

The preferred way to access your coprocessor (GPU) would be OpenCL, so that your code stays portable to ATI cards, but I believe there may be additional coding required, and I'm not sure how mature OpenCL support is on the .NET platform.
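Whichever binding you end up with, the device-side half is written in OpenCL's C dialect and handed to the runtime as a source string; a minimal kernel looks something like this (device code only — the host-side setup of context, queue and buffers goes through your chosen binding and is omitted):

```c
/* OpenCL kernel fragment: element-wise vector add.
   get_global_id(0) gives this work-item's index into the arrays. */
__kernel void add(__global const float* a,
                  __global const float* b,
                  __global float* c)
{
    int i = get_global_id(0);
    c[i] = a[i] + b[i];
}
```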

If you want to use C++, here's a quick overview on how to get some sample code compiling with Visual Studio.

Darwyn
+12  A: 

Another option that hasn't been mentioned for GPU calculation from C# is Brahma.

Brahma provides a LINQ-based abstraction for GPU calculations - it's basically LINQ to GPU. It works over OpenGL and DirectX without extra libraries (but requires Shader Model 3). Some of the samples are fairly amazing.

Reed Copsey
A: 

FYI: Accelerator (http://research.microsoft.com/en-us/projects/Accelerator/) worked great in a couple of tests.

Alex
A: 

Hi,

Thanks for all the answers to this. I'm working on an ANN program that processes millions (sometimes billions) of floating-point calculations every training iteration.

The last version runs purely in software, and I'm looking at anywhere between 5 and 45 minutes per iteration on my cheapo dev PC :)

Obviously this is really impractical unless it's possible to dump all of this onto the graphics card. At the moment it's just a bit of a pet project, so it's not such a problem to have a box crunching data overnight.

The ultimate goal is to implement run-time training on the network, and with the current overhead that's impossible.
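For context, the hot loop is essentially a weighted sum per neuron, repeated across every layer - the kind of dense arithmetic a GPU eats for breakfast. A stripped-down sketch (the layer sizes and names here are made up; the real network is far bigger):

```csharp
// One layer of a forward pass: for each neuron, a weighted sum of the
// inputs followed by an activation. These multiply-adds are the
// floating-point operations that dominate the training time.
using System;

class Forward
{
    public static double[] Layer(double[] input, double[,] weights)
    {
        int neurons = weights.GetLength(0);
        int inputs = weights.GetLength(1);
        var output = new double[neurons];
        for (int n = 0; n < neurons; n++)
        {
            double sum = 0.0;
            for (int i = 0; i < inputs; i++)
                sum += weights[n, i] * input[i]; // the dominating FLOPs
            output[n] = Math.Tanh(sum);          // activation function
        }
        return output;
    }

    static void Main()
    {
        var input = new double[] { 0.5, -0.25, 1.0 };
        var weights = new double[2, 3] { { 0.1, 0.2, 0.3 },
                                         { -0.4, 0.5, -0.6 } };
        var hidden = Layer(input, weights);
        Console.WriteLine(hidden[0]);
    }
}
```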

Once again, thanks guys! I'll be looking into the solutions you've all posted!

Taylor