Hello,

I am interested in using F# for numerical computation. How can I access the GPU using NVIDIA's CUDA standard from F#?

+2  A: 

Accelerator from Microsoft allows you to leverage the GPU, so you can do something like the sketch below, though you can't use CUDA.
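For illustration, a minimal sketch of what this looks like, assuming Accelerator v2's .NET API (the Microsoft.ParallelArrays assembly) is referenced from an F# project:

    open Microsoft.ParallelArrays

    // Build a data-parallel expression; nothing is executed yet.
    let a = new FloatParallelArray(Array.create 1000 1.0f)
    let b = new FloatParallelArray(Array.create 1000 2.0f)
    let sum = a + b

    // Evaluate the whole expression on the GPU via the DirectX 9 target.
    let target = new DX9Target()
    let result : float32[] = target.ToArray1D(sum)

Accelerator builds the computation up as an expression tree and only compiles and runs it on the GPU when you ask a target to evaluate it.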

jasper
Curious that the Accelerator documentation makes no mention of CUDA...
Eric
Yeah right. Accelerator is based on DX9, not CUDA...
Stringer Bell
+3  A: 

As an alternative, you could consider using DirectCompute. The three big GPU compute APIs (CUDA, OpenCL and DirectCompute) are all very similar. DirectCompute can easily be accessed from F# via SlimDX, a .NET wrapper for DirectX; see the sketch below.
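For a rough idea, here is a hedged sketch of dispatching a compute shader through SlimDX's Direct3D 11 and D3DCompiler wrappers from F#; the shader file "square.hlsl" and its entry point "CSMain" are hypothetical, and the buffer/UAV setup a real program needs is omitted:

    open SlimDX.Direct3D11
    open SlimDX.D3DCompiler

    // Create a hardware device capable of running compute shaders.
    let device = new Device(DriverType.Hardware, DeviceCreationFlags.None)

    // Compile the (hypothetical) HLSL compute shader and bind it.
    let bytecode =
        ShaderBytecode.CompileFromFile("square.hlsl", "CSMain", "cs_5_0",
                                       ShaderFlags.None, EffectFlags.None)
    let shader = new ComputeShader(device, bytecode)
    let context = device.ImmediateContext
    context.ComputeShader.Set(shader)

    // Input/output buffers and views omitted for brevity.
    context.Dispatch(64, 1, 1)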

Johan
-1 DirectCompute is the newest and least well documented of the three GPU APIs. I can't really recommend it right now.
Eric
+1 DirectCompute documentation is actually great.
Stringer Bell
+1  A: 

You might look into CUDA.NET. It would let you use CUDA straight from F#. It can be found here: http://www.hoopoe-cloud.com/Solutions/CUDA.NET/Default.aspx

The other usual alternative for using CUDA from managed code is to encapsulate the CUDA functionality in a native DLL and then either P/Invoke it or write a C++/CLI wrapper around it, which you then use from your F# program.
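The P/Invoke route is straightforward from F#. A minimal sketch, where "MyCudaKernels.dll" and its exported "vector_add" function are placeholders for whatever your native CUDA DLL actually exports:

    open System.Runtime.InteropServices

    // Assumed native export, built with nvcc:
    //   extern "C" void vector_add(const float* a, const float* b,
    //                              float* result, int n);
    [<DllImport("MyCudaKernels.dll", CallingConvention = CallingConvention.Cdecl)>]
    extern void vector_add(float32[] a, float32[] b, float32[] result, int n)

    // F# wrapper that allocates the result array and calls into the DLL.
    let addOnGpu (a: float32[]) (b: float32[]) =
        let result = Array.zeroCreate a.Length
        vector_add(a, b, result, a.Length)
        result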

Eric
+5  A: 

I agree with jasper that the easiest option currently is to use Accelerator from Microsoft Research. I wrote a series of articles about using it from F#: a simple and direct introduction, a Game of Life example, a more advanced example using quotations, and an example of using advanced quotation features. Satnam Singh's blog is also a great resource with some F# demos.

One problem with current graphics cards is that they do not support integers (as a result, Accelerator supports them only when running on the optimized x64 parallel engine). Also, current graphics cards don't implement floating point numbers according to the IEEE standards - they try to be faster by doing a bit of "guessing", which doesn't matter when calculating a triangle's position, but could be an issue if you're dealing with financial calculations. (Accelerator can use various targets, so you're safe if you're using the x64 parallel engine; see the sketch below.)
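If you do need IEEE semantics or integers today, Accelerator lets you run the same expression on a different target. A small sketch, assuming Accelerator v2's X64MulticoreTarget class:

    open Microsoft.ParallelArrays

    // Same data-parallel style as the GPU version, but evaluated on the
    // multicore x64 engine, which gives full IEEE floating point.
    let cpuTarget = new X64MulticoreTarget()
    let xs = new FloatParallelArray(Array.create 1000 2.0f)
    let squares = cpuTarget.ToArray1D(xs * xs)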

As far as I know, DirectCompute will require a precise implementation of floating point arithmetic as well as direct support for integers, so it may be a good choice in the future (or if Accelerator eventually starts using DirectCompute as its engine).

Tomas Petricek
Are you sure the floating point issue still persists on the new NVIDIA Fermi generation (from the GTX 460 on)? They claim to have introduced improved support for double-precision arithmetic on Fermi.
Martin
@Martin: I'm not sure about the latest generation of GPUs. Perhaps they already fixed this - it would be useful to have some clear guarantees.
Tomas Petricek
I can assure you that all DX10 and DX11 generation hardware supports 32-bit integers. The latest DX11 generation (both AMD/ATI and NVIDIA) supports IEEE 754-2008, which means fused multiply-add and all the fancy stuff.
Stringer Bell
+1  A: 

Probably only hardcore GPU geeks like me have heard about it: Tidepowerd has made GPGPU possible for CIL-based languages (such as F#, C#, VB.NET, whatever). On the other hand, you could do the same for the F# language alone with a quotation-to-GPU runtime/API (I'm looking forward to seeing someone implement that). This is something Agent Smith has blogged about, and it is also mentioned in the Expert F# 1.0 book (Language Oriented Programming chapter) AFAIK.

Agent Smith (OK, sorry for that) is talking about NVIDIA Cg. But you could do the same using HLSL DirectCompute shaders, OpenCL C99, PTX (NVIDIA's low-level IL), CAL-IL (AMD/ATI's low-level IL), and so on. The sketch below shows the quotation side of the idea.
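To make the quotation-to-GPU idea concrete: F# quotations hand you the expression tree of a function, which a runtime could translate into Cg, HLSL, OpenCL or PTX. A toy sketch that just inspects the tree a real translator would consume:

    open Microsoft.FSharp.Quotations
    open Microsoft.FSharp.Quotations.Patterns

    // A computation captured as an expression tree instead of compiled IL.
    let kernel = <@ fun x -> x * x + 1.0f @>

    // A real translator would emit GPU code here; this one just prints
    // the shape of the tree.
    let rec describe expr =
        match expr with
        | Lambda (v, body) -> sprintf "fun %s -> %s" v.Name (describe body)
        | Call (_, m, args) ->
            sprintf "%s(%s)" m.Name (args |> List.map describe |> String.concat ", ")
        | Var v -> v.Name
        | Value (v, _) -> string v
        | _ -> "?"

    printfn "%s" (describe kernel.Raw)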

Stringer Bell