views: 54
answers: 3

Hi, I have a project that requires a lot of image processing and I wanted to add GPU support to speed things up.

I was wondering: if I compiled my MATLAB code into a C++ shared library and called it from within an OpenCL program, does that mean the MATLAB code is going to run on the GPU?
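To make the setup concrete, the call I have in mind looks roughly like the sketch below, using the C++ shared library interface that mcc generates (the library name libimgproc and the function myfilter are just placeholders for my own code):

    // Rough sketch of the plan: call a MATLAB Compiler (mcc) generated C++ shared
    // library from a host program that also sets up OpenCL. "libimgproc" and
    // "myfilter" are placeholder names.
    #include <iostream>
    #include "libimgproc.h"   // generated by: mcc -W cpplib:libimgproc -T link:lib myfilter.m

    int main()
    {
        // Start the MATLAB Compiler Runtime and the generated library.
        if (!mclInitializeApplication(nullptr, 0) || !libimgprocInitialize()) {
            std::cerr << "Could not initialize the MATLAB runtime\n";
            return -1;
        }

        // Hand a small matrix to the compiled MATLAB function.
        double data[4] = {1.0, 2.0, 3.0, 4.0};
        mwArray in(2, 2, mxDOUBLE_CLASS, mxREAL);
        in.SetData(data, 4);

        mwArray out;
        myfilter(1, out, in);   // <-- the part I'm unsure about: does this run on
                                //     the GPU just because the surrounding
                                //     program uses OpenCL?

        double result[4];       // assuming the output has the same size as the input
        out.GetData(result, 4);
        std::cout << "result[0] = " << result[0] << std::endl;

        libimgprocTerminate();
        mclTerminateApplication();
        return 0;
    }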

+1  A: 

My own (semi-educated) guess is that you are going to find this very difficult to do. But others have trodden the same path. This paper might be a good place to start your research, and Googling turned up Accelereyes and a couple of references to items on the MathWorks File Exchange which you might want to follow up.

High Performance Mark
Accelereyes is for NVIDIA cards only; I'm using ATI. The iptatiproject seems promising, I'll check it out. Thanks.
OSaad
Yikes, just as I feared: I'll have to write the algorithms by hand in order to do it using OpenCL. At that point I'd be better off using C++ instead of MATLAB, because all I'd have left from MATLAB is some matrix data structures :D Though I was wondering how Accelereyes did their wrapper for NVIDIA, hmmm.
OSaad
Accelereyes is just a wrapper that overloads the basic matrix operators (multiplication, inversion, etc.) to have them executed on the GPU instead. That of course gives a very good boost, but it's not the most efficient way of doing things, since the algorithms themselves are not written to take full advantage of the parallel capabilities of the GPU, e.g. by using parallel sort algorithms instead of iterative or recursive ones. Doing that, though, would mean a whole rewrite of the MATLAB library :D (see the sketch below).
OSaad
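For what OSaad describes above, a conceptual sketch of operator-level GPU offload might look like the following in C++. This is not AccelerEyes' actual code; backend_multiply is a placeholder for whatever GPU BLAS routine a real wrapper would call.

    // Conceptual sketch only, not AccelerEyes' code: operator overloading that
    // offloads individual matrix operations to a GPU backend. The surrounding
    // algorithm keeps its original (serial) structure.
    #include <cstddef>
    #include <vector>

    // Placeholder for a real GPU BLAS call (a clBLAS/cuBLAS-style GEMM); here it
    // just falls back to a naive CPU multiply so the sketch is self-contained.
    static void backend_multiply(const std::vector<float>& a,
                                 const std::vector<float>& b,
                                 std::vector<float>& c, std::size_t n)
    {
        for (std::size_t i = 0; i < n; ++i)
            for (std::size_t j = 0; j < n; ++j) {
                float sum = 0.0f;
                for (std::size_t k = 0; k < n; ++k)
                    sum += a[i * n + k] * b[k * n + j];
                c[i * n + j] = sum;
            }
    }

    class Matrix {
    public:
        explicit Matrix(std::size_t n) : n_(n), data_(n * n, 0.0f) {}

        // The user writes A * B as usual; only this one operation is offloaded.
        Matrix operator*(const Matrix& rhs) const {
            Matrix result(n_);
            backend_multiply(data_, rhs.data_, result.data_, n_);
            return result;
        }

    private:
        std::size_t n_;
        std::vector<float> data_;
    };

Each overloaded operation is fast in isolation, but the algorithm calling them keeps whatever serial structure the original M-code had, which is the point above about needing a much deeper rewrite.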
A: 

The Parallel Computing Toolbox in the upcoming release R2010b (due September 1st) supports GPU processing for several functions. Unfortunately, it only supports NVIDIA cards with CUDA compute capability 1.3 and later, so with an ATI graphics card you're out of luck. However, you may just want to buy a dedicated GPU anyway.

Jonas
+1  A: 

At AccelerEyes, we built a full GPU runtime (memory manager, JIT compiler, a big library of functions, and a multi-GPU multiplexer) and then integrated it into MATLAB using the standard MEX interface.
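As a rough illustration of what that MEX boundary looks like (a minimal sketch, not our actual code; scale_on_gpu is a hypothetical stand-in for a native CUDA or OpenCL routine):

    // Minimal sketch of the standard MEX interface: a gateway function that
    // MATLAB calls, which can hand its data to native (e.g. GPU) code.
    #include "mex.h"

    static void scale_on_gpu(const double* in, double* out, mwSize n)
    {
        // Placeholder: a real version would copy the data to the device, launch
        // a kernel, and copy the result back. Here it just scales on the CPU so
        // the sketch stays self-contained.
        for (mwSize i = 0; i < n; ++i)
            out[i] = 2.0 * in[i];
    }

    // MATLAB calls this gateway for:  y = myscale(x)
    void mexFunction(int nlhs, mxArray* plhs[], int nrhs, const mxArray* prhs[])
    {
        if (nrhs != 1 || !mxIsDouble(prhs[0]) || mxIsComplex(prhs[0]))
            mexErrMsgTxt("Expected one real double input.");

        mwSize n = mxGetNumberOfElements(prhs[0]);
        plhs[0] = mxCreateDoubleMatrix(mxGetM(prhs[0]), mxGetN(prhs[0]), mxREAL);

        scale_on_gpu(mxGetPr(prhs[0]), mxGetPr(plhs[0]), n);
    }

Compiled with mex, that entry point becomes callable from M-code like any ordinary function; the memory manager, JIT compiler, and function library mentioned above sit on the native side of this boundary.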

We currently only support CUDA code (hence NVIDIA only). You can integrate any custom CUDA code into MATLAB via the Jacket SDK and your CUDA code will inherit the optimizations of the Jacket runtime.

We do have our eye on OpenCL. For our thoughts on OpenCL, see http://blog.accelereyes.com/blog/2010/05/10/nvidia-fermi-cuda-and-opencl/ and http://blog.accelereyes.com/blog/2008/12/30/opencl/ . As OpenCL matures (or whatever emerges from Intel's Larrabee) and especially as FFT/BLAS/LAPACK libraries are built for OpenCL and other languages, we'll make sure Jacket code can run without any further modifications on those platforms.

melonakos
In order to use the GPU efficiently, one has to write the algorithms in C to operate in a parallel fashion. AccelerEyes' Jacket is just a wrapper that overloads the built-in set of matrix operators in MATLAB to have the matrices multiplied, added, inverted, etc. on the GPU. But to take full advantage, algorithms must be rewritten to operate in parallel, e.g. using parallel sorting algorithms and so on. That of course would mean a full rewrite of the MATLAB library :D. But Jacket is still a good solution.
OSaad
To be clear, we are undertaking a full rewrite of the MATLAB libraries, so that all the algorithms are data parallel and fast on GPUs. So in that sense, there's a lot more to Jacket than just a wrapper. For instance, Jacket's JIT compiler can translate your M-code into efficient GPU-optimized kernels. Jacket also contains a parallel version of SORT (http://wiki.accelereyes.com/wiki/index.php/SORT). Jacket's main limitation is that we've not covered all the MATLAB functions yet. Here are the ones that we've done: http://wiki.accelereyes.com/wiki/index.php/Jacket_Function_List
melonakos
