views:

359

answers:

4
+6  Q: 

Learn Nvidia CUDA

I am a C++ programmer who develops image and video algorithms. Should I learn Nvidia CUDA, or is it one of those technologies that will disappear?

+2  A: 

I think you should stick with OpenCL instead, which is an open standard supported by ATI, nVidia and others. CUDA might not disappear in the next few years, but either way it is not compatible with non-nVidia GPUs.

AndiDog
Why the downvote???
AndiDog
+3  A: 

CUDA should stick around for a while, but if you're just starting out, I'd recommend looking at OpenCL or DirectCompute. Both of these run on ATI as well as NVidia hardware, in addition to also working on the vector units (SSE) of CPUs.

codekaizen
+7  A: 

CUDA is currently a single-vendor technology from NVIDIA, and therefore doesn't have the multi-vendor support that OpenCL does.

However, it's more mature than OpenCL, has great documentation, and the skills learnt using it will transfer easily to other data-parallel processing toolkits.

As an example of this, read the Data Parallel Algorithms paper by Hillis and Steele and then look at the NVIDIA tutorials - there's a clear link between the two, yet the Hillis/Steele paper was written over 20 years before CUDA was introduced.

Finally, the FCUDA project is working to allow CUDA code to target non-NVIDIA hardware (FPGAs).

Robert Christie
+2  A: 

OpenCL might take some time to become pervasive, but I found learning CUDA very informative, and I don't think CUDA is going to be out of the limelight anytime soon. Besides, CUDA is easy enough that the time it takes to learn it is much shorter than CUDA's shelf life.

Raymond Tay