views: 614
answers: 4

Does CUDA support double-precision floating-point numbers? I would also like to know the reasons why or why not.

A: 

CUDA does not support double-precision floating-point arithmetic by default, and the CUDA compiler silently demotes doubles to floats inside kernels.

Resource: CUDA support

faya
-1: this is out-of-date information. Compute capability 1.3 has double precision support.
Paul R
+3  A: 

If your GPU has compute capability 1.3 then you can do double precision. Be aware, though, that 1.3 hardware has only one double-precision FP unit per multiprocessor (MP), which has to be shared by all the threads on that MP, whereas there are 8 single-precision FPUs, so each active thread has its own single-precision FPU. In other words, you may well see 8x worse performance with double precision than with single precision.
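A minimal sketch of how to check this at run time with the CUDA runtime API (it needs the CUDA toolkit and a device to actually run, and the 1.3 threshold is the one stated above):

```c
/* Query device 0's compute capability to check for native
   double-precision support (compute capability >= 1.3). */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        fprintf(stderr, "No CUDA device found\n");
        return 1;
    }
    printf("Device 0: %s, compute capability %d.%d\n",
           prop.name, prop.major, prop.minor);
    if (prop.major > 1 || (prop.major == 1 && prop.minor >= 3)) {
        printf("Native double precision is available\n");
    } else {
        printf("Doubles will be demoted to float\n");
    }
    return 0;
}
```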

Paul R
+3  A: 

Following on from Paul R's comments, Compute Capability 2.0 devices (aka Fermi) have much improved double-precision support, with peak double-precision performance only half that of single precision.

This Fermi whitepaper has more details about the double-precision performance of the new devices.

Edric
+1: thanks for that additional info - I haven't worked with CUDA for about a year now and wasn't aware of Compute Capability 2.0 - nothing in tech stays still for very long!
Paul R
Be aware, though, that Fermi's double-precision performance is (artificially) lower on GeForce cards than on Tesla cards. Quadro cards should have the same performance level as Tesla cards.
Eric
A: 

As a tip:

If you want to use double precision, you have to set the GPU architecture to sm_13 (if your GPU supports it).

Otherwise the compiler will still demote all doubles to floats and emit only a warning (as noted in faya's post). (Very annoying if you get an error because of this :-) )

The flag is: -arch=sm_13

RandomlyGenerated