I've been trying to find information on the performance of float vs. double on graphics hardware. There's plenty of material on float vs. double for CPUs, but such information is much scarcer for GPUs.
I code with OpenGL, so if there's any info specific to that API that you feel should be known, let's have at it.
I understand that if the program is moving a lot of data to/from the graphics hardware, then it would probably be better to use floats, since doubles would require twice the bandwidth. My question is more about how the graphics hardware does its processing. As I understand it, the x87 FPU on Intel CPUs performs calculations internally in 80-bit extended precision (SSE instructions excluded), so float and double end up about equally fast there. Do modern graphics cards do anything similar? Is float and double performance about equal now? Are there any strong reasons to use one over the other?
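For concreteness, here is roughly the kind of upload/attribute setup I'm asking about. This is just a minimal sketch to illustrate the bandwidth side; the attribute layout and function names are made up for the example:

    #include <GL/glew.h>  /* or whichever extension loader you use */

    /* Upload the same 3-component positions as float vs. double.
       The double path needs twice the buffer size, and therefore
       twice the transfer bandwidth, for the same vertex count. */

    void upload_positions_float(const GLfloat *pos, GLsizei count, GLuint vbo)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, count * 3 * sizeof(GLfloat), pos, GL_STATIC_DRAW);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
        glEnableVertexAttribArray(0);
    }

    void upload_positions_double(const GLdouble *pos, GLsizei count, GLuint vbo)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, count * 3 * sizeof(GLdouble), pos, GL_STATIC_DRAW);
        /* glVertexAttribLPointer (GL 4.1 / ARB_vertex_attrib_64bit) keeps the
           data as doubles all the way to a dvec3 attribute in the shader;
           plain glVertexAttribPointer with GL_DOUBLE would convert to float
           on the way in. */
        glVertexAttribLPointer(0, 3, GL_DOUBLE, 0, 0);
        glEnableVertexAttribArray(0);
    }

So the double path clearly costs twice the bytes over the bus; what I can't find is how the shader cores treat the two types once the data is actually on the card.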