Feel free to correct me if any part of my understanding is wrong.

My understanding is that GPUs offer a subset of the instructions that a normal CPU provides but execute them much faster.

I know there are ways to utilize GPU cycles for non-graphical purposes, but it seems like (in theory) a language that's Just In Time compiled could detect the presence of a suitable GPU and offload some of the work to it behind the scenes, without any code changes.

Is my understanding naive? Or is it just that this is really complicated and simply hasn't been done yet?

+5  A: 

My understanding is that GPUs offer a subset of the instructions that a normal CPU provides but execute them much faster.

It's definitely not that simple. The GPU is tailored mainly to SIMD/vector processing. So even though the theoretical potential of GPUs nowadays is vastly superior to that of CPUs, only programs that can benefit from SIMD instructions can be executed efficiently on the GPU. Also, there is of course a performance penalty when data has to be transferred from the CPU to the GPU to be processed there.
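As a rough illustration (my own sketch, not part of the original answer), here is a minimal CUDA vector addition. The kernel itself is trivially data-parallel, which is exactly what the GPU is good at, but notice how much of the program is spent just copying data between host and device:

    // Minimal CUDA sketch: a data-parallel vector addition.
    // The kernel is SIMD-friendly, but the cudaMemcpy calls are the
    // CPU<->GPU transfer overhead mentioned above.
    #include <cuda_runtime.h>
    #include <cstdio>
    #include <cstdlib>

    __global__ void vecAdd(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];   // each thread handles one element
    }

    int main() {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);
        float *a = (float*)malloc(bytes), *b = (float*)malloc(bytes), *c = (float*)malloc(bytes);
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        float *dA, *dB, *dC;
        cudaMalloc(&dA, bytes); cudaMalloc(&dB, bytes); cudaMalloc(&dC, bytes);
        cudaMemcpy(dA, a, bytes, cudaMemcpyHostToDevice);   // CPU -> GPU
        cudaMemcpy(dB, b, bytes, cudaMemcpyHostToDevice);   // CPU -> GPU

        vecAdd<<<(n + 255) / 256, 256>>>(dA, dB, dC, n);    // the parallel work

        cudaMemcpy(c, dC, bytes, cudaMemcpyDeviceToHost);   // GPU -> CPU
        printf("c[0] = %f\n", c[0]);                        // prints 3.000000

        cudaFree(dA); cudaFree(dB); cudaFree(dC);
        free(a); free(b); free(c);
        return 0;
    }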

So for a JIT compiler to use the GPU efficiently, it must be able to detect code that can be parallelized to benefit from SIMD instructions, and it then has to determine whether the overhead induced by transferring data from the CPU to the GPU will be outweighed by the performance improvements.
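To make that trade-off concrete, here is a hedged sketch of the kind of break-even estimate such a compiler might make before offloading a loop. All of the constants (transfer bandwidth, launch overhead, per-element costs) are invented placeholder numbers, not measurements from any real runtime:

    #include <cstddef>

    // Rough cost model: is offloading n elements to the GPU worth it?
    bool worthOffloading(size_t n, size_t bytesPerElement) {
        const double pcieBandwidth  = 12e9;   // assumed ~12 GB/s transfer rate
        const double launchOverhead = 10e-6;  // assumed ~10 us kernel launch cost
        const double cpuTimePerElem = 2e-9;   // assumed CPU cost per element
        const double gpuTimePerElem = 0.1e-9; // assumed GPU cost per element

        // Data has to travel to the device and back again.
        const double transferTime = 2.0 * n * bytesPerElement / pcieBandwidth;
        const double gpuTime = launchOverhead + transferTime + n * gpuTimePerElem;
        const double cpuTime = n * cpuTimePerElem;
        return gpuTime < cpuTime;   // offload only if the GPU path looks faster
    }

For small n, the transfer and launch costs dominate and the function returns false; only large, data-parallel workloads cross the break-even point.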

inflagranti
That's right, I forgot that a lot of GPU speed comes from parallelization, with modern GPUs having hundreds of (logical?) cores. That does reduce the usefulness, since very few apps are set up for that.
Davy8
A: 

It is possible to use a GPU (e.g., a CUDA- or OpenCL-enabled one) to speed up the JIT itself. Both register allocation and instruction scheduling could be implemented efficiently.

SK-logic