NVIDIA released its CUDA API, allowing developers to harness the massively parallel architecture and vectorized operations of its graphics cards. Libraries such as PyCUDA were created so that developers working in scripting languages can offload selected code to the GPU.

Meanwhile, there has been a growing effort to design multi-lingual virtual machines, such as Parrot, on top of strongly typed, concurrency-friendly languages like Erlang.

So I wonder, are there any open source projects to code a virtual machine environment tailored to take full benefit of the GPU?

I would imagine that a strongly typed, monadically secured concurrent environment for running major scripting languages, one able to take advantage of everything the GPU has to offer, would be an extremely interesting field. But so far I haven't found anything on Google.

Is anyone working on this?

Edit: I should perhaps have stated that rather than sharing a GPU, such projects might also target using a dedicated GPU.


No. The graphics card is always hosted in a machine whose CPU can take care of those details, leaving the full capacity of the card for the processing at hand rather than spending some of its limited resources on what amount to maintenance chores.

In other words, the GPU isn't designed for, and isn't well suited to, VM work, script processing, and so on; these tasks would consume an extraordinary amount of its resources to work at all well.

Adam Davis
+4  A: 

The reason no one is trying to migrate full-on processes entirely onto the GPU is that it isn't good at that kind of thing: branchy, unpredictable code is very much at odds with the average GPU's execution and memory model. Even the Cell, whose SPEs are much more CPU-like and better able to deal with general-purpose code, still has a regular CPU component as well.

If GPUs were suited to this kind of thing then they wouldn't be GPUs, they'd be CPUs.

+2  A: 

Modern VMs such as Java and .NET actually support much richer features than GPUs currently do. Although you can get an incredible amount of raw computing power out of a GPU, basic features are still missing, such as recursive function calls and function pointers. These are needed to implement functional or object-oriented languages. GPUs will likely have them eventually, but they don't now.
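To make the limitation concrete, here is a minimal sketch (hypothetical kernel name) of what GPU code of this generation looks like, contrasted with the kind of dispatch loop a VM interpreter relies on:

```cuda
// Illustrative sketch. Early CUDA kernels must be flat and iterative:
// no recursion and no function pointers on pre-Fermi devices.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;   // every thread does the same straight-line work
}

// A bytecode interpreter's core loop has no direct GPU equivalent here,
// because the opcode handlers are calls through function pointers:
//
//   while (running) handlers[fetch_opcode()]();   // fine on a CPU, not on the GPU
```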

That said, NVIDIA already has a public ISA called PTX. It should be possible to write a translator that converts simple VM bytecode into this language so it could run on any NVIDIA GPU, but I don't know of any project that does this.
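For a sense of the target such a translator would emit, here is an illustrative PTX fragment for something like "load x; add 1.0; store x" (register names and exact directives vary by PTX version; consult the PTX ISA documentation):

```ptx
// Hypothetical output of a VM-to-PTX translator (PTX 2.x style).
.visible .entry add_one (.param .u64 p)
{
    .reg .u64 %rd<3>;
    .reg .f32 %f<3>;

    ld.param.u64       %rd1, [p];
    cvta.to.global.u64 %rd2, %rd1;
    ld.global.f32      %f1, [%rd2];
    add.f32            %f2, %f1, 0f3F800000;  // + 1.0f (hex float literal)
    st.global.f32      [%rd2], %f2;
    ret;
}
```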

Jay Conrod

One major limiting factor with the current NVIDIA implementation of CUDA is that each device is only accessible from a single CPU thread. This makes it impossible to share a device between programmes on the same physical machine, let alone virtual machines.
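A host-side sketch of that restriction, assuming the CUDA runtime API of this era (contexts are created lazily and bound to the calling CPU thread):

```cuda
#include <cuda_runtime.h>

int main(void)
{
    cudaSetDevice(0);                    // device 0 is now bound to this CPU thread
    float *d = 0;
    cudaMalloc((void **)&d, 1024 * sizeof(float)); // lives in this thread's context

    // A second CPU thread (or another process) calling into the runtime gets
    // its own context: the pointer `d` is meaningless there, and the device
    // cannot be shared between the two.

    cudaFree(d);
    return 0;
}
```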


A VM can't be used to access CUDA hardware because a VM virtualizes the devices and doesn't expose PCIe and the other buses that are important for efficient use of the device. There are some VM hacks one can use, but they all have security/stability issues.

One could use a jail on BSD (or a zone on OpenSolaris) to provide such guarantees, but there are no CUDA drivers for those operating systems.