views: 966

answers: 8
What do you think the future of GPU-as-a-CPU initiatives like CUDA is? Do you think they are going to become mainstream and be the next adopted fad in the industry? Apple is building a new framework for using the GPU to do CPU tasks, and there has been a lot of success with Nvidia's CUDA project in the sciences. Would you suggest that a student commit time to this field?

+1  A: 

I think it's the right way to go.

Considering that GPUs have been tapped to create cheap supercomputers, it appears to be the natural evolution of things. With so much computing power and R&D already done for you, why not exploit the available technology?

So go ahead and do it. It will make for some cool research, as well as a legit reason to buy that high-end graphics card so you can play Crysis and Assassin's Creed at full graphical detail ;)

Jon Limjap
+3  A: 

First of all, I don't think this question really belongs on SO.

In my opinion the GPU is a very interesting alternative whenever you do vector-based floating-point mathematics. However, this translates to: it will not become mainstream. Most mainstream (desktop) applications do very few floating-point calculations.
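
To make that concrete, the kind of vector-based float work a GPU handles well looks something like the CUDA fragment below (a minimal sketch; the SAXPY operation and names are just an illustration, not something from the question):

    // Illustrative CUDA kernel: y = a*x + y over a large float array.
    // Every element is independent, which is exactly the shape of work a GPU likes,
    // and exactly what a typical desktop application rarely spends its time doing.
    __global__ void saxpy(int n, float a, const float *x, float *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
        if (i < n)
            y[i] = a * x[i] + y[i];
    }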

It has already gained traction in games (physics engines) and in scientific calculations. If you consider either of those two "mainstream", then yes, the GPU will become mainstream.

I would not consider those two mainstream, and I therefore don't think the GPU will rise to be the next adopted fad in the mainstream industry.

If you, as a student, have any interest in heavily physics-based scientific calculations, you should absolutely commit some time to it (GPUs are very interesting pieces of hardware anyway).

Mo
Opinion, but no answer...
leppie
Considering that supercomputers are built for the sole purpose of scientific calculations, and video games lead the development of graphics-, AI-, and physics-intensive applications (especially all at once), I don't know how you could consider them not mainstream. But I do agree, the GPU will never replace the CPU. GPUs just don't have the flexibility.
Narcolapser
A: 

With so much untapped power I cannot see how it would go unused for too long. The question, though, is how the GPU will be used for this. CUDA seems to be a good guess for now, but other technologies are emerging on the horizon which might make it more approachable for the average developer.

Apple has recently announced OpenCL, which they claim is much more than CUDA, yet quite simple. I'm not sure what exactly to make of that, but the Khronos Group (the people behind the OpenGL standard) is working on the OpenCL standard and is trying to make it highly interoperable with OpenGL. This might lead to a technology which is better suited for normal software development.

It's an interesting subject and, incidentally, I'm about to start my master's thesis on how best to make GPU power available to average developers (if possible), with CUDA as the main focus.

Morten Christiansen
Have you seen GPU++? It came from a thesis similar to the one you're about to begin. It might give you a bump start.
gbjbaanb
Thank you, it appears to be an interesting read.
Morten Christiansen
+2  A: 

It's one of those things that you see one or two applications for, but soon enough someone will come up with a 'killer app' that figures out how to do something more generally useful with it, at super-fast speeds.

Pixel shaders can apply routines to large arrays of float values; maybe we'll see some GIS coverage applications, or, well, I don't know. If you don't devote more time to it than I have, then you'll have the same level of insight as me - i.e. little!

I have a feeling it could be a really big thing, as do Intel and S3; maybe it just needs one little tweak added to the hardware, or someone with a light bulb above their head.

gbjbaanb
+5  A: 

Long term, I think that the GPU will cease to exist, as general-purpose processors evolve to take over those functions. Intel's Larrabee is the first step. History has shown that betting against x86 is a bad idea.

Study of massively parallel architectures and vector processing will still be useful.

Mark Ransom
Thanks for the answer, it made me think differently about the subject.
Liran Orevi
+8  A: 

Commit time if you are interested in scientific and parallel computing. Don't think of CUDA as making a GPU appear to be a CPU. It only allows a more direct method of programming GPUs than older GPGPU programming techniques.
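
To give a rough feel for what "more direct" means (a minimal sketch; the array size, kernel name, and scale-by-two operation are invented for illustration), the CUDA pattern is to write a kernel and launch it over your data, where the older GPGPU techniques meant packing the same data into textures and driving pixel shaders through a graphics API:

    // Minimal CUDA sketch: allocate device memory, copy data in, launch one
    // thread per element, copy the results back. (Error checking omitted.)
    #include <cuda_runtime.h>
    #include <cstdio>
    #include <cstdlib>

    __global__ void scale(float *data, int n, float factor)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per element
        if (i < n)
            data[i] *= factor;
    }

    int main()
    {
        const int n = 1 << 20;                  // ~1M floats
        const size_t bytes = n * sizeof(float);

        float *h_data = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) h_data[i] = 1.0f;

        float *d_data;
        cudaMalloc(&d_data, bytes);                                  // GPU memory
        cudaMemcpy(d_data, h_data, bytes, cudaMemcpyHostToDevice);   // host -> device
        scale<<<(n + 255) / 256, 256>>>(d_data, n, 2.0f);            // parallel launch
        cudaMemcpy(h_data, d_data, bytes, cudaMemcpyDeviceToHost);   // device -> host
        cudaFree(d_data);

        printf("data[0] = %f\n", h_data[0]);    // expect 2.0
        free(h_data);
        return 0;
    }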

General-purpose CPUs derive their ability to work well on a wide variety of tasks from all the work that has gone into branch prediction, pipelining, superscalar execution, etc. This makes it possible for them to achieve good performance on a wide variety of workloads, while making them suck at high-throughput, memory-intensive floating-point operations.

GPUs were originally designed to do one thing, and do it very, very well. Graphics operations are inherently parallel. You can calculate the colour of all pixels on the screen at the same time, because there are no data dependencies between the results. Additionally, the algorithms needed did not have to deal with branches, since nearly any branch that would be required could be achieved by setting a coefficient to zero or one. The hardware could therefore be very simple. It is not necessary to worry about branch prediction, and instead of making a processor superscalar, you can simply add as many ALUs as you can cram onto the chip.
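
As a sketch of that "coefficient instead of a branch" idea (the threshold operation and names below are invented purely for illustration), a per-pixel CUDA kernel can fold the condition into a 0/1 blend so that every thread runs the same straight-line code:

    // Illustrative CUDA kernel: darken pixels that fall below a brightness threshold.
    // The comparison produces a 0/1 coefficient and both outcomes are blended,
    // rather than taking divergent branches per pixel.
    __global__ void darken(float *pixels, int n, float threshold, float factor)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per pixel
        if (i < n) {
            float p = pixels[i];
            float c = (p < threshold) ? 1.0f : 0.0f;     // coefficient, not a branch body
            pixels[i] = c * (p * factor) + (1.0f - c) * p;
        }
    }

A few multiplies are wasted computing both outcomes, but all the threads keep marching in lockstep, which is what lets the hardware stay simple.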

With programmable texture and vertex shaders, GPUs gained a path to general programmability, but they are still limited by the hardware, which is still designed for high-throughput floating-point operations. Some additional circuitry will probably be added to enable more general-purpose computation, but only up to a point. Anything that compromises the ability of a GPU to do graphics won't make it in. After all, GPU companies are still in the graphics business, and the target market is still gamers and people who need high-end visualization.

The GPGPU market is still a drop in the bucket, and to a certain extent will remain so. After all, "it looks pretty" is a much lower standard to meet than "100% guaranteed and reproducible results, every time."

So in short, GPUs will never be feasible as CPUs. They are simply designed for different kinds of workloads. I expect GPUs will gain features that make them useful for quickly solving a wider variety of problems, but they will always be graphics processing units first and foremost.

It will always be important to match the problem you have with the most appropriate tool you have to solve it.

mch
+1 "After all, "it looks pretty" is a much lower standard to meet than "100% guaranteed and reproducible results, every time."" perfectly said!
Blindy
+1 for nice explanation
0x69
+2  A: 

GPUs will never supplant CPUs. A CPU executes a set of sequential instructions, and a GPU does a very specific type of calculation in parallel. GPUs have great utility in numerical computing and graphics; however, most programs can in no way utilize this flavor of computing.

You will soon begin seeing new processors from Intel and AMD that include GPU-esque floating-point vector computations as well as standard CPU computations.

temp2290
A: 

I just built my own server with two Tesla C1060s and an i7 920 -- I still don't regret purchasing the C1060s, but I would ultimately agree with Mark Ransom... For what it's worth, and I am certainly not an expert (but I have been heavily researching the subject while doing research and architectural planning for my next project), here are my two cents:

I would suggest researching threading and caching as it's done at the GPU vs. the CPU level, but with the .Net 4.0 release, the framework is there to interact with my GPUs by use of the TaskScheduler.FromCurrentSynchronizationContext() method and the ThreadLocal class.

I would be surprised if GPU language developers don't latch on to this and provide a method for simple integration of threads and memory allocation.

Considering I got my GPUs at half the usual price (a developers' special), I'm very happy with the purchase, as it's the most efficient way for me to attain the computational speeds I need with the amount of money I had to spend (or at least the most efficient way that I am aware of at this point in time).

Anyway, IMHO, this is a significant step towards simplifying integration between code and new hardware architectures; it makes your code more dynamic (in terms of interoperability) and, hopefully, future releases will include automatic optimization between CPU and GPU capabilities on the current machine...

I'd be really interested to hear some other thoughts on this...

SoundLogic
GPU threads are vastly different from CLR threads and OS threads (and other threads on the CPU). To use a GPU effectively, you need a different framework from CLR threads. So instead of treating GPU threads as CLR threads, it seems appropriate to use OpenCL or CUDA or Accelerate from .Net. I wouldn't expect much more than this from future versions of .Net - at most we might see some slightly higher-level wrappers.
RD1