views:

70

answers:

2

Hi guys.

As most of you know, CPUs are not as well suited to massively parallel floating-point calculation as GPUs are. I am wondering how to use a GPU's power without any abstraction layer or driver. Can I program a GPU using assembly, C, or C++ (and if so, how)? Assembly seems like it would let me access the GPU directly, whereas C/C++ appear to need an intermediate library (e.g. OpenCL) to reach the GPU.

Let me ask another question: how much of a modern GPU's capability is exposed to a programmer without any third-party driver?

+3  A: 

The interfaces aren't documented, so something like OpenCL is the only practical way to program the GPU directly. Without a driver, you'd be stuck trying to reverse engineer the complete functioning of the GPU on your own.

wrosecrans
Yes, in a practical sense, it's impossible.
Matias Valdenegro
It's not impossible, it's just REALLY, REALLY, REALLY a bad idea. To access the hardware you essentially have to write a driver, and spend weeks/months/years learning how the hardware works.
NoMoreZealots
<nitpick> AFAIK the interfaces are pretty well documented for current Intel and ATI chips. VIA documentation, otoh, is just a register list.</nitpick>
ninjalj
A: 

Well, essentially, you would have to write a driver, on either Windows or Linux. The interfaces may be documented, depending on which chipset you are trying to use; Intel has loads of PDF documentation on their website. However, this is a non-trivial exercise at best, and your code would only run on that one family of hardware. Merely reading and understanding the documentation will take some doing in most cases, because the "oops, that's not how it really works" gotchas and the how-tos aren't documented, just the hardware and its registers. If you REALLY want to do this, your best bet would be to start with the open source drivers on Linux for a particular chipset and tweak them to your SICK TWISTED purpose. All in all, other than for the learning aspect, it's probably a BAD idea.

NoMoreZealots