Hi guys.
As most of you know, CPUs deliver far less floating-point throughput than GPUs, which are built for massively parallel arithmetic. I am wondering how to use a GPU's power without any abstraction layer or driver. Can I program a GPU directly in assembly, C, or C++ (and if so, how)? Assembly seems like it might give me direct access to the GPU, whereas C/C++ appear to need an intermediary library such as OpenCL to reach it (see the sketch below for what that looks like).
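To make concrete what I mean by an intermediary library, here is a minimal OpenCL host-side sketch in C that runs a trivial vector-add kernel (error checking and resource releases omitted for brevity). Note that even this path still goes through the vendor's driver, which compiles the kernel source at run time; it is not driver-free access, just the usual portable layer.

```c
#include <stdio.h>
#include <CL/cl.h>

/* Kernel source, compiled by the driver at run time. */
static const char *src =
    "__kernel void vadd(__global const float *a,\n"
    "                   __global const float *b,\n"
    "                   __global float *c) {\n"
    "    size_t i = get_global_id(0);\n"
    "    c[i] = a[i] + b[i];\n"
    "}\n";

int main(void) {
    enum { N = 1024 };
    float a[N], b[N], c[N];
    for (int i = 0; i < N; ++i) { a[i] = (float)i; b[i] = 2.0f * i; }

    /* Pick the first platform and the first GPU device on it. */
    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    /* Build the kernel from source and create device buffers. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "vadd", NULL);

    cl_mem ba = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof a, a, NULL);
    cl_mem bb = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof b, b, NULL);
    cl_mem bc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, NULL);

    clSetKernelArg(k, 0, sizeof ba, &ba);
    clSetKernelArg(k, 1, sizeof bb, &bb);
    clSetKernelArg(k, 2, sizeof bc, &bc);

    /* Launch N work-items and read the result back. */
    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, bc, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);

    printf("c[10] = %f\n", c[10]); /* expect 30.0 */
    return 0;
}
```

(Compile with something like `gcc vadd.c -lOpenCL`.) Every call in this sketch ultimately talks to the GPU driver under the hood, which is why I am asking whether that layer can be bypassed at all.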
Let me ask another question: how much of a modern GPU's capability is exposed to a programmer without any third-party driver?