views: 354

answers: 5

Hi,

It looks like OpenGL has become mainstream across gaming platforms (even handhelds!). This has pushed modern GPU chipsets out to a large number of consumers.

This is amazing.

With the modern GPU systems out there now, is it possible to do generic old-school graphics programming (i.e., blit from rect X to rect Y in VRAM)? (Think Amiga.) Or are the operations centered around vertex and pixel shaders?

Is this accessible from GL? From OpenGL ES?

Rendering a textured quad is OK, but it would require double buffering and a re-render of the entire scene. I was hoping to avoid this.

Thx.

+2  A: 

Well, you can use libSDL and get a pointer to the screen framebuffer and do whatever you want with the pixels. Or you can do all your drawing to a memory buffer, load it into a GL texture, and draw textured quads, which is probably faster because of hardware acceleration.
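For the second approach, a minimal sketch (assuming a current GL context with the fixed-function pipeline and a pixel-space orthographic projection; the 256x256 size and names like `upload_buffer` are placeholders):

```c
#include <GL/gl.h>

#define W 256   /* power-of-two size avoids needing NPOT texture support */
#define H 256

static unsigned char pixels[W * H * 4];   /* RGBA buffer you draw into on the CPU */
static GLuint tex;

/* Upload the CPU buffer into a texture; call once per frame after drawing. */
void upload_buffer(void)
{
    if (!tex) {
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, W, H, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    } else {
        glBindTexture(GL_TEXTURE_2D, tex);
        /* Re-uploading in place is cheaper than recreating the texture */
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, W, H,
                        GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    }
}

/* Draw the texture on a quad covering the window. */
void draw_buffer(void)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex);
    glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2f(0, 0);
        glTexCoord2f(1, 0); glVertex2f(W, 0);
        glTexCoord2f(1, 1); glVertex2f(W, H);
        glTexCoord2f(0, 1); glVertex2f(0, H);
    glEnd();
}
```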

Mr Shunz
I don't want a framebuffer pointer; I want code running on the GPU to do it all.
drudru
@drudru, have you heard of OpenCL? I think it has something to do with executing arbitrary code on certain Nvidia GPUs.
Earlz
@Earlz - yeah, that might be the ticket. I was hoping someone from that camp might chime in.
drudru
+1  A: 

It may be possible on some embedded systems to get a framebuffer pointer and write to it directly, but these days you're better off using OpenGL|ES and rendering a texture. It will be more portable, and probably faster.

You could create a buffer in main memory, do all the bit twiddling you want, and then render it as a texture. You can DMA your texture data to VRAM for speed, and then render it in a quad, which is equivalent to a blit, but doesn't use any CPU cycles and runs as fast as the GPU can process.
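One way to get that DMA-style upload (a sketch assuming OpenGL 2.1+ or ARB_pixel_buffer_object, with the entry points loaded by something like GLEW; `tex`, `pbo`, and the dimensions are placeholders) is to stream the pixels through a pixel buffer object:

```c
#include <GL/glew.h>
#include <string.h>

#define W 256
#define H 256

/* Copy a CPU-rendered frame into a PBO so the driver can DMA it to VRAM
 * asynchronously, then source the texture update from that PBO. */
void stream_frame(GLuint tex, GLuint pbo, const unsigned char *src)
{
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);

    /* Orphan the old storage so we don't stall waiting on the GPU */
    glBufferData(GL_PIXEL_UNPACK_BUFFER, W * H * 4, NULL, GL_STREAM_DRAW);

    void *dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    if (dst) {
        memcpy(dst, src, W * H * 4);      /* the result of your bit twiddling */
        glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
    }

    /* With a PBO bound, the last argument is an offset into the PBO */
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, W, H,
                    GL_RGBA, GL_UNSIGNED_BYTE, (const void *)0);

    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    /* ...then render the texture on a quad as usual. */
}
```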

It's amazing what you can do with shaders and programmable pipelines these days.

gavinb
I don't want a framebuffer pointer; I want code running on the GPU to do it all.
drudru
Ok, have a look at the [render-to-texture technique](http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=36). You use shaders to render whatever you like to a texture, and then draw that texture using a quad, which will blit it to the screen. You can choose from a variety of raster operations (ROPs) too, based on the texture parameters and blend modes.
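A minimal render-to-texture setup with framebuffer objects looks roughly like this (a sketch assuming OpenGL 3.0+ or EXT_framebuffer_object, with GLEW providing the entry points; all names are placeholders):

```c
#include <GL/glew.h>

/* Create an FBO whose color attachment is a texture we can later draw with.
 * Returns the FBO name, or 0 if the framebuffer is incomplete. */
GLuint make_render_target(int w, int h, GLuint *out_tex)
{
    GLuint fbo, tex;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        return 0;                        /* incomplete: caller should fall back */

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    *out_tex = tex;
    return fbo;
}

/* Usage: bind the FBO, draw your scene (it lands in the texture), unbind,
 * then draw that texture on a screen-aligned quad. */
```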
gavinb
ok, i'll take a look
drudru
A: 

If you're drawing textured quads, with which you can easily simulate "old school" blitting, then indeed the pixels are copied from video memory by the GPU. Also note that while bitmap operations are possible in OpenGL, they can be painfully slow because the 3D path is optimized on consumer grade video cards, whereas 2D paths may not be.
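To make the blitting analogy concrete, here is a sketch of a rect-to-rect copy done with a textured quad (assuming the fixed-function pipeline and a pixel-space orthographic projection; `sheet_tex` and the rectangle parameters are placeholders):

```c
#include <GL/gl.h>

/* "Blit" a sub-rectangle of a sprite-sheet texture to a screen rectangle. */
void blit_rect(GLuint sheet_tex, int sheet_w, int sheet_h,
               int sx, int sy, int w, int h,   /* source rect in the sheet */
               int dx, int dy)                 /* destination on screen    */
{
    /* Convert the source rect to normalized texture coordinates */
    float u0 = (float)sx / sheet_w,       v0 = (float)sy / sheet_h;
    float u1 = (float)(sx + w) / sheet_w, v1 = (float)(sy + h) / sheet_h;

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, sheet_tex);
    glBegin(GL_QUADS);
        glTexCoord2f(u0, v0); glVertex2f(dx,     dy);
        glTexCoord2f(u1, v0); glVertex2f(dx + w, dy);
        glTexCoord2f(u1, v1); glVertex2f(dx + w, dy + h);
        glTexCoord2f(u0, v1); glVertex2f(dx,     dy + h);
    glEnd();
}
```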

atis
how does this avoid doing a complete frame update?
drudru
Rendering quads has little to do with double buffering. You can use a single buffer if that's how things are supposed to be done on the target device. If the whole frame changes though, you're going to have to update everything anyway.
atis
True, but for my app, 90% of the operations are scrolls. On old-school hardware, this would just be a matter of moving the hardware VRAM pointer to a new location with a preset modulo. A fast blit would do the trick as well. In my app, I can avoid a whole redraw. Thx for your answer though.
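(For reference, that wrap-around scroll can be approximated in GL without redrawing the playfield: keep it in a texture with GL_REPEAT wrapping and just offset the texture coordinates. A sketch, assuming the fixed-function pipeline and a pixel-space ortho projection; all names are placeholders:)

```c
#include <GL/gl.h>

/* Draw the whole playfield scrolled by (scroll_x, scroll_y), where 1.0
 * in texture space equals one full playfield width/height. */
void draw_scrolled(GLuint playfield_tex, float scroll_x, float scroll_y,
                   int screen_w, int screen_h)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, playfield_tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

    /* GL_REPEAT handles the modulo/wrap-around for us */
    glBegin(GL_QUADS);
        glTexCoord2f(scroll_x,        scroll_y);        glVertex2f(0,        0);
        glTexCoord2f(scroll_x + 1.0f, scroll_y);        glVertex2f(screen_w, 0);
        glTexCoord2f(scroll_x + 1.0f, scroll_y + 1.0f); glVertex2f(screen_w, screen_h);
        glTexCoord2f(scroll_x,        scroll_y + 1.0f); glVertex2f(0,        screen_h);
    glEnd();
}
```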
drudru
@drudru: in the old days you needed to scroll like that because you didn't have the processor time to redraw the frame. Nowadays, with hardware acceleration, you can redraw the screen many times over and still have time to kill, especially if you only do 2D blits.
Toad
+1  A: 

Check out the glBlitFramebuffer routine (part of the framebuffer object functionality). You'll need an up-to-date driver.

Keep in mind you can still use the default framebuffer, but I think it's more fun to use framebuffer objects.

Keep your sprites in separate framebuffers (maybe rendered using OpenGL), bind them for reading (selecting the source attachment with glReadBuffer), and blit them onto the draw framebuffer (selected with glDrawBuffer). It's quite simple and fast.
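A minimal sketch of that path (assuming OpenGL 3.0+ or EXT_framebuffer_blit, with GLEW for the entry points; `sprite_fbo` and the rectangle arguments are placeholders):

```c
#include <GL/glew.h>

/* Copy a w x h rectangle from a sprite FBO into the window's back buffer. */
void blit_sprite(GLuint sprite_fbo,
                 int src_x, int src_y, int w, int h,
                 int dst_x, int dst_y)
{
    /* Read from the sprite FBO... */
    glBindFramebuffer(GL_READ_FRAMEBUFFER, sprite_fbo);
    glReadBuffer(GL_COLOR_ATTACHMENT0);

    /* ...and draw into the default framebuffer's back buffer. */
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
    glDrawBuffer(GL_BACK);

    glBlitFramebuffer(src_x, src_y, src_x + w, src_y + h,
                      dst_x, dst_y, dst_x + w, dst_y + h,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
}
```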

Luca
A: 

Also, glCopyPixels seems to do the trick if you have the environment set up correctly.
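A sketch of the call sequence (assuming the fixed-function/compatibility path, since glCopyPixels is deprecated in core GL 3.x; glWindowPos2i needs GL 1.4+, so GLEW is included for platforms whose gl.h is older):

```c
#include <GL/glew.h>

/* Copy a w x h rectangle whose lower-left corner is (src_x, src_y)
 * to (dst_x, dst_y), entirely within the back buffer. */
void copy_rect(int src_x, int src_y, int w, int h, int dst_x, int dst_y)
{
    glReadBuffer(GL_BACK);
    glDrawBuffer(GL_BACK);
    glWindowPos2i(dst_x, dst_y);               /* destination raster position */
    glCopyPixels(src_x, src_y, w, h, GL_COLOR);
}
```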

drudru