views: 102
answers: 3
I want to display very high resolution video directly with OpenGL.

The image data is going to be processed on the GPU and I want to avoid a round-trip back to the PC to show the video in a standard bitmap based window.
Cross platform is nice, Windows only would be OK (so would nvidia only)

Anyone have any links to ways of doing this?

There is a poor NeHe tutorial and a few examples of embedded OpenGL widgets in Qt, but I need much better performance and much larger images.

(Bonus question - the ability to send the output directly to the second output on the card would be nice.)

+2  A: 

The obvious thing to do with OpenGL would be to display the bitmap as a texture.

Jerry Coffin
Yes - I just wondered if there were some video specific performance tricks, rather than just copying each frame to a full screen quad
Martin Beckett
@Martin: Not usually -- as long as you keep all the memory shuffling on the card itself, almost any reasonable video card can easily provide the bandwidth to rewrite every bit on the screen at the maximum refresh rate (especially now, since LCDs almost never refresh faster than 60 Hz -- it was harder with high-end CRTs that did 100+ Hz refresh).
Jerry Coffin
That's the problem - I need to do 1080p at 120 Hz (and ideally two of them!)
Martin Beckett
@Martin: I wouldn't bet a lot on its being a problem anyway. I managed to keep up with a ~100 Hz refresh using a GeForce 5800 driven by a Pentium III. The resolution was *somewhat* lower, but not drastically, and a modern card is *well* over twice as fast.
Jerry Coffin
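As a back-of-the-envelope check on the bandwidth figures discussed in this thread, the raw upload rate for uncompressed 1080p at 120 Hz can be estimated directly (a sketch; the 4-bytes-per-pixel RGBA assumption is mine, and YCbCr 4:2:0 would need far less):

```python
# Rough bandwidth estimate for streaming uncompressed video to the GPU.
# Assumes 4 bytes per pixel (RGBA8).

def frame_bytes(width, height, bytes_per_pixel=4):
    """Size of one uncompressed frame in bytes."""
    return width * height * bytes_per_pixel

def bandwidth_gb_per_s(width, height, fps, streams=1, bytes_per_pixel=4):
    """Required upload bandwidth in GB/s (1 GB = 10**9 bytes)."""
    return frame_bytes(width, height, bytes_per_pixel) * fps * streams / 1e9

# One 1080p stream at 120 Hz: ~1.0 GB/s
one = bandwidth_gb_per_s(1920, 1080, 120)
# Two streams, as the question asks: ~2.0 GB/s
two = bandwidth_gb_per_s(1920, 1080, 120, streams=2)
```

Even the two-stream case (~2 GB/s) sits comfortably within a PCIe 2.0 x16 link, which supports the point that the upload itself need not be the bottleneck.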
+1  A: 

So you want to send your video to a texture and process it with a fragment shader? Here's a short tutorial on how to do something similar. It's a simple OpenGL 2.0 example that creates two window-sized textures and mixes them in a fragment shader. There's no video involved, but it shouldn't be hard to modify if you already have the means to decode it.
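The mixing step that tutorial performs is just a per-pixel linear interpolation (GLSL's built-in mix()). A minimal Python reference for the same operation, for readers unfamiliar with it:

```python
# Per-pixel linear blend of two frames -- the operation GLSL's
# mix(a, b, t) performs in a fragment shader.

def mix(a, b, t):
    """Linear interpolation: returns a when t == 0, b when t == 1."""
    return a * (1.0 - t) + b * t

def blend_frames(frame_a, frame_b, t):
    """Blend two equal-sized frames, given as flat lists of channel values."""
    return [mix(a, b, t) for a, b in zip(frame_a, frame_b)]
```

On the GPU this runs once per fragment, so blending two full-screen textures costs essentially nothing beyond the texture fetches.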

Ivan Baldin
+2  A: 

Assuming OpenGL 2.1, use a buffer object of type GL_PIXEL_UNPACK_BUFFER to stream pixel data to a texture. It's faster than uploading the data every frame, because the implementation can use DMA for the copy when you use glMapBuffer, glMapBufferRange (OpenGL 3.2) or call glBufferData directly. You can also copy several frames per batch to trade copy overhead against mapping overhead. Finally, write a shader to convert YUV or YCbCr to RGB and display the texture with a triangle strip.
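For reference, here is the YCbCr-to-RGB conversion the suggested fragment shader would perform, sketched as plain Python. The full-range BT.601 coefficients are my assumption; broadcast HD video is often limited-range BT.709, which needs different constants:

```python
# CPU reference for the YCbCr -> RGB conversion done in the fragment
# shader. Assumes full-range BT.601 (JPEG-style) coefficients.

def ycbcr_to_rgb(y, cb, cr):
    """Convert one full-range BT.601 YCbCr pixel (0-255 per channel) to RGB."""
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    # Clamp to the displayable range, as the shader would.
    return tuple(max(0, min(255, round(c))) for c in (r, g, b))
```

In GLSL this collapses to a single matrix multiply per fragment, so the conversion is essentially free compared to the cost of getting the pixels onto the card in the first place.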

Mads Elvheim
Any reason for the triangle over a fullscreen quad?
Martin Beckett
Yes, quads suck in every respect and should not be used. Just forget they ever existed.
Mads Elvheim