views:

469

answers:

3

Does anyone know how to create a texture with a YUV colorspace so that we can get hardware-based YUV-to-RGB colorspace conversion without having to use a fragment shader? I'm using an NVIDIA 9400 and I don't see an obvious GL extension that does the trick. I've found examples of how to use a fragment shader, but the project I'm working on currently only supports OpenGL 1.1, and I don't have time to convert it to 2.0 and perform all the regression testing that would require. This is also targeting Linux. On other platforms I've been using a MESA extension, but it doesn't function on the NVIDIA card.

+1  A: 

Since you're okay with using extensions but worried about going all-out with OpenGL 2.0, consider providing a simple fragment shader via the old-school ARB_fragment_program extension.
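For reference, here's a minimal sketch of what such an ARB fragment program might look like. It assumes the movie frame has been uploaded as a texture with Y, Cb, and Cr in the R, G, and B channels, and uses full-range BT.601 coefficients; BINK's exact matrix may differ. After checking for GL_ARB_fragment_program, you would load it with glProgramStringARB:

```c
#include <string.h>

/* Sketch of a YCbCr-to-RGB ARB fragment program, assuming Y/Cb/Cr are
 * stored in the texture's R/G/B channels and full-range BT.601 math.
 * Load with glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB,
 *                              GL_PROGRAM_FORMAT_ASCII_ARB,
 *                              strlen(yuv_to_rgb_fp), yuv_to_rgb_fp). */
static const char *yuv_to_rgb_fp =
    "!!ARBfp1.0\n"
    "TEMP yuv, rgb;\n"
    "TEX yuv, fragment.texcoord[0], texture[0], 2D;\n"
    /* Re-center the chroma channels around zero. */
    "SUB yuv, yuv, {0.0, 0.5, 0.5, 0.0};\n"
    /* R = Y + 1.402*Cr; G = Y - 0.344*Cb - 0.714*Cr; B = Y + 1.772*Cb */
    "DP3 rgb.r, yuv, {1.0, 0.0, 1.402, 0.0};\n"
    "DP3 rgb.g, yuv, {1.0, -0.344136, -0.714136, 0.0};\n"
    "DP3 rgb.b, yuv, {1.0, 1.772, 0.0, 0.0};\n"
    "MOV rgb.a, 1.0;\n"
    "MOV result.color, rgb;\n"
    "END\n";
```

The nice part is that ARB_fragment_program is widely supported on hardware of that era even when the driver only advertises an older GL core version, so the rest of the GL 1.1 code path can stay untouched.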

Alternatively, you could use a library like DevIL, ImageMagick, or FreeImage to perform the conversion for you.

prideout
Using a library like DevIL means doing the conversion on the CPU, which is what he is trying to avoid.
Amuck
I can't use one of the packages because it is BINK movie data so it occurs each and every frame. I suppose I could look into ARB_fragment_program.
Steven Behnke
Apologies, my eyes skipped over the "hardware based" part of the question.
prideout
A: 

OpenGL 1.1 is ancient. Get up to date, use a shader.

The other GL tricks for performing color conversions (the color matrix) were superseded by fragment shaders a long time ago.

I suppose you could use a dependent texture read, but that would require a 3D texture just to do some relatively linear math, so it's jumping through hoops in a bad way.

Marcus Lindblom
Telling me to get up to date when I've already said I cannot in the given timeframe really doesn't help.
Steven Behnke
A: 

The MESA extension you mention is for YCrCb? If your NVIDIA card does not expose it, that means the driver does not support that texture format (exposing the extension is how the card advertises support for it).

Your only option is to do the color conversion outside of the texture filtering block, either prior to submitting the texture data to GL or after getting the values out of the texture filtering block.

GL can still help, as the linear transform is doable in GL 1.1, provided you have the right extensions (the dot3 texture combiner extension). That said, it's far from a panacea.

For what it's worth, the BINK implementation seems to do the conversion on the CPU using MMX (that's reading between the lines, though). I would probably do the same, converting with SSE prior to loading the data into OpenGL. A CPU is fast enough to do this every frame.
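For the CPU route, a scalar sketch of the per-pixel math (full-range BT.601 coefficients; a real SSE/MMX version would process several pixels at once in fixed point, and BINK's exact matrix may differ):

```c
#include <stdint.h>

/* Clamp a float to the [0, 255] byte range, rounding to nearest. */
static uint8_t clamp_u8(float v)
{
    if (v < 0.0f)   return 0;
    if (v > 255.0f) return 255;
    return (uint8_t)(v + 0.5f);
}

/* Convert one full-range BT.601 YCbCr pixel to RGB. */
static void yuv_to_rgb(uint8_t y, uint8_t u, uint8_t v,
                       uint8_t *r, uint8_t *g, uint8_t *b)
{
    float yf = (float)y;
    float cb = (float)u - 128.0f;  /* re-center chroma around zero */
    float cr = (float)v - 128.0f;

    *r = clamp_u8(yf + 1.402f * cr);
    *g = clamp_u8(yf - 0.344136f * cb - 0.714136f * cr);
    *b = clamp_u8(yf + 1.772f * cb);
}
```

You would run this over the decoded frame and upload the result as an ordinary GL_RGB texture each frame, which keeps the GL side strictly 1.1.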

Bahbar