I'm trying to follow the directions from this page:
http://www.opengl.org/resources/faq/technical/color.htm
regarding rendering primitives with a unique color

I've checked the number of bits for every color and the result was 8 for each.
When calling:

 glColor3ui(0x80000000, 0, 0xFF000000);

and reading the pixel back with glReadPixels() I get the color 0xFFFE007F,
which corresponds to R=0x7F, G=0, B=0xFE.
The two lower bits of red and blue are wrong.

Why is that?
I'm using a brand new nVidia card on a Dell laptop with the most current drivers.

A: 

Have you done this, too?

In either event, you'll need to ensure that any state that could
affect the final color has been disabled. The following code
accomplishes this:

glDisable(GL_BLEND);
glDisable(GL_DITHER);
glDisable(GL_FOG);
glDisable(GL_LIGHTING);
glDisable(GL_TEXTURE_1D);
glDisable(GL_TEXTURE_2D);
glDisable(GL_TEXTURE_3D);
glShadeModel(GL_FLAT);

Also check that the buffer you pass to glReadPixels() uses 24 or 32 bits per pixel.

schnaader
+2  A: 

It turns out that the FAQ has a mistake.

The documentation of glColor states that: "Unsigned integer color components, when specified, are linearly mapped to floating-point values such that the largest representable value maps to 1.0 (full intensity), and 0 maps to 0.0 (zero intensity)."
This actually suggests that to get full intensity white I should call:

glColor3ui(0xFFFFFFFF, 0xFFFFFFFF, 0xFFFFFFFF);

and not

glColor3ui(0xFF000000, 0xFF000000, 0xFF000000);

as the FAQ suggests.
And that explains why 0xFF000000 mapped to 254: 0xFF000000 / 0xFFFFFFFF is roughly 255/256, so full intensity (255) gets scaled down to about 254.

I have filed a bug report with the people supposedly maintaining the FAQ.

shoosh