384 views · 2 answers
A lot of suggestions for improving performance in iPhone games revolve around sending less data to the GPU. The obvious one is to use GLshort instead of GLfloat wherever possible, such as for vertices, normals, or texture coordinates.

What are the specifics when using a GLshort for a normal or a texture coordinate? Is it possible to represent a GLfloat texture coordinate of 0.5 when using a GLshort? If so, how do you do that? Would it just be SHRT_MAX/2? That is, does the range of 0 to 1 for a GLfloat map to 0 to SHRT_MAX when using a GLshort texture coordinate?

What about normals? I've always created normals as GLfloats and normalized them to unit length. When using a GLshort for a normal, are you sending a non-normalized vector to the GPU? If so, when and how is it normalized? By dividing all components by SHRT_MAX?

+1  A: 

Normals

If you glEnable( GL_NORMALIZE ) then you can submit normals as GL_BYTE (or GL_SHORT). Note that these types are signed, so 0xFF is -1, not 255. A normal submitted as bytes (100, 80, 20) is normalized by the GPU to roughly (0.77, 0.62, 0.15).

Note that there is a small performance penalty from GL_NORMALIZE because the GPU has to normalize each normal.

Another caveat with GL_NORMALIZE is that you can't do lighting trickery by using un-normalized normals.

Edit: By 'trickery', I mean adjusting the length of a normal in the source data (to a value other than 1.0) to make a vert brighter or darker.

Texture Coordinates

As far as I can tell, integers (bytes or shorts) are less useful for texture coordinates. There's no easy call to instruct OpenGL to 'normalize' your texture coordinates: 0 means 0.0, 1 means 1.0, and 255 means 255.0 (useful for tiling). There's no way to specify fractional values in between.

However, don't forget about the texture matrix. You might be able to use it to transform the integers into useful texture coordinates. (I haven't tried.)

Jon-Eric
Awesome! What about texture coordinates? Are they also divided in the GPU when using GLshort or GLbyte?
Andrew Garrison
`GL_NORMALIZE` is only for normals. I'm just as curious as you regarding the texture coordinates.
Jon-Eric
Maybe GLbyte or GLshort for textures is just for those cases where you are texturing a quad, and you are only concerned with texture coordinates 0 and 1, and nothing in between. Just a guess.
Andrew Garrison
Could you elaborate on the "lighting trickery" for un-normalized normals? What would I be missing out on, exactly?
Andrew Garrison
@AndrewGarrison By 'lighting trickery' I mean that, I *think*, you can tweak how bright individual verts are by making the normal shorter or longer than 1.0. You probably won't need that, and you can always go back to floats if you do.
Jon-Eric
I edited the answer to expand on the 'lighting trickery' comment and to add thoughts on texture coordinates.
Jon-Eric
+2  A: 

The OpenGL ES 1.1 specification says that normals given as integer types are automatically mapped back to the [-1, 1] (signed) or [0, 1] (unsigned) range. (See Table 2.7 of the specification for the full list of formulas.)

(for shorts)   n_x = (2c + 1)/(2^16 − 1)

So you don't need to rely on GL_NORMALIZE for normals (and can use whatever trick you want).

Texture coordinates, however, are not rescaled (values outside the [0, 1] range are perfectly valid). If you want such a scaling, your best bet is a texture coordinate matrix, at a somewhat significant cost:

glMatrixMode(GL_TEXTURE);
glLoadMatrixf(matrix_that_does_conversion_based_on_type);
Bahbar
Thanks for the answer. I may experiment with the texture matrix, although it does sound like that is likely to be slower than just sending GLfloats to the GPU.
Andrew Garrison
@Bahbar Can you add a link to the OpenGL ES specification you are using? Thanks!
Jon-Eric
@Jon-Eric: done.
Bahbar