views: 93
answers: 2

Hello everyone,

I'm currently working with OpenGL ES 1.1 and using the DrawElements convention along with Vertex, Normal, Texture Coordinate, and Index arrays.

I recently came across this while researching the idea of using normal/bump mapping, which I previously thought was impossible with OpenGL ES: http://iphone-3d-programming.labs.oreilly.com/ch08.html

I can already generate an object-space normal map from my 3D modeler, but what I'm not completely clear on is whether the normal coordinate array will still be necessary if I implement a second texture unit for normal mapping, or whether lighting plus a color texture combined with a normal map via the DOT3_RGB option will be all that's required.

EDIT - After researching DOT3 lighting a bit further, I'm not sure the answer given by ognian is correct. This page, http://www.3dkingdoms.com/tutorial.htm, gives an example of its usage, and if you look at the bit of code in the "Rendering & Final Result" section, the client state for normal arrays is never enabled.

I also found this post, http://stackoverflow.com/questions/1894351/what-is-dot3-lighting, which explains it well... but it leads me to another question. In the comments, it's stated that instead of transforming the normals, you transform the light direction. I'm confused by this: if I have a game with a stationary wall, why would I move the light around just for one model? Hoping someone can give a good explanation of all of this...

A: 

Hi,

You still need to provide the per-vertex normals, to properly set up the per-pixel normal map.

ognian
A: 

Whereas tangent-space normal maps perturb the normals that are interpolated from the per-vertex normals, object-space normal maps already contain all needed information about surface orientation in the map. Therefore, if you’re just doing DOT3 lighting in OpenGL ES 1.1, you don’t need to pass the normals again.
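
For concreteness, a minimal sketch of what a draw call on this path might look like (array names hypothetical): the normal array stays disabled, fixed-function lighting is off, and both texture units reuse the same UVs.

    glDisable(GL_LIGHTING);                    /* DOT3 replaces fixed-function lighting */
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, vertices);
    /* No glEnableClientState(GL_NORMAL_ARRAY): the object-space normal
       map supplies the per-pixel normals instead. */
    glClientActiveTexture(GL_TEXTURE0);        /* UVs for the normal map */
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
    glClientActiveTexture(GL_TEXTURE1);        /* same UVs for the color map */
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, indices);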

The reason the other post mentioned translating light direction rather than the normals is because both arguments to the dot product (the per-pixel normal and the light vector) need to be in the same coordinate space for the dot product to make any sense. Because you have an object-space normal map, your per-pixel normal will always be in your object’s local coordinate space, and the texture environment doesn’t provide any means of applying further transformations. Chances are that your light vectors are in some other space, so the transformation that was mentioned is there to convert from the other space back to your object’s local space.
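
As a sketch of that conversion (assuming the model transform is a pure rotation, so its inverse is just its transpose; the 3x3 row-major matrix and the names are hypothetical):

    /* Bring a world-space light direction into the object's local space. */
    void WorldLightToObjectSpace(const float modelRotation[3][3],
                                 const float worldLight[3],
                                 float objectLight[3])
    {
        /* Multiplying by the transpose of a rotation matrix applies its inverse. */
        for (int i = 0; i < 3; ++i) {
            objectLight[i] = modelRotation[0][i] * worldLight[0]
                           + modelRotation[1][i] * worldLight[1]
                           + modelRotation[2][i] * worldLight[2];
        }
    }

If the model also translates and you're working with a light position rather than a direction, subtract the model's translation before rotating.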

Pivot
By light vector, are we just talking about the array light0position in glLightfv(GL_LIGHT0, GL_POSITION, light0position)? If I manually transform it to correct the lighting on one object... am I not completely fouling up how the light hits everything else? How do I transform it and then apply it only to the object in question? Thanks for the solid answer, Pivot, by the way.
Maximus
If you’re doing object-space normal mapping as in the “Normal Mapping with OpenGL ES 1.1” section of the first link you posted, you’re not using regular OpenGL ES 1.1 lighting at all. At this point, you want to pass your light vector (in object space) as your vertex’s color so you can use it for the DOT3. You can often get a decent enough approximation using the same color for all vertices (the object-space light vector to the center of your object).
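
A sketch of that step (objectLight assumed to be a normalized, object-space direction): remap each component from [-1, 1] to [0, 1] so it survives the trip through the color channel, then submit it as the primary color.

    glColor4f(objectLight[0] * 0.5f + 0.5f,
              objectLight[1] * 0.5f + 0.5f,
              objectLight[2] * 0.5f + 0.5f,
              1.0f);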
Pivot
Okay, things are kind of starting to make more sense... just barely. The whole thing that sparked this was that I was using regular lighting and believed normal mapping was something that couldn't be done with OpenGL ES. When I found out that it was possible, I started trying to figure out how it works. So... before I go any further... is it a good idea? Is it possibly faster than regular OpenGL ES 1.1 lighting?
Maximus
Whether or not DOT3 lighting is faster probably depends on what hardware you’re running on, but what it does tend to do is reduce load on the vertex processing side in exchange for a bit more fragment processing work. It can certainly look a lot better than per-vertex lighting if your model was originally built with more detail than can be represented with your polygon budget, provided you can deal with the limitations (e.g. diffuse-only lighting).
Pivot
Thanks for all of your help, Pivot. By any chance, do you know of any full example that uses two textures (color and normal map)? I'm very close to finally understanding this, but I'm still missing something... I understand that you have to use the primary glColor and the light vector to determine the illumination at a given point, but how that result gets blended with the colors from the color map and normal map... that's where I'm losing it.
Maximus
Try http://diaryofagraphicsprogrammer.blogspot.com/2008/12/ip-programming-tip-5.html. Vertex color is the object-space light direction remapped from the range [-1, 1] to [0, 1]. Texture unit 0 samples your normal map and computes per-pixel diffuse lighting (N⋅L). Texture unit 1 samples your color map and modulates it by the diffuse light intensity calculated by texture unit 0.
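
A minimal sketch of that texture environment setup (texture handle names hypothetical):

    /* Unit 0: dot the normal-map texel against the light vector arriving
       as the primary (vertex) color. */
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, normalMapTexture);
    glEnable(GL_TEXTURE_2D);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_DOT3_RGB);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_TEXTURE);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_PRIMARY_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);

    /* Unit 1: modulate the color-map texel by the N.L result from unit 0. */
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, colorMapTexture);
    glEnable(GL_TEXTURE_2D);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_MODULATE);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_PREVIOUS);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
    glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_TEXTURE);
    glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);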
Pivot