I'm looking at shadow mapping in OpenGL.

I see code like:

// This matrix transforms every coordinate x, y, z as
// x = x * 0.5 + 0.5
// y = y * 0.5 + 0.5
// z = z * 0.5 + 0.5
// i.e. it remaps from the [-1,1] NDC cube to [0,1]
const GLdouble bias[16] = {
  0.5, 0.0, 0.0, 0.0,
  0.0, 0.5, 0.0, 0.0,
  0.0, 0.0, 0.5, 0.0,
  0.5, 0.5, 0.5, 1.0};

// Grab the modelview and projection matrices
glGetDoublev(GL_MODELVIEW_MATRIX, modelView);
glGetDoublev(GL_PROJECTION_MATRIX, projection);


glMatrixMode(GL_TEXTURE);
glActiveTextureARB(GL_TEXTURE7);

glLoadIdentity();
glLoadMatrixd(bias);

// Concatenate all the matrices into one: bias * projection * modelView
glMultMatrixd (projection);
glMultMatrixd (modelView);

// Go back to normal matrix mode
glMatrixMode(GL_MODELVIEW);

Now, if I rip out the bias matrix, the code does not work. Searching other shadow mapping code, I see the same bias matrix without any explanation. Why do I want this bias to map x, y, z to 0.5 * x + 0.5, 0.5 * y + 0.5, 0.5 * z + 0.5?

Thanks!

+2  A: 

When you transform vertices inside the frustum with a standard modelview/projection matrix, the result, once the w-divide is done, is a vertex in the [-1,1]x[-1,1]x[-1,1] cube. You want your texture coordinates to be in the [0,1]x[0,1] range, hence the remapping for x and y. The same applies to z, assuming your DepthRange is [0,1], which is the default.

Bahbar
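
To make the remapping concrete, here is a minimal, self-contained sketch (not from the original question or answer) that applies the same column-major bias matrix to the two extreme corners of the post-divide NDC cube; the helper name mul_mat4_vec4 is just illustrative:

#include <stdio.h>

/* Multiply a column-major 4x4 matrix by a column vector: out = m * v. */
static void mul_mat4_vec4(const double m[16], const double v[4], double out[4])
{
    for (int row = 0; row < 4; ++row) {
        out[row] = m[0 + row] * v[0]
                 + m[4 + row] * v[1]
                 + m[8 + row] * v[2]
                 + m[12 + row] * v[3];
    }
}

int main(void)
{
    /* Same bias matrix as in the question, in OpenGL's column-major layout. */
    const double bias[16] = {
        0.5, 0.0, 0.0, 0.0,
        0.0, 0.5, 0.0, 0.0,
        0.0, 0.0, 0.5, 0.0,
        0.5, 0.5, 0.5, 1.0 };

    /* Opposite corners of the NDC cube (after the w-divide), with w = 1. */
    const double ndc_min[4] = { -1.0, -1.0, -1.0, 1.0 };
    const double ndc_max[4] = {  1.0,  1.0,  1.0, 1.0 };

    double tex_min[4], tex_max[4];
    mul_mat4_vec4(bias, ndc_min, tex_min);
    mul_mat4_vec4(bias, ndc_max, tex_max);

    /* Prints 0 0 0 and 1 1 1: each axis is remapped from [-1,1] to [0,1],
       which is the range texture lookups and the depth comparison expect. */
    printf("%g %g %g\n", tex_min[0], tex_min[1], tex_min[2]);
    printf("%g %g %g\n", tex_max[0], tex_max[1], tex_max[2]);
    return 0;
}

In the question's fixed-function setup, this remapping is folded into the texture matrix by multiplying bias * projection * modelView, so the shadow-map lookup coordinates already land in [0,1] after the projective divide.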