Hello,

As a homework assignment, we're writing a software rasterizer. I've noticed my z-buffering is not working as well as it should, so I'm trying to debug it by outputting the depth buffer to the screen (black is near, white is far away).

However, I'm getting peculiar values for the z per vertex. This is what I use to transform the points:

float Camera::GetZToPoint(Vec3 a_Point)
{
    Vec3 camera_new = (m_MatRotation * a_Point) - m_Position;

    return (HALFSCREEN / tanf(_RadToDeg(60.f * 0.5f)) / camera_new.z);
}

m_MatRotation is a 3x3 matrix. Multiplying it by a vector returns a transformed vector.

I get maximum and minimum values between 0 and x, where x is a seemingly random number.

Am I doing this transformation right? If so, how can I normalize my Z values so they lie between two set values?

Thanks in advance.

+4  A: 

To normalize the Z values, you have to define a near clipping plane and a far clipping plane. Then you normalize Z so that it is 0 at the near plane and 1 at the far plane.

However, you would usually do that after projection. It looks like your last line is where projection occurs.
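
A minimal sketch of that normalization, assuming you pick your own near and far distances (the zNear/zFar names below are illustrative, not from your code):

// Map a depth value into [0, 1]: 0 at the near plane, 1 at the far plane.
// zNear and zFar are whatever clipping distances you choose for your scene.
float NormalizeDepth(float z, float zNear, float zFar)
{
    float t = (z - zNear) / (zFar - zNear);
    // Clamp so points outside the planes don't wrap around when drawn as grey levels.
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;
    return t;
}

This is a plain linear remap, which is enough for visualizing the buffer; a real perspective projection distributes depth non-linearly (proportional to 1/z), but for a debug view the linear version is fine.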

A number of other things:

  • You compute the full matrix-vector multiplication but keep only the Z; this is wasteful. Consider transforming the points and keeping all their X, Y, Z coordinates.
  • You recompute tanf() at every vertex, but it's a constant; hoist it out (see the sketch after this list).
  • I would suggest using a projection matrix rather than the tanf computation.
  • Start with a simple orthographic projection; it will be easier to debug.
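
For instance, a minimal sketch of the second point, assuming you add a member such as m_ProjScale to your Camera class (that member name is mine, not from your code):

Camera::Camera()
{
    // Compute the projection scale once instead of calling tanf() for every vertex.
    // tanf() expects radians, so convert the 30-degree half-angle (half of a 60-degree FOV).
    m_ProjScale = HALFSCREEN / tanf(30.0f * 3.14159265f / 180.0f);
}

float Camera::GetZToPoint(Vec3 a_Point)
{
    Vec3 camera_new = (m_MatRotation * a_Point) - m_Position;   // transform as in your current code
    return m_ProjScale / camera_new.z;
}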
Philippe Beaudoin
+1  A: 

Assuming you want to know z at a vertex, which would be a_Point:

First of all, you want to perform the translation before the rotation, so that the rotation happens around your camera and not around the origin of your space, which may be somewhere else. Second, camera_new is not a very well chosen name, as it actually holds the coordinates of a_Point in the new frame of reference set by the position of your camera. Instead, do the following:

Vec3 point_new = m_MatRotation * (a_Point - m_Position);

If that does not work, you'll have to do it the hard way: create a real projection matrix that performs the translation, rotation, and projection all in one multiplication. Here are some tutorials that helped me a lot in understanding how to do that.

http://www.songho.ca/opengl/gl_projectionmatrix.html

codeguru.com/cpp/misc/misc/math/article.php/c10123/
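
As a rough illustration of what such a combined matrix looks like, here is a minimal sketch using 4x4 homogeneous coordinates; the Vec4/Mat4 helpers below are made up for the example and are not part of your existing classes:

#include <cmath>

// Small helper types just for this sketch; in real code you would extend the
// question's own Vec3/Mat3 classes to 4 components instead.
struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };            // row-major, m[row][col]

// 4x4 matrix * 4-component vector.
Vec4 Mul(const Mat4& M, const Vec4& v)
{
    return Vec4{
        M.m[0][0]*v.x + M.m[0][1]*v.y + M.m[0][2]*v.z + M.m[0][3]*v.w,
        M.m[1][0]*v.x + M.m[1][1]*v.y + M.m[1][2]*v.z + M.m[1][3]*v.w,
        M.m[2][0]*v.x + M.m[2][1]*v.y + M.m[2][2]*v.z + M.m[2][3]*v.w,
        M.m[3][0]*v.x + M.m[3][1]*v.y + M.m[3][2]*v.z + M.m[3][3]*v.w };
}

// A bare-bones perspective matrix: scales x and y by the focal length and
// copies camera-space z into w, so dividing by w afterwards gives x/z and y/z.
Mat4 MakePerspective(float fovDegrees, float halfScreen)
{
    const float f = halfScreen / tanf(fovDegrees * 0.5f * 3.14159265f / 180.0f);
    Mat4 P = {};
    P.m[0][0] = f;
    P.m[1][1] = f;
    P.m[2][2] = 1.0f;    // keep z around for the z-buffer
    P.m[3][2] = 1.0f;    // w = z
    return P;
}

// Usage: also pack the camera rotation and the translation by -m_Position into
// 4x4 matrices, combine them once into viewProj = P * R * T, then per vertex:
//
//   Vec4 clip = Mul(viewProj, Vec4{p.x, p.y, p.z, 1.0f});
//   float sx = clip.x / clip.w, sy = clip.y / clip.w, depth = clip.z / clip.w;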

Once you have managed to project vertices onto the screen in a perspective-correct way, you'll have to find a way to fill the space between them and, for each pixel you fill, work out what z is. That is a whole new story, and the Wikipedia article about texture mapping helped me do it.

en.wikipedia.org/wiki/Texture_mapping#Perspective_correctness
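
In particular, the usual trick (and the one that article describes) is to interpolate 1/z rather than z across the triangle, because 1/z is linear in screen space while z is not. A minimal sketch for a single scanline, with made-up names:

// Linearly interpolate 1/z between the two edge intersections of a scanline;
// z itself is not linear in screen space, but 1/z is.
void FillScanline(int xStart, int xEnd, float zStart, float zEnd)
{
    const float invZStart = 1.0f / zStart;
    const float invZEnd   = 1.0f / zEnd;

    for (int x = xStart; x <= xEnd; ++x)
    {
        const float t    = (xEnd == xStart) ? 0.0f : float(x - xStart) / float(xEnd - xStart);
        const float invZ = invZStart + t * (invZEnd - invZStart);
        const float z    = 1.0f / invZ;   // per-pixel depth for the z-buffer test
        // ... compare z against the z-buffer and plot the pixel here ...
    }
}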

Sorry I couldn't give you more links; Stack Overflow would not let me because I'm a new user...

Gabriel