views: 296 · answers: 4

Hello,

I'm making a software rasterizer for school, and I'm using an unusual rendering method instead of the traditional matrix calculations. It's based on a pinhole camera. I have a few points in 3D space, and I convert them to 2D screen coordinates by taking the vector between each point and the camera and normalizing it:

Vec3 ray_to_camera = (a_Point - plane_pos).Normalize();

This gives me a directional vector towards the camera. I then turn that direction into a ray by placing the ray's origin on the camera and performing a ray-plane intersection with a plane slightly behind the camera.

Vec3 plane_pos = m_Position + (m_Direction * m_ScreenDistance);

float dot = ray_to_camera.GetDotProduct(m_Direction);
if (dot < 0)
{
   float time = (-m_ScreenDistance - plane_pos.GetDotProduct(m_Direction)) / dot;

   // if time is smaller than 0 the ray is either parallel to the plane or misses it
   if (time >= 0)
   {
      // retrieving the actual intersection point
      a_Point -= (m_Direction * ((a_Point - plane_pos).GetDotProduct(m_Direction)));

      // subtracting the plane origin from the intersection point 
      // puts the point at world origin (0, 0, 0)
      Vec3 sub = a_Point - plane_pos;

      // the axes are calculated by saying the directional vector of the camera
      // is the new z axis
      projected.x = sub.GetDotProduct(m_Axis[0]);
      projected.y = sub.GetDotProduct(m_Axis[1]);
   }
}

This works wonderfully, but I'm wondering: can the algorithm be made any faster? Right now, for every triangle in the scene, I have to calculate three normals.

float length = 1 / sqrtf(GetSquaredLength());
x *= length;
y *= length;
z *= length;

Even with a fast reciprocal square root approximation (1 / sqrt(x)), that's going to be very demanding.

My questions are thus:
Is there a good way to approximate the three normals?
What is this rendering technique called?
Can the three vertex normals be approximated using the normal of the centroid ((v0 + v1 + v2) / 3)?

Thanks in advance.

P.S. "You will build a fully functional software rasterizer in the next seven weeks with the help of an expert in this field. Begin." I ADORE my education. :)

EDIT:

Vec2 projected;

// the plane is behind the camera
Vec3 plane_pos = m_Position + (m_Direction * m_ScreenDistance);

float scale = m_ScreenDistance / (m_Position - plane_pos).GetSquaredLength();

// times -100 because of the squared length instead of the length
// (which would involve a square root)
projected.x = a_Point.GetDotProduct(m_Axis[0]) * scale * -100;
projected.y = a_Point.GetDotProduct(m_Axis[1]) * scale * -100;

return projected;

This returns the correct results, however the model is now independent of the camera position. :(

It's a lot shorter and faster though!

+4  A: 

This is called a ray-tracer - a rather typical assignment for a first computer graphics course* - and you can find a lot of interesting implementation details in the classic Foley/van Dam textbook (Computer Graphics: Principles and Practice). I strongly suggest you buy or borrow this textbook and read it carefully.

*Just wait until you get started on reflections and refraction... Now the fun begins!

Kena
We did a raytracer last block. Don't you trace rays FROM the camera TO the world in a raytracer?
knight666
You're right, a typical raytracer goes pixel-by-pixel instead of by vertices. But I suspect the math and possible optimizations are very similar.
Kena
One possible optimization is to do four reciprocal square roots at once, using SSE, but I really want to look at algorithmic optimizations before trying that.
knight666
+3  A: 

Your code is a little unclear to me (plane_pos?), but it does seem that you could cut out some unnecessary calculation.

Instead of normalizing the ray (scaling it to length 1), why not scale it so that the z component equals the distance from the camera to the plane -- in fact, just scale x and y by this factor; you don't need z.

float scale = distance_to_plane/z;
x *= scale;
y *= scale;

This will give the x and y coordinates on the plane, no sqrt(), no dot products.
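A minimal self-contained sketch of this, assuming the point is already expressed in camera space (z measured along the view direction; the type and function names are illustrative, not from the original code):

```cpp
struct Vec3 { float x, y, z; };
struct Vec2 { float x, y; };

// Project a camera-space point onto the image plane. Assumes p.z > 0
// (the point is in front of the camera), so no sqrt or normalization is needed.
Vec2 ProjectCameraSpace(const Vec3& p, float distance_to_plane)
{
    float scale = distance_to_plane / p.z; // shrink/stretch so z lands on the plane
    return Vec2{ p.x * scale, p.y * scale };
}
```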

Beta
negate scale for correct upside-down pinhole effect if desired
cobbal
also, if you are not always looking down the z axis, you can create a unit vector in the direction the camera is pointing and scale by the reciprocal of the dot product of that vector and the points. pseudocode: `scale = distance_to_plane / (camera_direction . (x, y, z))`
cobbal
when `camera_direction = (0, 0, 1)` you get the above code. This has the advantage of only needing 1 normalization for every time you change camera direction instead of one for every element rendered.
cobbal
plane_pos is simply the position of the plane. Because it's always behind the camera at a fixed distance, its position is O + tD, where O is the camera position, D is the inverse of the camera direction and t is a constant.
knight666
I've updated the code in the question. It works, except it's now independent of camera position.
knight666
A: 

Well, right off the bat, you can calculate the normals for every triangle when your program starts up. Then, while actually rendering, you just look the normals up. This sort of startup calculation to save cost later happens a lot in graphics - it's one reason so many video games have long loading screens!
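A minimal sketch of that precomputation, with illustrative Vec3/Triangle types standing in for whatever the real codebase uses:

```cpp
#include <vector>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 Cross(const Vec3& a, const Vec3& b)
{
    return Vec3{ a.y * b.z - a.z * b.y,
                 a.z * b.x - a.x * b.z,
                 a.x * b.y - a.y * b.x };
}

static Vec3 Normalized(const Vec3& v)
{
    float inv = 1.0f / std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return Vec3{ v.x * inv, v.y * inv, v.z * inv };
}

struct Triangle { Vec3 v0, v1, v2; Vec3 normal; };

// Call once after loading the mesh; the render loop then just reads t.normal
// instead of paying for a cross product and sqrt per triangle per frame.
void PrecomputeNormals(std::vector<Triangle>& tris)
{
    for (Triangle& t : tris)
    {
        Vec3 e1{ t.v1.x - t.v0.x, t.v1.y - t.v0.y, t.v1.z - t.v0.z };
        Vec3 e2{ t.v2.x - t.v0.x, t.v2.y - t.v0.y, t.v2.z - t.v0.z };
        t.normal = Normalized(Cross(e1, e2));
    }
}
```

Note this only works for quantities that don't depend on the camera; view rays still have to be computed per frame.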

Pace
+3  A: 

It is difficult to understand exactly what your code is doing, because it seems to be performing a lot of redundant operations! However, if I understand what you say you're trying to do, you are:

  • finding the vector from the pinhole to the point
  • normalizing it
  • projecting backwards along the normalized vector to an "image plane" (behind the pinhole, natch!)
  • finding the vector to this point from a central point on the image plane
  • doing dot products on the result with "axis" vectors to find the x and y screen coordinates

If the above description represents your intentions, then the normalization should be redundant -- you shouldn't have to do it at all! If removing the normalization gives you bad results, you are probably doing something slightly different from your stated plan... in other words, it seems likely that you have confused yourself along with me, and that the normalization step is "fixing" it to the extent that it looks good enough in your test cases, even though it probably still isn't doing quite what you want it to.

The overall problem, I think, is that your code is massively overengineered: you are writing all your high-level vector algebra as code to be executed in the inner loop. The way to optimize this is to work out all your vector algebra on paper, find the simplest expression possible for your inner loop, and precompute all the necessary constants for this at camera setup time. The pinhole camera specs would only be the inputs to the camera setup routine.
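Worked out, the precomputed version might look something like this (a sketch with illustrative names -- essentially the standard camera transform, where the orthonormal basis is built once at setup):

```cpp
#include <cmath>

struct Vec3 {
    float x, y, z;
    float Dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
    Vec3 operator-(const Vec3& o) const { return Vec3{ x - o.x, y - o.y, z - o.z }; }
};
struct Vec2 { float x, y; };

struct Camera {
    Vec3 position;
    Vec3 right, up, forward;   // orthonormal basis, computed once at camera setup
    float screen_distance;     // distance from the pinhole to the image plane

    // Per-point inner loop: three dot products and one divide, no sqrt.
    // Assumes the point is in front of the camera (depth > 0).
    Vec2 Project(const Vec3& p) const {
        Vec3 rel = p - position;         // move into camera space
        float z = rel.Dot(forward);      // depth along the view direction
        float s = screen_distance / z;   // perspective divide
        return Vec2{ rel.Dot(right) * s, rel.Dot(up) * s };
    }
};
```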

Unfortunately, unless I miss my guess, this should reduce your pinhole camera to the traditional, boring old matrix calculations. (Ray tracing does make it easy to do cool nonstandard camera stuff -- but what you describe should end up perfectly standard...)

comingstorm
Ha! You're right, the normalization was quite redundant. :P That fixes the problem that I had, but I'll definitely write it out on paper.
knight666