I have written code that generates a ray from the "eye" of the camera through a given pixel on the viewing plane some distance away:

R3Ray ConstructRayThroughPixel(...)
{
  R3Point p;

  // Width and height of one pixel on the viewing plane
  double increments_x = (lr.X() - ul.X())/(double)width;
  double increments_y = (ul.Y() - lr.Y())/(double)height;

  // Center of pixel (i_pos, j_pos), each coordinate computed separately
  p.SetX( ul.X() + ((double)i_pos+0.5)*increments_x );
  p.SetY( lr.Y() + ((double)j_pos+0.5)*increments_y );
  p.SetZ( lr.Z() );

  // Direction from the eye through that point
  R3Vector v = p-camera_pos;

  R3Ray new_ray(camera_pos,v);
  return new_ray;
}

ul is the upper left corner of the viewing plane and lr is the lower right corner of the viewing plane. They are defined as follows:

  R3Point org = scene->camera.eye + scene->camera.towards * radius;     // center of the viewing plane
  R3Vector dx = scene->camera.right * radius * tan(scene->camera.xfov); // half-extent to the right
  R3Vector dy = scene->camera.up * radius * tan(scene->camera.yfov);    // half-extent upward
  R3Point lr = org + dx - dy;  // lower right corner
  R3Point ul = org - dx + dy;  // upper left corner

Here, org is the center of the viewing plane, radius is the distance between the viewing plane and the camera eye, and dx and dy are the displacements in the x and y directions from the center of the viewing plane.

The ConstructRayThroughPixel(...) function works perfectly for a camera whose eye is at (0,0,0). However, when the camera is at some other position, not all of the rays needed for the image are produced.

Any suggestions as to what could be going wrong? Maybe something is wrong with my equations?

Thanks for the help.

A: 

Here's a quibble that may have nothing to do with your problem:

When you do this:

R3Vector dx = scene->camera.right * radius * tan(scene->camera.xfov);
R3Vector dy = scene->camera.up * radius * tan(scene->camera.yfov);

I assume that the right and up vectors are normalized, right? In that case you want sin, not tan. Of course, if the fov angles are small it won't make much difference.
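
Not the poster's code, just a quick standalone check of how far apart the two factors get; the 30-degree half-angle is an arbitrary example:

#include <cmath>
#include <cstdio>

int main()
{
  double fov = 30.0 * 3.14159265358979 / 180.0;  // 30 degrees in radians
  std::printf("tan: %f  sin: %f\n", tan(fov), sin(fov));
  // prints: tan: 0.577350  sin: 0.500000 -- about a 15% difference
  return 0;
}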

dmckee
I made the change and it didn't make much of a difference. What happens is that rays are being cast, but not over the entire range. So I'm assuming the mapping is not done correctly: either multiple rays are being cast through the same pixel, or the resulting value of a "cast" pixel is very close to that of its neighbor, so the full range of pixels is never reconstructed (i.e. there is not exactly one ray per pixel at viewing plane position (i,j)).
Myx
That *sounds* like the kind of problem you get when popping back and forth between integer and floating point math, but I don't see any sign of that in the code you've posted. Have you tried stepping through the code for a simple case where you get the error (in the debugger or by hand)? Sometimes that is the easiest way to go, tedious or not.
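
(The classic instance of what dmckee is describing, as a standalone illustration rather than anything taken from the posted code:)

double bad  = 1 / 2;    // both operands are int: truncates to 0 before widening, so bad == 0.0
double good = 1 / 2.0;  // one double operand forces real division, so good == 0.5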
dmckee
I was hoping that would be my last resort. From the fact that it works on scenes whose camera is at (0,0,0) but doesn't work for scenes whose camera is at some other position, I thought there was some error in my logic when computing the X and Y coordinates of the generated point on the viewing plane.
Myx
Is there a way to directly interpolate between the upper left corner and the lower right corner of the viewing plane, given the pixel's (i,j) coordinate, without separately computing the X and Y components?
Myx
A: 

The reason my code wasn't working is that I was treating the x, y, z values separately. This is wrong, since the camera can be facing in any direction: if it were facing down the x-axis, the x coordinates of the corners would all be the same, producing increments of 0 (which is incorrect). Instead, what should be done is an interpolation of the corner points, where each corner is a full 3D point with x, y, z coordinates. Please see the answer in this related post: http://stackoverflow.com/questions/2539088/3d-coordinate-of-2d-point-given-camera-and-view-plane
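
For reference, here is a sketch of what that interpolation can look like, reusing the names from the question (ul, dx, dy, camera_pos, width, height are assumed to be in scope exactly as above, and the explicit parameter list is a guess at the elided one). It only uses the point/vector operators that already appear in the posted code:

R3Ray ConstructRayThroughPixel(int i_pos, int j_pos)
{
  // Fractional position of the pixel center on the viewing plane:
  // u runs 0..1 left to right, v runs 0..1 bottom to top, matching
  // the i/j directions of the original code.
  double u = ((double)i_pos + 0.5) / (double)width;
  double v = ((double)j_pos + 0.5) / (double)height;

  // Lower left corner of the plane: org - dx - dy, i.e. ul - 2*dy
  R3Point ll = ul - dy * 2.0;

  // Walk along the plane's own axes; dx and dy are half-extents, so
  // the plane spans 2*dx by 2*dy.  No coordinate is ever treated on
  // its own, so this works for any camera orientation.
  R3Point p = ll + dx * (2.0 * u) + dy * (2.0 * v);

  R3Vector dir = p - camera_pos;
  return R3Ray(camera_pos, dir);
}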

Myx