Hi,
My problem:
How can I take two 3D points and project them onto a single plane? For instance, so that both their z-coordinates are 0.
What I'm trying to do:
I have a set of 3D coordinates in a scene, representing a box with a pyramid on it. I also have a camera, represented by another 3D coordinate. I subtract the scene coordinate from the camera coordinate and normalize the result, which gives a vector that points from the scene point to the camera. I then do ray-plane intersection with a plane that is behind the camera point.
P(t) = O + tD
Where O (the origin) is the camera position, D is the direction from the scene point to the camera, and t is the scalar parameter at which the ray, cast from the camera point, hits the plane.
If that doesn't make sense, here's a crude drawing:
I've searched far and wide, and as far as I can tell, this is called using a "pinhole camera".
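Concretely, the intersection step works out like this. This is just a simplified sketch with a bare-bones Vec3 and free functions, not my real point class:

#include <cmath>

struct Vec3 { float x, y, z; };

float Dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Plane: all points P with Dot(N, P) = d.  Ray: P(t) = O + t * D.
// Substituting the ray into the plane equation gives
// Dot(N, O) + t * Dot(N, D) = d, so t = (d - Dot(N, O)) / Dot(N, D).
bool IntersectRayPlane(const Vec3& O, const Vec3& D,
                       const Vec3& N, float d, Vec3& hit)
{
    float denom = Dot(N, D);
    if (std::fabs(denom) < 1e-6f)   // ray runs parallel to the plane
        return false;

    float t = (d - Dot(N, O)) / denom;
    hit = { O.x + t * D.x, O.y + t * D.y, O.z + t * D.z };
    return true;
}

That part seems to work fine.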
The problem is not my camera rotation; I've eliminated that. The trouble is in translating the intersection point into plane-space (uv) coordinates.
The translation for a plane whose normal points along the x-axis looks like this:
// swizzle the plane normal to get a u-axis perpendicular to it
uaxis.x = -a_PlaneNormal.y;
uaxis.y = a_PlaneNormal.x;
uaxis.z = a_PlaneNormal.z;
// the v-axis is perpendicular to both the u-axis and the normal
point vaxis = uaxis.CopyCrossProduct(a_PlaneNormal);
// project the intersection point onto the two plane axes
point2d.x = intersection.DotProduct(uaxis);
point2d.y = intersection.DotProduct(vaxis);
return point2d;
While the translation for a plane whose normal points along the z-axis looks like this:
// same idea, but with the swizzle rearranged for a z-facing normal
uaxis.x = -a_PlaneNormal.z;
uaxis.y = a_PlaneNormal.y;
uaxis.z = a_PlaneNormal.x;
point vaxis = uaxis.CopyCrossProduct(a_PlaneNormal);
point2d.x = intersection.DotProduct(uaxis);
point2d.y = intersection.DotProduct(vaxis);
return point2d;
My question is: how can I convert a ray-plane intersection point into plane-space (uv) coordinates in a single way that works for both x-facing and z-facing planes?
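The closest thing I've found to a general approach is to derive the u-axis from whichever world axis the normal is least aligned with, instead of hard-coding one swizzle per axis. Here's a sketch of what I understand that to be (Vec3, Cross, Normalize and BuildPlaneBasis are stand-in names, not my real classes); is this the right track?

#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 Cross(const Vec3& a, const Vec3& b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

Vec3 Normalize(const Vec3& v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Build an orthonormal (u, v) basis on the plane with normal N.
void BuildPlaneBasis(const Vec3& N, Vec3& uaxis, Vec3& vaxis)
{
    // Pick the world axis along which N has the smallest component,
    // so the cross product below can never come out as a zero vector.
    float ax = std::fabs(N.x), ay = std::fabs(N.y), az = std::fabs(N.z);
    Vec3 helper = (ax <= ay && ax <= az) ? Vec3{ 1.0f, 0.0f, 0.0f }
                : (ay <= az)             ? Vec3{ 0.0f, 1.0f, 0.0f }
                                         : Vec3{ 0.0f, 0.0f, 1.0f };

    uaxis = Normalize(Cross(helper, N));
    vaxis = Cross(N, uaxis);   // already unit length: N and uaxis are orthonormal

    // Then, as in my snippets above:
    //   point2d.x = Dot(intersection, uaxis);
    //   point2d.y = Dot(intersection, vaxis);
}

If that's right, the same two dot products from my snippets above should then work for any plane orientation, but I'd like to confirm I'm not missing something.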