I'm writing a ray tracer (using left-handed coordinates, if that makes a difference). It's for the sake of teaching myself the principles, so I'm not using OpenGL or complex features like depth of field (yet). My camera can have an arbitrary position and orientation; I specify these with three vectors, location, look_at, and sky, which behave like the equivalent POV-Ray vectors. Its "film" also has a width and height. (The focal length is implied by the distance from location to look_at.)
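For concreteness, here's a minimal sketch of the camera data I'm working with (the struct and field names are just my own shorthand, not from any library):

```cpp
// Bare-bones vector and camera types; the names are my own.
struct Vec3 {
    double x, y, z;
};

struct Camera {
    Vec3 location;   // where the camera sits
    Vec3 look_at;    // the point the camera is aimed at
    Vec3 sky;        // "up" hint, like POV-Ray's sky vector
    double width;    // film width
    double height;   // film height
    // Focal length is implied: the distance from location to look_at.
};
```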
My problem is that I don't know how to cast the rays. I have two quantities, vx and vy, that indicate where the ray should end up. They both vary from -1 to 1. If they're both -1, I'm casting the ray from the camera's location to the top-left corner of the "film"; if they're both 1, the bottom-right; if they're both 0, the center; and the values in between follow accordingly.
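To show what I mean, this is roughly how I produce vx and vy for each pixel (the image dimensions here are arbitrary, and cast_ray is a placeholder for the function I don't know how to write):

```cpp
#include <cstdio>

// Placeholder for the part I'm asking about: given (vx, vy) in [-1, 1],
// build and trace the corresponding ray from the camera.
void cast_ray(double vx, double vy) {
    std::printf("would cast ray at vx=%f, vy=%f\n", vx, vy);
}

int main() {
    const int image_width = 4;   // tiny grid just to show the mapping
    const int image_height = 3;

    for (int py = 0; py < image_height; ++py) {
        for (int px = 0; px < image_width; ++px) {
            // Map the pixel grid to [-1, 1]: the film's top-left corner is
            // (-1, -1) and the bottom-right is (+1, +1). Sampling at pixel
            // centers keeps vx and vy strictly inside that range.
            double vx = 2.0 * (px + 0.5) / image_width  - 1.0;
            double vy = 2.0 * (py + 0.5) / image_height - 1.0;
            cast_ray(vx, vy);
        }
    }
}
```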
I'm not familiar enough with vector arithmetic to derive an equation for the ray. I would appreciate an explanation of how to do so.