I am rendering textured quads with an orthographic projection and would like to simulate 'depth' by modifying the UVs and the vertex positions of each quad's four points (top left, top right, bottom left, bottom right).

I've found that if I make the top-left and bottom-right corners' y positions the same, I don't get a linear 'skew' but rather a warped one, where the texture covering the top triangle (one of the two triangles that make up the quad) seems to get squashed while the bottom triangle's texture looks normal.

I can change the UVs and any of the four points on the quad (but only in 2D space; it's an orthographic projection anyway, so 3D space won't matter much). So basically I'm trying to simulate perspective on a two-dimensional quad in an orthographic projection. Any ideas? Is it even mathematically possible/feasible?

Ideally what I'd like is a situation where I can set an x/y rotation as well as a virtual z 'position' (which simulates z depth) through a function and have it internally calculate the positions/UVs to create the 3D effect. It seems like this should all be mathematical, where a set of 2D transforms is applied to each corner of the quad to simulate depth; I just don't know how to make it happen. I'd guess it requires trigonometry or something. I'm trying to crunch the math but not making much progress.

here's what I mean:

[image: three cards at different rotations]

Top left is just the card, the center one is the card with a y rotation of X degrees, and the rightmost is a card with x and y rotations of different degrees.

+3  A: 

To compute the 2D coordinates of the corners, just choose the coordinates in 3D and apply the 3D perspective equations:

Original card corner (x, y, z).

Apply a rotation (by matrix multiplication); you get (x', y', z').

Apply a perspective projection (choose some camera origin, direction and field of view). For the simplest case it is:

  • x'' = x' / z'
  • y'' = y' / z'

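As a sketch of these two steps (Python with numpy; `project_card` is just an illustrative name, and the card size, angles and camera distance below are made-up example values), rotating the four corners and then dividing by depth looks like this:

```python
import numpy as np

def project_card(width, height, rot_x_deg, rot_y_deg, z_offset):
    """Rotate a card's four corners about the x and y axes, then
    perspective-project them onto the 2D plane (x'' = x'/z', y'' = y'/z')."""
    ax, ay = np.radians(rot_x_deg), np.radians(rot_y_deg)

    # Rotation matrices about the x axis and the y axis.
    rx = np.array([[1, 0, 0],
                   [0, np.cos(ax), -np.sin(ax)],
                   [0, np.sin(ax),  np.cos(ax)]])
    ry = np.array([[ np.cos(ay), 0, np.sin(ay)],
                   [0, 1, 0],
                   [-np.sin(ay), 0, np.cos(ay)]])

    # Card corners centred on the origin (y up): top-left, top-right, bottom-left, bottom-right.
    w, h = width / 2.0, height / 2.0
    corners = np.array([[-w,  h, 0], [ w,  h, 0], [-w, -h, 0], [ w, -h, 0]], dtype=float)

    rotated = corners @ (ry @ rx).T           # rotate about x, then about y
    rotated[:, 2] += z_offset                 # push the card away from the camera
    return rotated[:, :2] / rotated[:, 2:3]   # perspective divide: x'' = x'/z', y'' = y'/z'

# Example: a 2x3 card with a 30 degree y rotation, 4 units from the camera.
print(project_card(2.0, 3.0, rot_x_deg=0, rot_y_deg=30, z_offset=4.0))
```

The result is in camera units; scale and offset it to screen pixels before handing the four points to your quad.
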
The bigger problem now is the texturing, i.e. how to get the texture coordinates from the pixel coordinates:

The correct way for you is to use a homographic transformation of the form:

  • U(x,y) = ( ax + cy + e ) / (gx + hy + 1)
  • V(x,y) = ( bx + dy + f ) / (gx + hy + 1)

Which in fact is the result of the perspective equations applied to a plane.

a, b, c, d, e, f, g, h are computed so that (with U, V in [0..1]):

  • (U, V) at the projected top-left corner = (0, 0)
  • (U, V) at the projected top-right corner = (0, 1)
  • (U, V) at the projected bottom-left corner = (1, 0)
  • (U, V) at the projected bottom-right corner = (1, 1)

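As a sketch of this fitting step (Python/numpy; `fit_homography` and `apply_homography` are illustrative names, and the corner-to-UV assignment below just copies the one listed above), the eight unknowns come out of an 8x8 linear system, two equations per corner:

```python
import numpy as np

def fit_homography(corners_xy, corners_uv):
    """Solve for a..h in
         U(x, y) = (a*x + c*y + e) / (g*x + h*y + 1)
         V(x, y) = (b*x + d*y + f) / (g*x + h*y + 1)
       so that each projected corner (x, y) maps to its target (U, V)."""
    A, rhs = [], []
    for (x, y), (u, v) in zip(corners_xy, corners_uv):
        # u*(g*x + h*y + 1) = a*x + c*y + e   ->   a*x + c*y + e - u*g*x - u*h*y = u
        A.append([x, 0, y, 0, 1, 0, -u * x, -u * y]); rhs.append(u)
        # v*(g*x + h*y + 1) = b*x + d*y + f   ->   b*x + d*y + f - v*g*x - v*h*y = v
        A.append([0, x, 0, y, 0, 1, -v * x, -v * y]); rhs.append(v)
    a, b, c, d, e, f, g, h = np.linalg.solve(np.array(A, float), np.array(rhs, float))
    return a, b, c, d, e, f, g, h

def apply_homography(coeffs, x, y):
    a, b, c, d, e, f, g, h = coeffs
    w = g * x + h * y + 1.0
    return (a * x + c * y + e) / w, (b * x + d * y + f) / w

# Example: a trapezoid (a square card seen in perspective) mapped to the unit UV square.
quad = [(-1.0, 1.0), (1.0, 1.0), (-0.5, -1.0), (0.5, -1.0)]   # projected TL, TR, BL, BR
uvs  = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]       # target (U, V), as listed above
coeffs = fit_homography(quad, uvs)
for (x, y), uv in zip(quad, uvs):
    assert np.allclose(apply_homography(coeffs, x, y), uv)
```
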
But your 2D rendering framework probably uses a bilinear interpolation instead:

  • U( x , y ) = a + b * x + c * y + d * ( x * y )
  • V( x , y ) = e + f * x + g * y + h * ( x * y )

In that case you get a bad-looking result.

And it is even worse if the renderer splits the quad into two triangles!
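Here is a minimal numeric illustration of why that looks wrong, assuming one edge of the card has its near end at depth z = 1 and its far end at depth z = 2 (made-up values). Perspective-correct texturing interpolates u/z and 1/z across the screen and divides; plain screen-space interpolation just blends u linearly:

```python
def perspective_correct_u(t, u0, z0, u1, z1):
    """Texture coordinate at screen-space fraction t along an edge, obtained by
    interpolating u/z and 1/z and dividing (what a 3D rasteriser does)."""
    num = (1 - t) * u0 / z0 + t * u1 / z1
    den = (1 - t) / z0 + t / z1
    return num / den

def linear_u(t, u0, u1):
    """What plain screen-space (bi)linear interpolation gives."""
    return (1 - t) * u0 + t * u1

# Near end: u = 0 at z = 1.  Far end: u = 1 at z = 2.  Look at the screen-space midpoint.
t = 0.5
print(perspective_correct_u(t, 0.0, 1.0, 1.0, 2.0))  # ~0.333: only the nearer third of the texture reaches the midpoint
print(linear_u(t, 0.0, 1.0))                         # 0.5: the texture is stretched evenly, hence the squashed look
```
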

So I see only two options:

  • use a 3D renderer
  • compute the texturing yourself if you only need a few images and not a real-time animation (a per-pixel sketch of this is given after the comment below).
fa.
In simple terms, the problem is that you can't just calculate the positions of the corners; you have to calculate the position in the texture of each pixel separately. That's because the texture on farther parts of the surface should appear smaller, but if you just compute the corner positions and assume the texture is stretched evenly between them, the far-away parts of the triangle will get a texture that is too large and the close parts a texture that is too small.
tloflin
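For completeness, here is a sketch of the 'compute the texturing yourself' option along the lines of tloflin's comment: evaluate the homographic (U, V) at every pixel and sample the texture per pixel. It assumes a convex quad entirely in front of the camera (so a pixel lies inside the quad exactly when its (U, V) falls in [0, 1]^2), uses nearest-neighbour sampling, and `draw_card` is an illustrative name; `coeffs` is the (a..h) tuple from the fitting step above:

```python
import numpy as np

def apply_homography(coeffs, x, y):
    a, b, c, d, e, f, g, h = coeffs
    w = g * x + h * y + 1.0
    return (a * x + c * y + e) / w, (b * x + d * y + f) / w

def draw_card(dest, texture, quad_xy, coeffs):
    """Software-texture a convex quad: for every pixel in the quad's bounding box,
    evaluate the homographic (U, V); pixels whose (U, V) lies in [0, 1]^2 are inside
    the quad and are filled with the corresponding texel (nearest neighbour)."""
    xs = [x for x, _ in quad_xy]
    ys = [y for _, y in quad_xy]
    x0, x1 = int(np.floor(min(xs))), int(np.ceil(max(xs)))
    y0, y1 = int(np.floor(min(ys))), int(np.ceil(max(ys)))
    th, tw = texture.shape[:2]

    for py in range(max(y0, 0), min(y1 + 1, dest.shape[0])):
        for px in range(max(x0, 0), min(x1 + 1, dest.shape[1])):
            u, v = apply_homography(coeffs, px + 0.5, py + 0.5)  # sample at the pixel centre
            if 0.0 <= u <= 1.0 and 0.0 <= v <= 1.0:
                ty = min(int(u * th), th - 1)   # U vertical, V horizontal,
                tx = min(int(v * tw), tw - 1)   # matching the corner assignment above
                dest[py, px] = texture[ty, tx]
```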