I'm writing a 2D game using OpenGL. When I want to blit part of a texture as a sprite I use glTexCoord2f(u, v) to specify the UV co-ordinates, with u and v calculated like this:

GLfloat u = (GLfloat)xpos_in_texture/(GLfloat)width_of_texture;
GLfloat v = (GLfloat)ypos_in_texture/(GLfloat)height_of_texture;

This works perfectly most of the time, except when I use glScale to zoom the game in or out. Then floating point rounding errors cause some pixels to be drawn one to the right of or one below the intended rectangle within the texture.

What can be done about this? At the moment I'm subtracting an 'epsilon' value from the right and bottom edges of the rectangle, and it seems to work but this seems like a horrible kludge. Are there any better solutions?

A: 

Your xpos/ypos must be based on 0 to (width or height) - 1 and then:

GLfloat u = (GLfloat)xpos_in_texture/(GLfloat)(width_of_texture - 1);
GLfloat v = (GLfloat)ypos_in_texture/(GLfloat)(height_of_texture - 1);
Jim Buck
This is wrong. Say width_of_texture = 2 (with 1, it's even more fun), and you want to draw 2 quads that map exactly to the 2 texels. You need U=0, U=0.5 for quad 0's corners and U=0.5, U=1 for quad 1's corners. That's exactly what the formula Sirp provided does (if you feed it the right xpos, i.e. 0, 1, 2), and not at all what yours does (with width=2, yours only produces integers).
Bahbar
True, I was thinking he was specifying pixel positions, but if he is in fact specifying between-pixel positions, then it would work. I'm used to working in pixel positions, which of course completely breaks down when the texture is as small as 2x2.
Jim Buck
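Bahbar's arithmetic is easy to check numerically; a minimal sketch (not from the original post) comparing the two formulas for a 2-texel-wide texture:

```c
#include <assert.h>

/* Sirp's formula: divide a texel-edge position by the texture width.
 * Feeding it edge positions 0, 1, 2 for width 2 yields 0, 0.5, 1. */
static float u_edge(int xpos, int width)
{
    return (float)xpos / (float)width;
}

/* Jim's formula: divide by width - 1, intended for pixel-center positions.
 * For width 2 it only produces whole numbers. */
static float u_minus_one(int xpos, int width)
{
    return (float)xpos / (float)(width - 1);
}
```

With width = 2, `u_edge` gives the 0, 0.5, 1 corner values Bahbar describes, while `u_minus_one` gives 0, 1, 2.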
A: 

Do the division as a double, round the result down yourself to the desired level of precision, then cast it to GLfloat.

Nathan
+2  A: 

Your issue is most likely not coming from rounding errors, but from a misunderstanding of how OpenGL maps texels to pixels. If you notice off-by-one errors, it's probably because your UVs, your vertex positions, or your projection matrix/viewport pair are not aligned to where they ought to be.

To simplify, I'll just talk about 1D, and assume you use a projection and a viewport that map X,Y coordinates to the equivalent pixel location (i.e. a glOrtho(0, width, 0, height, zmin, zmax) and a glViewport(0, 0, width, height)).

Say you want to draw 5 texels (starting at 0 for simplicity) of your 64-wide texture showing on the 10 pixels (scale of 2) of your screen starting at pixel 20.

To get there, draw the triangle with X coordinates 20 and 30, and U (of the UV pair) of 10/64 and 15/64. The rasterization of OpenGL will generate 10 pixels to shade, with X coordinates 20.5, 21.5, ... 29.5. Note that the positions are not full integers. OpenGL rasterizes in the middle of the pixel.

Likewise, it will generate U coordinates of 10.25/64, 10.75/64, 11.25/64, 11.75/64 ... 14.25/64, 14.75/64. Note again that texel coordinates, brought back to texel positions in the texture space, are not full integers. OpenGL samples from the middle of texel locations, so this is fine.

How the sampler uses these UVs to generate texel values depends on the filtering mode, but be it nearest or linear, the pixels should be shaded solely from the texels of interest (a U of 0.25 with a texel size of 0.5 should only use color from 0 to 0.5, which is all inside the first texel).

In general, if you follow these principles, you should never see artifacts:

  1. Use Ortho and Viewport of exactly your frame buffer size
  2. Use positions of X, X+width exactly
  3. Use UVs that correspond to exactly the texels you want (if you want the 10 texels starting from texel 0, use U=0 to U=10, divided by the texture width).

If you ever have a -1 somewhere in your math, it's likely not correct (for position or UVs).
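The three rules can be condensed into one helper; a sketch (the function and type names are made up for illustration) that computes the quad corner positions and UVs for a sprite copying `count` texels starting at `first_texel`, scaled by an integer factor:

```c
/* Corner data for a 1D slice of a sprite quad: X positions in pixels
 * (assuming a pixel-exact ortho/viewport pair) and U in normalized
 * texture coordinates. */
typedef struct { float x0, x1, u0, u1; } Span1D;

static Span1D sprite_span(int screen_x, int first_texel, int count,
                          int scale, int tex_width)
{
    Span1D s;
    s.x0 = (float)screen_x;                        /* rule 2: X exactly       */
    s.x1 = (float)(screen_x + count * scale);      /* ... and X+width exactly */
    s.u0 = (float)first_texel / (float)tex_width;  /* rule 3: texel edges,    */
    s.u1 = (float)(first_texel + count) / (float)tex_width; /* no -1 anywhere */
    return s;
}
```

For the worked example above (5 texels starting at texel 0 offset 10, scale 2, 64-wide texture, screen X 20) this produces X = 20..30 and U = 10/64..15/64.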

To get back to your example, it's unclear how you link the UVs you compute to positions (since you don't show the position computation). It's also unclear how you get xpos_in_texture (you should explain how you compute it for the corners of your sprite). My guess is that you computed that wrong.

Bahbar