
I am rendering 3D objects on a 2D canvas by doing all necessary calculations in software. I am not using graphics acceleration.

Initially all the objects were cubes of the same size, so I could sort them by their Z distance from the camera and that would order them correctly. But now I am trying to draw cubes of varying dimensions, which makes my simple z-ordering algorithm fail in perspective projection.

I looked into computer graphics books and found the techniques they use; they eventually recommend a pixel-based comparison of two polygons to determine which one is in front of the other. That is probably what the graphics card does. But doing so in software seems overly difficult, and I suspect it would be too slow for practical use even if I could do it.

Is there a simple trick to do this in software? Any examples from the early days of 3D graphics, when graphics cards were not available?

Although this is a generic 3D graphics question, if it helps, I am doing this on top of the HTML5 Canvas 2D API.

A: 

Right, today you use Z-buffering on the GPU to do a per-pixel depth comparison. You can do this in software too.

The sorting technique will not work in general; see Wikipedia on the painter's algorithm. You can improve it, though, by breaking your cubes into individual faces and sorting those instead of whole cubes.
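For example, a minimal sketch of face-level sorting in TypeScript (the Vec3/Face types and the average-depth measure are my own illustrative assumptions, taking camera-space z to grow with distance from the viewer); note it still breaks for intersecting or cyclically overlapping faces:

interface Vec3 { x: number; y: number; z: number; }
interface Face { vertices: Vec3[]; color: string; }

// Painter's algorithm on faces: sort back-to-front by average camera-space
// depth, then paint in that order so nearer faces end up on top.
function paintFaces(faces: Face[], drawFace: (f: Face) => void): void {
  const depth = (f: Face) =>
    f.vertices.reduce((sum, v) => sum + v.z, 0) / f.vertices.length;
  faces
    .slice()                              // don't reorder the caller's array
    .sort((a, b) => depth(b) - depth(a))  // farthest first
    .forEach(drawFace);
}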

A more general technique, used in many early games (e.g. Doom), is BSP trees. They don't work well with dynamic scenes, since building the tree is expensive, but they solve the ordering problem in general.
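To give an idea of why a BSP tree solves the ordering, here is a rough sketch of the back-to-front traversal (reusing the Vec3/Face types from the sketch above; the node layout is an assumption, and building the tree, which is the expensive part, is left out):

interface Plane { normal: Vec3; d: number; }  // plane equation: normal·p + d = 0
interface BspNode {
  plane: Plane;
  facesOnPlane: Face[];
  front: BspNode | null;  // subtree in front of the plane
  back: BspNode | null;   // subtree behind the plane
}

// Draw everything on the far side of each splitting plane before the near
// side, so the painting order is correct no matter where the camera is.
function drawBackToFront(node: BspNode | null, eye: Vec3, draw: (f: Face) => void): void {
  if (!node) return;
  const side = node.plane.normal.x * eye.x
             + node.plane.normal.y * eye.y
             + node.plane.normal.z * eye.z + node.plane.d;
  const near = side >= 0 ? node.front : node.back;
  const far  = side >= 0 ? node.back  : node.front;
  drawBackToFront(far, eye, draw);
  node.facesOnPlane.forEach(draw);
  drawBackToFront(near, eye, draw);
}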

ybungalobill
+1  A: 

As @ybungalobill has already mentioned, a z-buffer is the easiest algorithm to implement. When you fill the triangles/polygons that make up your cubes, interpolate the Z coordinate from their vertices and store it per pixel. When you later fill another polygon that covers the same X, Y coordinate, draw its pixel only if its Z is less than the Z already stored in the buffer, and update the buffer. Do not forget to clear the Z buffer to infinity before repainting. Pseudocode:

foreach (scanline in polygon)  {
  int length = scanline.right.x - scanline.left.x;
  foreach (pixel in scanline)  {
    float t = (float)(pixel.x - scanline.left.x) / length;       // Position within the scanline, 0..1
    float z = (1 - t) * scanline.left.z + t * scanline.right.z;  // Interpolate the Z coordinate
    if (z < zbuffer[scanline.y][pixel.x])  {                     // The pixel is closer than what is stored
      zbuffer[scanline.y][pixel.x] = z;                          // Remember its depth
      drawPixel(pixel.x, scanline.y, polygon.color);             // and paint it
    }
  }
}
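On top of the Canvas 2D API you would typically rasterise into an ImageData yourself and keep the Z buffer alongside it. A rough TypeScript sketch of just the buffers and the depth test (the names and structure here are my own, not part of the answer above):

const canvas = document.querySelector("canvas")!;
const ctx = canvas.getContext("2d")!;
const image = ctx.createImageData(canvas.width, canvas.height);
const zbuffer = new Float32Array(canvas.width * canvas.height);

// Call once per frame before rasterising any polygons.
function clearBuffers(): void {
  zbuffer.fill(Infinity);  // everything starts "infinitely far away"
  image.data.fill(0);
}

// Depth-tested pixel write; the rasteriser calls this for every covered pixel.
function drawPixel(x: number, y: number, z: number, r: number, g: number, b: number): void {
  const i = y * canvas.width + x;
  if (z >= zbuffer[i]) return;  // something closer was already drawn here
  zbuffer[i] = z;
  image.data.set([r, g, b, 255], i * 4);
}

// After all polygons are rasterised: ctx.putImageData(image, 0, 0);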

A variant of the Z buffer that performs better on the CPU by not drawing pixels that would later be overwritten is the segment buffer: http://www.gamedev.net/reference/articles/article668.asp
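As a taste of the idea, here is a much simplified coverage-buffer sketch in TypeScript: it assumes polygon spans are submitted roughly front-to-back and only records which parts of each scanline are already taken, so occluded pixels are never painted. The real segment buffer described in the article also stores depth per span, so it can handle arbitrary submission order.

interface Span { x0: number; x1: number; }  // half-open interval [x0, x1)

const screenHeight = 480;  // example resolution
const covered: Span[][] = Array.from({ length: screenHeight }, () => []);

// Paint the parts of [x0, x1) on scanline y that nothing has claimed yet,
// then mark them as covered so later (farther) spans skip them.
function drawSpan(y: number, x0: number, x1: number,
                  paint: (x0: number, x1: number) => void): void {
  let pieces: Span[] = [{ x0, x1 }];
  for (const c of covered[y]) {
    const remaining: Span[] = [];
    for (const p of pieces) {
      if (p.x1 <= c.x0 || p.x0 >= c.x1) { remaining.push(p); continue; }  // no overlap
      if (p.x0 < c.x0) remaining.push({ x0: p.x0, x1: c.x0 });            // part left of c
      if (p.x1 > c.x1) remaining.push({ x0: c.x1, x1: p.x1 });            // part right of c
    }
    pieces = remaining;
  }
  for (const p of pieces) {
    paint(p.x0, p.x1);
    covered[y].push(p);
  }
}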

Another approach is Warnock's algorithm. It uses recursion, which makes it hard to use on GPUs, but a CPU handles it fine (use your own stack if you are worried about stack overflow). The idea is to divide the screen into 4 parts and check whether each part is simple enough to fill directly (e.g. every polygon either covers the whole part or misses it entirely). If not, split it again until the condition is met (in the worst case it will be met at the pixel level). Pseudocode:

void warnock(Rectangle rect)
{
  if (every polygon either covers rect completely or misses it entirely
      || rect is a single pixel)  {
    // Trivial case: fill the rectangle with the closest covering polygon
    float minZ = infinity;
    foreach (polygon covering rect)  {
      float z = interpolateZ(polygon, rect.x + rect.width / 2, rect.y + rect.height / 2);  // Z at the centre of the rectangle
      if (z < minZ)  {  // Closer polygons overwrite farther ones
        fillRect(rect, polygon.color);
        minZ = z;
      }
    }
  } else {
    // Otherwise divide into 4 subrectangles and recurse
    warnock(Rectangle(rect.x, rect.y, rect.width / 2, rect.height / 2));                                      // Top left
    warnock(Rectangle(rect.x + rect.width / 2, rect.y, rect.width / 2, rect.height / 2));                     // Top right
    warnock(Rectangle(rect.x, rect.y + rect.height / 2, rect.width / 2, rect.height / 2));                    // Bottom left
    warnock(Rectangle(rect.x + rect.width / 2, rect.y + rect.height / 2, rect.width / 2, rect.height / 2));   // Bottom right
  }
}

The painter's algorithm is what you have done with your cubes; the only difference is that it sorts individual polygons instead of whole objects. Even then it is difficult to handle intersecting and cyclically overlapping polygons, and I personally wouldn't use it for non-trivial scenes.

Another algorithm you might use is backface culling. It works only for convex objects that do not overlap, though. The algorithm calculates the normal of each polygon, and if the normal points away from the camera, the polygon is removed.
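A sketch of the test in TypeScript (again reusing the Vec3/Face types from above; it assumes counter-clockwise winding and outward-facing normals). Note that for a perspective projection you test against the vector from the eye to the face rather than the fixed view direction, which is also relevant to the discussion in the comments below:

// Face normal from the first three vertices (assumes counter-clockwise winding).
function normal(f: Face): Vec3 {
  const [a, b, c] = f.vertices;
  const u = { x: b.x - a.x, y: b.y - a.y, z: b.z - a.z };
  const v = { x: c.x - a.x, y: c.y - a.y, z: c.z - a.z };
  return {
    x: u.y * v.z - u.z * v.y,
    y: u.z * v.x - u.x * v.z,
    z: u.x * v.y - u.y * v.x,
  };
}

// Perspective case: test the normal against the vector from the eye to a
// point on the face. (For an orthographic projection you can use the fixed
// view direction instead of `toFace`.)
function isBackface(f: Face, eye: Vec3): boolean {
  const n = normal(f);
  const a = f.vertices[0];
  const toFace = { x: a.x - eye.x, y: a.y - eye.y, z: a.z - eye.z };
  return n.x * toFace.x + n.y * toFace.y + n.z * toFace.z >= 0;
}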

Raycasting is another way to determine visibility per pixel. It is, however, quite CPU intensive. The basic idea is to check, for each pixel of the screen, which polygon it intersects (i.e. which polygon is hit by the ray cast through that pixel). The rays originate at the eye position. Pseudocode:

foreach (pixel in screen)  {
  float minZ = infinity;  // Can be zfar from the perspective projection
  Color pixelColor = backgroundColor;
  foreach (polygon in projectedPolygons)  {
    if (polygon contains Point(pixel.x, pixel.y))  {
      float z = interpolateZ(polygon, pixel.x, pixel.y);  // Z of this polygon at (x, y), interpolated from its vertices
      if (z < minZ)  {  // Keep the closest polygon seen so far
        minZ = z;
        pixelColor = polygon.color;
      }
    }
  }
  drawPixel(pixel.x, pixel.y, pixelColor);  // Paint the closest polygon's colour (or the background)
}
dark_charlie
Segment buffering and Warnock's algorithm look like promising candidates; I will see which one I can implement. Thanks a bunch for such a detailed answer. I am already using backface culling in orthographic projection, but I believe it doesn't work accurately with perspective projection.
Jayesh
@Jayesh: Backface culling works for perspective projection as well (if the other conditions mentioned in my answer are met).
dark_charlie
Aah. Your comment made me think, and I believe I now know why backface culling doesn't work for me in perspective projection. I exploit, to the extreme, the fact that all my cubes have the same orientation, so at any time only 3 of their faces are visible. In orthographic mode those are the same 3 faces for all cubes. But it just hit me that this isn't true in perspective projection, and I should compute the hidden back faces for each cube separately. I still need to verify it, but I think that's it. Do you think that will make accurate z-sorting unnecessary?
Jayesh
It will make z-sorting unnecessary if the cubes do not overlap on the screen. If they do, you still need to sort the cubes. And if the cubes intersect (in the scene), this algorithm won't solve it at all.
dark_charlie
I fixed backface culling for perspective mode too. Indeed, the culling algorithms differ between orthographic and perspective projections. For anyone interested, check this: http://goo.gl/QqgM
Jayesh
What method did you use before? Calculating the dot product of the camera view vector and the normal and then checking its sign is a common approach and works for both orthographic and perspective projection.
dark_charlie