Ok, I've been developing a software rasterizer for some time now, but have no idea how to go about benchmarking it to see if it's actually any good. I mean, say you can render X vertices at Y frames per second; what would be a good way to analyse this data to see if it's any good, rather than someone just saying "30 fps with 1 light is good", etc.?

+1  A: 

If you want to determine if it's "any good" you will need to compare your rasterizer with other rasterizers. "30 fps with 1 light" might be extremely good, if no-one else has ever managed to go beyond, say, 10 fps.

Svein Bringsli
That's a fair point... any more "scientific" ways though?
Stowelly
+3  A: 

What do you want to measure? I suggest fillrate and triangle rate. Fillrate is how many pixels your rasterizer can spit out each second; triangle rate is how many triangles your rasterizer plus affine transformation functions can push out each second, independent of the fillrate. Here's my suggestion for measuring both:

To measure the fillrate without getting noise from the time spent on triangle setup, use only two triangles, which together form a quad. Start with a small quad and increase its size in small steps; you should eventually find a size at which the render time approaches one second, so the fill time dominates the setup time. If you don't, you can enable blending and draw full-screen triangle pairs, which is a pretty slow operation and only burns fillrate. The fillrate is then the width x height of the rendered quad times the number of quads drawn, divided by the render time; for example, 4 megapixels/second.
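
As a concrete starting point, here is a minimal C++ sketch of that fillrate measurement. The clear_framebuffer() and draw_quad() calls are hypothetical placeholders for whatever entry points your own rasterizer exposes:

    #include <chrono>
    #include <cstdio>

    // Hypothetical entry points into your own rasterizer.
    void clear_framebuffer();
    void draw_quad(int width, int height); // two triangles covering a width x height rectangle

    void benchmark_fillrate(int width, int height)
    {
        using clock = std::chrono::steady_clock;
        long long frames = 0;
        auto start = clock::now();

        // Draw the quad over and over for roughly one second.
        while (std::chrono::duration<double>(clock::now() - start).count() < 1.0)
        {
            clear_framebuffer();
            draw_quad(width, height);
            ++frames;
        }

        double seconds = std::chrono::duration<double>(clock::now() - start).count();
        double pixels_per_second = static_cast<double>(width) * height * frames / seconds;
        std::printf("fillrate: %.2f megapixels/second\n", pixels_per_second / 1e6);
    }

Run it at the same resolution and color depth you use for the other measurements so the numbers stay comparable.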

To measure the triangle rate, do the same thing, only with many tiny triangles this time. Start with two tiny triangles and increase the number of triangles until the rendering time reaches one second. With tiny triangles, the time spent on triangle/transformation setup dominates the time spent filling them. The unit is triangles per second.
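
A similar sketch for the triangle rate, again assuming a hypothetical draw_triangle() entry point into your rasterizer:

    #include <chrono>
    #include <cstdio>

    // Hypothetical entry point into your own rasterizer.
    void draw_triangle(float x0, float y0, float x1, float y1, float x2, float y2);

    void benchmark_triangle_rate(int triangles_per_frame)
    {
        using clock = std::chrono::steady_clock;
        long long triangles = 0;
        auto start = clock::now();

        while (std::chrono::duration<double>(clock::now() - start).count() < 1.0)
        {
            // Many tiny triangles, so per-triangle setup/transform cost dominates the fill cost.
            for (int i = 0; i < triangles_per_frame; ++i)
            {
                float x = static_cast<float>(i % 100);
                float y = static_cast<float>((i / 100) % 100);
                draw_triangle(x, y, x + 2.0f, y, x, y + 2.0f);
                ++triangles;
            }
        }

        double seconds = std::chrono::duration<double>(clock::now() - start).count();
        std::printf("triangle rate: %.0f triangles/second\n", triangles / seconds);
    }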

Also, the overall time used to render a frame is worth comparing. The render time for a frame is the difference between consecutive frame timestamps, i.e. the delta time. The reciprocal of the delta time is the number of frames per second, assuming the delta time is constant across frames.
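
A rough sketch of that per-frame timing, where render_frame() stands in for whatever your rasterizer does each frame:

    #include <chrono>
    #include <cstdio>

    // Hypothetical per-frame work done by your rasterizer.
    void render_frame();

    int main()
    {
        using clock = std::chrono::steady_clock;
        auto previous = clock::now();

        for (int frame = 0; frame < 1000; ++frame)
        {
            render_frame();
            auto now = clock::now();
            double delta = std::chrono::duration<double>(now - previous).count(); // seconds for this frame
            previous = now;

            if (delta > 0.0)
                std::printf("frame %d: %.3f ms  (%.1f fps)\n", frame, delta * 1000.0, 1.0 / delta);
        }
        return 0;
    }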

Of course, for these numbers to be half-way comparable across rasterizers, you have to use the same techniques and features. Comparing numbers from a rasterizer which uses per-pixel lighting against another which uses flat-shading doesn't make much sense. Resolution and color depth should also be equal.

As for optimization, getting a proper profiler should do the trick. GCC has the GNU profiler, gprof. If you want an opinion on clever things to optimize in a rasterizer, ask that as a separate question. I'll answer to the best of my ability.

Mads Elvheim