Imagine a 4x4 square containing 16 smaller squares, each with associated data describing what it should look like (i.e., opacity, colour, etc.).

Is there an existing, efficient algorithm for converting this set of squares into an OpenGL-compatible triangle strip?

A:

I am not sure I correctly understand the geometry you are trying to render. If it is a kind of grid, here is how I would do it:

Create and fill a Vertex Buffer Object with all your vertices:

8--9--a--b
| /| /| /|
|/ |/ |/ |
4--5--6--7
| /| /| /|
|/ |/ |/ |
0--1--2--3

Create and fill an Element Array Buffer with the indices used to render your quad grid:

{ 0,1,5,4, 1,2,6,5, 2,3,7,6, 4,5,9,8, 5,6,a,9, 6,7,b,a }   (where a = 10 and b = 11, matching the hex vertex labels above)

Set everything up using gl*Pointer, then use glDrawElements with GL_QUADS to render it. The vertex cache will take care of the already-transformed vertices: every quad after the first one only requires transforming 2 new vertices.
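A minimal sketch of that setup in C, assuming a legacy/compatibility-profile context (GL_QUADS does not exist in the core profile) with GL 1.5 buffer objects available; the grid dimensions W and H, the function name, and all variable names are my own assumptions, not anything from the question:

#include <GL/gl.h>

#define W 4  /* vertices per row, as in the diagram above    */
#define H 3  /* vertex rows                                  */

GLfloat  verts[W * H * 2];              /* x,y per vertex    */
GLushort quads[(W - 1) * (H - 1) * 4];  /* 4 indices per quad */
GLuint   vbo, ebo;

void build_and_draw(void)
{
    /* Vertex (x,y) gets index y*W + x, matching the diagram. */
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            verts[(y * W + x) * 2 + 0] = (GLfloat)x;
            verts[(y * W + x) * 2 + 1] = (GLfloat)y;
        }

    /* Quad indices, counter-clockwise, reproducing { 0,1,5,4, ... } above. */
    int n = 0;
    for (int y = 0; y < H - 1; ++y)
        for (int x = 0; x < W - 1; ++x) {
            quads[n++] = y * W + x;            /* bottom-left  */
            quads[n++] = y * W + x + 1;        /* bottom-right */
            quads[n++] = (y + 1) * W + x + 1;  /* top-right    */
            quads[n++] = (y + 1) * W + x;      /* top-left     */
        }

    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

    glGenBuffers(1, &ebo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(quads), quads, GL_STATIC_DRAW);

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, (void *)0);
    glDrawElements(GL_QUADS, n, GL_UNSIGNED_SHORT, (void *)0);
}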

I don't think you will gain anything by tri-stripping or quad-stripping it, except saving some memory in the Element Array Buffer.

If you want to strip it anyway, create the corresponding Element Array Buffer and call glDrawElements once per row. This can be reduced to a single call using the Nvidia-only extension GL_NV_primitive_restart.
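For instance, a sketch of the per-row strip variant, under the same assumed W, H, and bound element buffer as the snippet above; each row of quads becomes one strip of 2*W indices:

/* One GL_TRIANGLE_STRIP per row of quads, alternating upper/lower vertices. */
GLushort strip[(H - 1) * 2 * W];
int n = 0;
for (int y = 0; y < H - 1; ++y)
    for (int x = 0; x < W; ++x) {
        strip[n++] = (y + 1) * W + x;  /* vertex from the row above */
        strip[n++] = y * W + x;        /* vertex from the row below */
    }
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(strip), strip, GL_STATIC_DRAW);

/* One draw call per row; GL_NV_primitive_restart could merge these. */
for (int y = 0; y < H - 1; ++y)
    glDrawElements(GL_TRIANGLE_STRIP, 2 * W, GL_UNSIGNED_SHORT,
                   (void *)(y * 2 * W * sizeof(GLushort)));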

If it isn't a grid, you can give NvTriStrip a try.

tibur
Thanks! This is what I ended up doing, and it seems to have solved my problem.
blueberryfields
I would add that if you make a tristrip, you don't need separate draw calls for each row. You can make degenerate triangles that link the rows together into a single renderable strip (sketched after this comment).
codelark
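A sketch of that degenerate-triangle linking, under the same assumptions as the snippets above. Repeating two indices between rows produces zero-area (degenerate) triangles that the GPU discards, and since each row contributes an even number of indices, the repeat also keeps the strip's winding parity intact:

/* Single strip over the whole grid; rows joined by degenerate triangles. */
GLushort strip[(H - 1) * 2 * W + (H - 2) * 2];
int n = 0;
for (int y = 0; y < H - 1; ++y) {
    if (y > 0) {
        strip[n] = strip[n - 1];   /* repeat last vertex of previous row  */
        n++;
        strip[n++] = (y + 1) * W;  /* repeat first vertex of the next row */
    }
    for (int x = 0; x < W; ++x) {
        strip[n++] = (y + 1) * W + x;
        strip[n++] = y * W + x;
    }
}
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(strip), strip, GL_STATIC_DRAW);
glDrawElements(GL_TRIANGLE_STRIP, n, GL_UNSIGNED_SHORT, (void *)0);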
Now I'm trying to do it with GL_TRIANGLES only, and it ain't quite working out. I'm using an algorithm that turns each square into two triangles, {0,4,1, 4,5,1}, but for some reason it breaks down when I get to grids of 16x16 squares or larger (see the sketch below).
blueberryfields
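For what it's worth, a sketch of that GL_TRIANGLES variant, again with the assumed W and H from above. One thing worth checking: a 16x16 grid of squares has 17*17 = 289 vertices, which overflows GL_UNSIGNED_BYTE indices (maximum 255), so the index type must be at least GL_UNSIGNED_SHORT. That is a plausible cause of a breakdown at exactly that size, though only a guess without seeing the code:

/* Two triangles per square, mirroring the {0,4,1, 4,5,1} pattern above.
 * GLushort indices are safe up to 65535 vertices; GLubyte is not past 255. */
GLushort tris[(W - 1) * (H - 1) * 6];
int n = 0;
for (int y = 0; y < H - 1; ++y)
    for (int x = 0; x < W - 1; ++x) {
        GLushort a = y * W + x;  /* bottom-left  */
        GLushort b = a + 1;      /* bottom-right */
        GLushort c = a + W;      /* top-left     */
        GLushort d = c + 1;      /* top-right    */
        tris[n++] = a; tris[n++] = c; tris[n++] = b;  /* {0,4,1} */
        tris[n++] = c; tris[n++] = d; tris[n++] = b;  /* {4,5,1} */
    }
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(tris), tris, GL_STATIC_DRAW);
glDrawElements(GL_TRIANGLES, n, GL_UNSIGNED_SHORT, (void *)0);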