Hi,

My program uses PyOpenGL (so it's Python) with psyco.

I have around 21,000 line segments which I need to render in each frame (unless the user zooms in, in which case line segments are culled and not sent to the card at all). This currently takes around 1.5 seconds per frame, which just isn't good enough, so I'm looking at ways to reduce the number of distinct line segments.

I imagine there are cases where multiple line segments could be merged into one longer line, but I honestly don't know where to begin with this. I do have the start and end point of each line stored, which might help. Note that I can take as long as I need at startup, and memory usage isn't much of a concern.

Any ideas would be much appreciated.

A: 

20K segments isn't that much. Also, you'll be lucky if you can merge 10-100 lines, so the speedup from that optimization will be negligible. The rendering process is probably slow because you recreate the model again and again. Use glNewList() to record all the rendering commands in an OpenGL display list on the card, and then just issue glCallList() to render it with a single command.
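In PyOpenGL, a minimal sketch of the display-list approach might look like the following (the build_line_list name and the segments structure are assumptions, not from the question):

from OpenGL.GL import *

def build_line_list(segments):
    """Record all line segments into a display list, once at startup."""
    list_id = glGenLists(1)
    glNewList(list_id, GL_COMPILE)  # record the commands without drawing
    glBegin(GL_LINES)
    for (x1, y1), (x2, y2) in segments:
        glVertex2f(x1, y1)
        glVertex2f(x2, y2)
    glEnd()
    glEndList()
    return list_id

# Per frame, replaying the recorded list is then a single call:
# glCallList(list_id)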

Aaron Digulla
A: 

You can define an error metric for merging two line segments into one, test all pairs of segments, and merge a pair whenever the error is below a certain threshold.

One example is this algorithm:

  1. Construct a new line segment X from the two endpoints farthest away from each other among the four endpoints of line segments A and B.
  2. Find the minimum distance to X for each of the four endpoints of A and B.
  3. Assign the error as the maximum of those minimum distances.
  4. Replace A and B with X if the error is below your threshold.

This isn't the best algorithm, but it is easy to implement; a rough sketch follows.
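A Python sketch of the above, assuming each segment is stored as a ((x1, y1), (x2, y2)) tuple (the function names are mine, not from any library):

import math

def point_segment_distance(p, a, b):
    """Minimum distance from point p to the segment a-b."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    length_sq = dx * dx + dy * dy
    if length_sq == 0.0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamped to the endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / length_sq))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def try_merge(seg_a, seg_b, threshold):
    """Return merged segment X, or None if the error exceeds the threshold."""
    points = [seg_a[0], seg_a[1], seg_b[0], seg_b[1]]
    # Step 1: X spans the two endpoints farthest from each other.
    x_start, x_end = max(
        ((p, q) for p in points for q in points),
        key=lambda pq: math.hypot(pq[0][0] - pq[1][0], pq[0][1] - pq[1][1]))
    # Steps 2-3: the error is the worst distance from any endpoint to X.
    error = max(point_segment_distance(p, x_start, x_end) for p in points)
    # Step 4: merge only if the error is acceptable.
    return (x_start, x_end) if error <= threshold else None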

Edit 1

Definitely try doing display lists or vertex buffer object rendering before implementing this.

tkerwin
+4  A: 

It's almost certainly the overhead of all the immediate-mode function calls that's killing your performance. I would do the following.

Don't use GL_LINE_STRIP; use a single list of GL_LINES instead so all the segments can be rendered in one go.

Use glDrawArrays instead of immediate mode rendering:

float coordinates[] = {....}; //x and y coordinate pairs for all line segments
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 2 * sizeof(float), coordinates);
glDrawArrays(GL_LINES, 0, 2 * linecount);
glDisableClientState(GL_VERTEX_ARRAY);
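
Since your program is in Python, a possible PyOpenGL equivalent would be the sketch below (the all_endpoints name is a placeholder for however you hold the data; numpy is assumed):

import numpy
from OpenGL.GL import *

# Flatten to float32: x1, y1, x2, y2, ... for every segment endpoint.
coordinates = numpy.asarray(all_endpoints, dtype=numpy.float32).ravel()

glEnableClientState(GL_VERTEX_ARRAY)
glVertexPointer(2, GL_FLOAT, 0, coordinates)  # stride 0 = tightly packed
glDrawArrays(GL_LINES, 0, len(coordinates) // 2)
glDisableClientState(GL_VERTEX_ARRAY)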

(For even better performance you can store the vertex data in something called a vertex buffer object, but this should be fine to begin with.)
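
PyOpenGL ships a convenience wrapper for vertex buffer objects in OpenGL.arrays.vbo; here is a minimal sketch, reusing the same assumed all_endpoints data as above:

import numpy
from OpenGL.GL import *
from OpenGL.arrays import vbo

data = numpy.asarray(all_endpoints, dtype=numpy.float32).ravel()
line_vbo = vbo.VBO(data)  # uploaded to the card on first bind

# Per frame: the data stays on the card, only the draw call crosses over.
line_vbo.bind()
glEnableClientState(GL_VERTEX_ARRAY)
glVertexPointer(2, GL_FLOAT, 0, line_vbo)
glDrawArrays(GL_LINES, 0, len(data) // 2)
glDisableClientState(GL_VERTEX_ARRAY)
line_vbo.unbind()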

One final thing: if you're doing culling on a per-line basis, it's probably faster to just skip it and send all the lines to the GPU.

Andreas Brinck
Doing it with glDrawArrays has halved the render time for the set of lines, but it still takes 0.7 seconds per frame. I'll have a look at VBOs and see if they can knock that number down further.
Matthew Iselin
I switched to using a VBO and saw an *instant* speed improvement: from 0.7 seconds a frame to microseconds. Thanks! :)
Matthew Iselin