I'm looking for a good algorithm that can give me the unique edges from a set of polygon data. In this case, the polygons are defined by two arrays: one array holds the number of points in each polygon, and the other is a flat list of vertex indices.

I have a version that works, but performance gets slow once I go past 500,000 polys. My version walks over each face and inserts each edge's sorted vertex pair into a std::set. My data set will be primarily triangle and quad polys, and most edges will be shared.
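
For reference, a minimal sketch of my current approach (the array and function names here are just illustrative, not my actual code):

    #include <set>
    #include <utility>
    #include <vector>

    // counts[i] = number of points in polygon i; indices = flattened vertex indices
    std::set<std::pair<int, int>> uniqueEdges(const std::vector<int>& counts,
                                              const std::vector<int>& indices)
    {
        std::set<std::pair<int, int>> edges;
        std::size_t offset = 0;
        for (int count : counts) {
            for (int j = 0; j < count; ++j) {
                int a = indices[offset + j];
                int b = indices[offset + (j + 1) % count];  // wrap around to close the polygon
                if (a > b) std::swap(a, b);                 // sort so A-B and B-A are the same key
                edges.insert(std::make_pair(a, b));
            }
            offset += count;
        }
        return edges;
    }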

Is there a smarter algorithm for this?

+2  A: 

Just to clarify, you want, for a polygon list like this:

A +-----+ B
   \    |\
    \ 1 | \
     \  |  \
      \ | 2 \
       \|    \
      C +-----+ D

Then instead of edges like this:

A - B -+
B - C  +- first polygon
C - A -+

B - D -+
D - C  +- second polygon
C - B -+

you want to remove the duplicate B - C vs. C - B edge and share it between the two polygons?

What kind of performance problem are you seeing with your algorithm? I'd expect a hash-based set with a reasonable hash function to perform pretty well. On the other hand, if the hash isn't a good fit for the data, you'll get lots of collisions, which can hurt performance badly.
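
For example (just a sketch, assuming C++ and std::unordered_set, not your actual code), you could key the set on the sorted index pair with a custom hash:

    #include <cstdint>
    #include <unordered_set>
    #include <utility>

    struct EdgeHash {
        std::size_t operator()(const std::pair<int, int>& e) const {
            // pack the sorted pair into one 64-bit value and hash that
            std::uint64_t key = (static_cast<std::uint64_t>(e.first) << 32) |
                                static_cast<std::uint32_t>(e.second);
            return std::hash<std::uint64_t>()(key);
        }
    };

    std::unordered_set<std::pair<int, int>, EdgeHash> edges;
    // insert with the smaller index first so B-C and C-B collapse to one entry:
    // edges.insert({std::min(b, c), std::max(b, c)});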

Lasse V. Karlsen
+2  A: 

Yes.
Use a double hash map.
Every edge has two indices, A and B; let's say that A > B.
The first, top-level hash map maps A to another hash map, which in turn maps B to some value representing the information you want about each edge (or just a bool if you don't need to keep per-edge information).
Essentially this creates a two-level tree composed of hash maps.

To look up an edge in this structure, take the larger index and look it up in the top-level map to get the second hash map, then take the smaller index and look it up in that second map.
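
A rough sketch of that structure with std::unordered_map (the names and the stored value type are just placeholders):

    #include <unordered_map>
    #include <utility>

    // value stored per edge; use bool (or an empty struct) if you only need existence
    struct EdgeInfo { int faceCount = 0; };

    // outer map keyed on the larger index, inner map keyed on the smaller one
    using EdgeMap = std::unordered_map<int, std::unordered_map<int, EdgeInfo>>;

    EdgeInfo& findOrAddEdge(EdgeMap& edges, int a, int b)
    {
        if (a < b) std::swap(a, b);   // ensure a > b before the two lookups
        return edges[a][b];           // first level by a, second level by b
    }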

shoosh
+1  A: 

You are both correct. Using a good hash set got the performance well beyond the required level. I ended up rolling my own little hash set.

The total number of unique edges will be between N/2 and N, where N is the total number of per-face edges (the same as the length of the vertex index array). If every edge is shared by two faces there are N/2 unique edges; if no edges are shared there are N. From there I allocate a buffer of uint64s and pack each edge's pair of indices into one value. Using a small set of hash tables over those packed keys, I can find the unique edges fast!
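
The packing step looks roughly like this (std::unordered_set standing in here for my hand-rolled hash set):

    #include <cstdint>
    #include <unordered_set>

    // pack the sorted index pair into a single 64-bit key
    inline std::uint64_t packEdge(std::uint32_t a, std::uint32_t b)
    {
        if (a < b) { std::uint32_t t = a; a = b; b = t; }
        return (static_cast<std::uint64_t>(a) << 32) | b;
    }

    std::unordered_set<std::uint64_t> edgeKeys;  // stand-in for the custom hash set
    // edgeKeys.insert(packEdge(v0, v1));        // one insert per face edge; duplicates collapse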

Peter Shinners