I'm using this marching cubes algorithm to draw 3D isosurfaces (ported into C#, outputting MeshGeometry3Ds, but otherwise the same). The resulting surfaces look great, but are taking a long time to calculate.

Are there any ways to speed up marching cubes? The most obvious one is to simply reduce the spatial sampling rate, but this reduces the quality of the resulting mesh. I'd like to avoid this.

I'm considering a two-pass system, where the first pass samples space much more coarsely, eliminating volumes where the field strength is well below my isolevel. Is this wise? What are the pitfalls?

Edit: the code has been profiled, and the bulk of CPU time is split between the marching cubes routine itself and the field strength calculation for each grid cell corner. The field calculations are beyond my control, so speeding up the cubes routine is my only option...

I'm still drawn to the idea of trying to eliminate dead space, since this would reduce the number of calls to both systems considerably.
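Roughly, the coarse first pass I have in mind would look something like this (just a sketch; SampleField, blockSize and margin are placeholder names, and the margin is only a rough guard against sampling artifacts, not a guarantee):

    using System;

    static class CoarsePass
    {
        // First pass: sample the eight corners of a coarse block and skip the
        // block if every corner is well below the isolevel (minus a margin).
        public static bool BlockMightContainSurface(
            Func<double, double, double, double> sampleField,
            double x0, double y0, double z0,
            double blockSize, double isoLevel, double margin)
        {
            for (int i = 0; i < 8; i++)
            {
                double x = x0 + (((i & 1) != 0) ? blockSize : 0.0);
                double y = y0 + (((i & 2) != 0) ? blockSize : 0.0);
                double z = z0 + (((i & 4) != 0) ? blockSize : 0.0);

                // Keep the block if any corner gets anywhere near the isolevel.
                if (sampleField(x, y, z) >= isoLevel - margin)
                    return true;
            }

            return false; // every corner is well below the isolevel: dead space
        }
    }

Only blocks that pass this test would be handed to the fine-grained marching cubes pass.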

+1  A: 

Just in case anyone else ends up here: dead-space elimination through a coarser sampling rate makes virtually no difference. Any remotely safe coarser sampling (i.e. one that leaves a border for sampling artifacts) ends up grabbing most of the grid anyway for any non-trivial field.

Speeding up the underlying field evaluation (with heavy memoisation) seemed to mostly solve the performance problems.
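For the curious, the memoisation amounts to something like this (a sketch with placeholder names; the point is that each interior grid corner is shared by up to eight cells, so caching by integer grid index avoids recomputing the same field value over and over):

    using System;
    using System.Collections.Generic;

    class FieldCache
    {
        private readonly Func<double, double, double, double> _field;
        private readonly double _cellSize;
        private readonly Dictionary<(int, int, int), double> _cache
            = new Dictionary<(int, int, int), double>();

        public FieldCache(Func<double, double, double, double> field, double cellSize)
        {
            _field = field;
            _cellSize = cellSize;
        }

        // Field strength at grid corner (i, j, k), computed at most once.
        public double At(int i, int j, int k)
        {
            var key = (i, j, k);
            if (!_cache.TryGetValue(key, out double value))
            {
                value = _field(i * _cellSize, j * _cellSize, k * _cellSize);
                _cache[key] = value;
            }
            return value;
        }
    }

A flat 3D array works just as well if the grid bounds are fixed up front, and avoids the dictionary overhead.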

Dan Vinton
A: 

Try marching tetrahedra instead -- the math is simpler, allowing you to consider fewer cases per cell.
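One common split is six tetrahedra around the cube's main diagonal (the Kuhn/Freudenthal decomposition); each tetrahedron then has only 2^4 = 16 sign configurations to handle instead of the cube's 256. A sketch, with the per-tetrahedron triangulation tables left out:

    static class MarchingTetrahedra
    {
        // Corner i of the unit cube sits at (i & 1, (i >> 1) & 1, (i >> 2) & 1),
        // so corner 0 is (0,0,0) and corner 7 is (1,1,1). Each row is one
        // tetrahedron of the six-way split around the 0-7 diagonal.
        static readonly int[][] CubeTetrahedra =
        {
            new[] { 0, 1, 3, 7 },
            new[] { 0, 1, 5, 7 },
            new[] { 0, 2, 3, 7 },
            new[] { 0, 2, 6, 7 },
            new[] { 0, 4, 5, 7 },
            new[] { 0, 4, 6, 7 },
        };

        // Case index for one tetrahedron: one bit per corner above the isolevel,
        // giving 16 possible configurations instead of 256 for a whole cube.
        static int TetraCaseIndex(double[] cornerValues, int[] tetra, double isoLevel)
        {
            int index = 0;
            for (int i = 0; i < 4; i++)
                if (cornerValues[tetra[i]] >= isoLevel)
                    index |= 1 << i;
            return index;
        }
    }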

Crashworks
I don't think this is a good idea... the math is simpler, but you'll have to process many more tetrahedra than you would cubes for a given grid resolution. Here's a link to a survey paper with pointers to possible optimizations, among other things. It's a bit old (2006) but I don't think there's been all that much revolutionary research on it lately. http://graphics.ethz.ch/teaching/scivis_common/Literature/Newman06.pdf
More tetrahedra, but less computation for each one, fewer dependent ops, and possibly more parallelizable. In truth I don't know; I mention it because we're planning an experiment replacing a marching-cubes implementation with marching tetrahedra ourselves, and I'm curious whether anyone else has tried it and measured.
Crashworks