tags:

views: 464

answers: 1

Hello, I would like to render 3D volume data: density (which can be mapped to the alpha channel) and temperature (which can be mapped to RGB). Currently I am simulating a maximum intensity projection, i.e. rendering the most dense/opaque pixel in the end, but this method loses the depth perception.

I would like to imitate an effect like a fire inside smoke.

So my question is: what techniques are there in OpenGL to generate images from the available data?

Any idea is welcome.

Thanks, Arman.

+1  A: 

I would try a volume ray caster first.

You can google "Volume Visualization With Ray Casting" and that should give you most of what you need. NVIDIA has a great sample (using OpenGL) of ray casting through a 3D texture.

For your specific implementation, you would just need to keep stepping through the volume, accumulating the temperature until you reach the wanted density.
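That stepping loop can be illustrated with a minimal CPU-side sketch in Python. The `density` and `temperature` fields, the step size, and the density threshold below are all made-up placeholders, not anything from the answer; the point is just the accumulate-until-threshold structure:

```python
def density(x, y, z):
    # Toy field: a dense ball of radius 0.3 around the origin, thin haze elsewhere.
    return 1.0 if x * x + y * y + z * z < 0.09 else 0.05

def temperature(x, y, z):
    # Toy field: hotter near the center, zero far away.
    return max(0.0, 1.0 - 4.0 * (x * x + y * y + z * z))

def march(origin, direction, step=0.01, wanted_density=0.5, max_dist=2.0):
    """Step along the ray, accumulating density-weighted temperature
    until the accumulated density reaches the wanted threshold."""
    acc_density = 0.0
    acc_temp = 0.0
    t = 0.0
    while t < max_dist and acc_density < wanted_density:
        x = origin[0] + t * direction[0]
        y = origin[1] + t * direction[1]
        z = origin[2] + t * direction[2]
        d = density(x, y, z) * step          # density contributed by this step
        acc_density += d
        acc_temp += d * temperature(x, y, z) # temperature weighted by density
        t += step
    return acc_temp, acc_density
```

A ray through the dense ball saturates the threshold and picks up temperature; a ray that misses it accumulates almost nothing. In a real fragment shader this loop would sample a 3D texture instead of calling Python functions.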

If your volume doesn't fit in video memory, you can do the ray casting in pieces and then do a composition step.

A quick description of ray casting:

CPU: 1) Render a six-sided cube in world space as the drawing primitive; make sure to use depth culling.

Vertex shader: 2) In the vertex shader, store off the world position of the vertices (this will be interpolated per fragment).

Fragment shader: 3) Use the interpolated position minus the camera position to get the vector of traversal through the volume. 4) Use a while loop to step through the volume from the point on the cube through to the other side. There are three ways to know when to end: A) at each step, test whether the point is still inside the cube; B) do a ray intersection with the cube and calculate the distance between the intersections; C) do a pre-render of the cube with front-face culling and store the depths into a second texture map, then just sample at the screen pixel to get the distance.

5) Accumulate while you loop and set the pixel color.
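Option B in step 4 relies on a ray/box intersection. Here is a minimal slab-method sketch in Python (the function name and the unit-cube default bounds are illustrative assumptions, not from the answer); it returns the entry and exit distances along the ray, whose difference is the traversal length to march:

```python
def ray_box(origin, direction, box_min=(0.0, 0.0, 0.0), box_max=(1.0, 1.0, 1.0)):
    """Slab-method ray/AABB intersection.
    Returns (t_near, t_far) distances along the ray, or None on a miss."""
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:
            if o < lo or o > hi:
                return None          # ray parallel to this slab and outside it
            continue
        t0, t1 = (lo - o) / d, (hi - o) / d
        if t0 > t1:
            t0, t1 = t1, t0          # order the slab intersections
        t_near, t_far = max(t_near, t0), min(t_far, t1)
    if t_near > t_far or t_far < 0.0:
        return None                  # slabs don't overlap, or box is behind us
    return t_near, t_far
```

On the GPU the same test is typically done once per fragment before the marching loop, so the loop count is bounded by the actual chord length through the cube.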

Jose
@Jose: Thank you for your response. Yes, you are right that ray casting is probably the way to go. I made several attempts at ray casting following NVIDIA's example, but I never achieved a high dynamic resolution with the ray casting method. The problem is deeper: in order to make a 3D texture I am interpolating the particle data onto a grid, and that loses resolution. You can see an image rendered using particles by pmviewer: http://www.aip.de/People/AKhalatyan/images/HALO198_2048.jpg . Can your algorithm be adapted for the textured particles?
Arman
Is your raw data actually a point cloud? You then create a 3D texture out of it, but the resolution of the texture is causing you to lose precision. Correct me if I'm wrong.
Jose
Two approaches come to mind, but I don't know if they will have the performance that you need. Either way you need to partition your data (maybe an octree) and cull on render. 1) Nailboard each particle (depth-based sprite). 2) Use a modified ray tracer: each point becomes a sphere, and you launch a ray at each pixel of the screen. Instead of reflecting off the sphere you would pass through it until the density reaches the threshold, then return the temperature accumulation.
Jose
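The modified ray tracer in the last comment can be sketched on the CPU in Python. The particle representation (a dict with center, radius, density, and temperature), the threshold, and the helper names below are all hypothetical; the sketch just shows passing through spheres front to back and accumulating until the density threshold is reached:

```python
import math

def ray_sphere_span(origin, direction, center, radius):
    """Entry/exit distances of a unit-length-direction ray through a sphere,
    or (0.0, 0.0) if the ray misses it."""
    oc = [o - c for o, c in zip(origin, center)]
    b = sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    if disc <= 0.0:
        return 0.0, 0.0
    sq = math.sqrt(disc)
    t0, t1 = -b - sq, -b + sq
    if t1 <= 0.0:
        return 0.0, 0.0              # sphere entirely behind the ray origin
    return max(t0, 0.0), t1

def trace(origin, direction, particles, threshold=1.0):
    """Pass through particle spheres in depth order, accumulating
    density-weighted temperature until the density threshold is reached."""
    hits = []
    for p in particles:              # each p: {"center", "radius", "density", "temperature"}
        t0, t1 = ray_sphere_span(origin, direction, p["center"], p["radius"])
        if t1 > t0:
            hits.append((t0, t1 - t0, p))
    hits.sort(key=lambda h: h[0])    # nearest sphere first
    acc_density = 0.0
    acc_temp = 0.0
    for _, chord, p in hits:
        acc_density += p["density"] * chord               # optical depth of this chord
        acc_temp += p["density"] * chord * p["temperature"]
        if acc_density >= threshold:
            break
    return acc_temp, acc_density
```

In practice the octree partition mentioned above would replace the linear scan over all particles, so each ray only tests the spheres in the cells it actually crosses.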