Hello, I am trying to write optimized code that renders a 3D scene with OpenGL onto a sphere and then displays the unwrapped sphere on the screen, i.e. it produces a planar map of a purely reflective sphere. In math terms, I would like to produce a projection map where the x axis is the polar angle and the y axis is the azimuth.
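To pin down the convention, the mapping I have in mind is something like this (a small C sketch; the exact axis assignment is of course a free choice):

    #include <math.h>

    #define PI 3.14159265358979323846f

    /* Map a pixel of the output image, normalized to (u, v) in [0,1]^2,
     * to a unit direction on the sphere: u covers the polar angle and
     * v the azimuth. */
    static void uv_to_direction(float u, float v, float dir[3])
    {
        float polar   = u * PI;        /* x axis: polar angle in [0, pi] */
        float azimuth = v * 2.0f * PI; /* y axis: azimuth in [0, 2*pi]   */
        dir[0] = sinf(polar) * cosf(azimuth);
        dir[1] = sinf(polar) * sinf(azimuth);
        dir[2] = cosf(polar);
    }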

I am trying to do this by placing the camera at the center of the sphere probe and taking planar shots all around, so as to approximate spherical quads with planar tiles of the frustum. Then I can use the result as a texture to apply to a distorted planar patch.
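In fixed-function terms, the brute-force capture would look roughly like this (a sketch only; render_scene() and the copy of each shot into a texture are assumed to exist elsewhere):

    #include <GL/glu.h>

    extern void render_scene(void); /* assumed user callback */

    /* Six 90-degree planar shots from the probe center, one per axis
     * direction, covering the full sphere of directions. */
    static const float dirs[6][3] = {
        { 1, 0, 0}, {-1, 0, 0}, {0, 1, 0}, {0, -1, 0}, {0, 0, 1}, {0, 0, -1}
    };
    static const float ups[6][3] = {
        {0, -1, 0}, {0, -1, 0}, {0, 0, 1}, {0, 0, -1}, {0, -1, 0}, {0, -1, 0}
    };

    void capture_probe(float cx, float cy, float cz)
    {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(90.0, 1.0, 0.1, 1000.0); /* square 90-degree frustum */
        glMatrixMode(GL_MODELVIEW);
        for (int i = 0; i < 6; ++i) {
            glLoadIdentity();
            gluLookAt(cx, cy, cz,
                      cx + dirs[i][0], cy + dirs[i][1], cz + dirs[i][2],
                      ups[i][0], ups[i][1], ups[i][2]);
            render_scene();
            /* ...then glCopyTexSubImage2D the shot into tile i... */
        }
    }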

This seems to me a pretty tedious approach, and I wonder if there is a way to take this on using shaders or some other GPU-smart method.

Thank you

S.

A: 

By the time you've bothered to build the model, take the planar shots, apply non-affine transformations and stitch the whole thing together, you've probably gained no performance and added considerable complexity. Just project the planar image mathematically and be done with it.

msw
Well, the problem is that I need to perform this operation for a large number of sphere probes, so doing a quasi-manual rasterization is really not an option. I think the GPU way makes sense; I just want to work around the fact that non-linear transforms are not doable by the vanilla OpenGL pipeline.
Steve
@Steve: Are you computing spherical harmonics?
Calvin1602
A: 

You seem to be asking for OpenGL's sphere mapping. NeHe has a tutorial on sphere mapping that might be useful.

Jerry Coffin
No, sphere mapping maps UV coordinates on a sphere. He wants to render a scene on a sphere.
Calvin1602
Quite right. It is the reverse I am looking for. Thank you though.
Steve
A: 

I can give you two solutions.

The first is to make a standard render-to-texture, but with a cubemap attached as the destination buffer. If your hardware is recent enough, it can be done in a single pass. This will deal with all the needed math in HW for you, but the data distribution of cubemaps isn't ideal (quite a lot of distortion near the corners). In most cases it should be enough, though.
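A rough sketch of the setup (assuming FBO support; formats and sizes are illustrative, and the true single-pass variant would additionally need layered rendering through a geometry shader writing gl_Layer):

    #include <GL/glew.h>

    extern void render_scene(void); /* assumed user callback */

    /* Create a cubemap and render the scene into it, attaching one face
     * at a time to an FBO (six passes here; a geometry shader would
     * collapse this to one). */
    GLuint make_probe_cubemap(int size)
    {
        GLuint cubemap, fbo, depth;

        glGenTextures(1, &cubemap);
        glBindTexture(GL_TEXTURE_CUBE_MAP, cubemap);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        for (int face = 0; face < 6; ++face)
            glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_RGBA8,
                         size, size, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

        glGenRenderbuffers(1, &depth);
        glBindRenderbuffer(GL_RENDERBUFFER, depth);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, size, size);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                  GL_RENDERBUFFER, depth);
        glViewport(0, 0, size, size);

        for (int face = 0; face < 6; ++face) {
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                   GL_TEXTURE_CUBE_MAP_POSITIVE_X + face,
                                   cubemap, 0);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            /* set the 90-degree view matrix for this face, then: */
            render_scene();
        }
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        return cubemap;
    }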

After this, you render a quad to the screen, and in a shader you map your UV coordinates to xyz vectors using the straightforward spherical mapping. The HW will compute for you which face of the cubemap to sample, and at which UV.
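Something like this for the unwrap shader (GLSL, shown here as a C string; the axis convention is just one possible choice and must match the one used when the map is read back):

    /* Fragment shader for the unwrap pass: each texel of the full-screen
     * quad becomes a direction via the spherical mapping, and the HW
     * picks the cubemap face and UV. */
    static const char *unwrap_fragment_src =
        "uniform samplerCube probe;                     \n"
        "varying vec2 uv; /* in [0,1]^2 */              \n"
        "void main() {                                  \n"
        "    float polar   = uv.x * 3.14159265;         \n"
        "    float azimuth = uv.y * 2.0 * 3.14159265;   \n"
        "    vec3 dir = vec3(sin(polar) * cos(azimuth), \n"
        "                    sin(polar) * sin(azimuth), \n"
        "                    cos(polar));               \n"
        "    gl_FragColor = textureCube(probe, dir);    \n"
        "}                                              \n";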

The second is more or less the same, but with a custom deformation and less HW support: dual paraboloids. Two paraboloids may not be enough, but you are free to slightly modify the equations and make 6 passes. The rendering pass is the same, but this time you're on your own to choose the right texture and compute the UVs.
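For reference, the paraboloid lookup itself is short (a sketch of the usual dual-paraboloid mapping: front map for d.z >= 0, back map otherwise):

    /* Map a unit direction d to UV coordinates in [0,1]^2 on one of the
     * two paraboloid textures. Returns 0 for the front map, 1 for the
     * back map. */
    int paraboloid_uv(const float d[3], float *u, float *v)
    {
        int back = d[2] < 0.0f;
        float denom = 1.0f + (back ? -d[2] : d[2]);
        *u = d[0] / denom * 0.5f + 0.5f;
        *v = d[1] / denom * 0.5f + 0.5f;
        return back;
    }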

Calvin1602
Thank you very much, C. I think the render-to-texture idea seems very interesting and potentially extremely efficient. I will try it and check the error; if it is too large, I will look into your second suggestion, which seems more involved. Appreciated. S.
Steve
Btw, instead of paraboloids, I would rather redirect you to "isocube" and "uniform cubemaps" in ShaderX 6. (It's still the same idea.)
Calvin1602