views: 870
answers: 4

I need the fastest sphere-mapping algorithm, something like Bresenham's line-drawing algorithm, or like the implementation I saw in Star Control 2 (rotating planets).
Are there any already invented and/or implemented techniques for this?

I really don't want to reinvent the wheel. Please, help...

Description of the problem.

I have a region on the 2D surface where the sphere has to appear. The sphere (let it be the Earth) has to be textured with a detailed map and has to be able to scale and rotate freely. I want to implement it with a lookup table or some simple coordinate-transformation function: each pixel on the 2D image of the sphere is computed from a number of pixels of the cylindrical map of the sphere. This gives me the ability to antialias the resulting image.

I am also thinking about using mipmaps for the case where one pixel of the resulting picture corresponds to more than one pixel of the original map (for example, close to the poles of the sphere). Deep inside I feel that this can be implemented with some fairly trivial math, but so far all these thoughts are just my own.
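To make the idea concrete, here is a rough sketch in C of the inverse mapping I have in mind (all names are placeholders of mine, not an existing API): for every screen pixel inside the sphere's disc, reconstruct the point on the unit sphere, rotate it with a 3x3 matrix, convert to longitude/latitude, and fetch the texel from the cylindrical map.

    #include <math.h>
    #include <stdint.h>

    /* Sketch only: inverse-map every screen pixel of the sphere's disc to a
       texel of an equirectangular (cylindrical) map. rot is a 3x3 rotation
       matrix in row-major order; all names are placeholders. */
    void draw_sphere(uint32_t *screen, int scr_w,
                     const uint32_t *map, int map_w, int map_h,
                     int cx, int cy, int radius, const double rot[9])
    {
        const double PI = 3.14159265358979323846;
        for (int y = -radius; y <= radius; ++y) {
            for (int x = -radius; x <= radius; ++x) {
                double nx = (double)x / radius;
                double ny = (double)y / radius;
                double r2 = nx * nx + ny * ny;
                if (r2 > 1.0) continue;            /* outside the disc */
                double nz = sqrt(1.0 - r2);        /* visible hemisphere */

                /* rotate the point on the unit sphere */
                double sx = rot[0]*nx + rot[1]*ny + rot[2]*nz;
                double sy = rot[3]*nx + rot[4]*ny + rot[5]*nz;
                double sz = rot[6]*nx + rot[7]*ny + rot[8]*nz;

                /* longitude/latitude -> texel of the cylindrical map */
                double lon = atan2(sx, sz);        /* -pi .. pi     */
                double lat = asin(sy);             /* -pi/2 .. pi/2 */
                int u = (int)((lon / (2.0 * PI) + 0.5) * (map_w - 1));
                int v = (int)((lat / PI + 0.5) * (map_h - 1));

                screen[(cy + y) * scr_w + (cx + x)] = map[v * map_w + u];
            }
        }
    }

The per-pixel trigonometry could presumably be replaced by a precomputed lookup table, and mipmap-level selection and filtering would go where the single texel fetch is now.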

This question is somewhat related to this one: Textured spheres without strong distortion, but that question did not answer mine.

UPD: I assume that I have no hardware support. I want a cross-platform solution.

+2  A: 

The standard way to do this kind of mapping is a cube map: the sphere is projected onto the 6 sides of a cube. Modern graphics cards support this kind of texture at the hardware level, including full texture filtering; I believe mipmapping is also supported.
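If the lookup has to be done in software, the cube-map addressing itself is cheap. A minimal sketch (the function name and the sign conventions here are made up, they are not the OpenGL ones): pick the face from the dominant axis of the direction vector, then project the other two components onto that face.

    #include <math.h>

    /* Sketch only: map a direction (x,y,z) on the sphere to a cube-map face
       index 0..5 and (u,v) in [0,1]^2. The face textures must be generated
       with this same (non-OpenGL) convention. */
    static void dir_to_cubemap(double x, double y, double z,
                               int *face, double *u, double *v)
    {
        double ax = fabs(x), ay = fabs(y), az = fabs(z);
        double ma, uc, vc;

        if (ax >= ay && ax >= az) { *face = x > 0 ? 0 : 1; ma = ax; uc = y; vc = z; }
        else if (ay >= az)        { *face = y > 0 ? 2 : 3; ma = ay; uc = x; vc = z; }
        else                      { *face = z > 0 ? 4 : 5; ma = az; uc = x; vc = y; }

        *u = 0.5 * (uc / ma + 1.0);   /* project onto the face plane */
        *v = 0.5 * (vc / ma + 1.0);
    }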

An alternative method (which is not explicitly supported by hardware, but which can be implemented with reasonable performance in procedural shaders) is dual paraboloid mapping, which projects the sphere onto two opposing paraboloids (each of which is mapped to a circle in the middle of a square texture). The paraboloid projection is not a projective transformation, so you'll need to handle the math "by hand".
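For comparison, the dual-paraboloid lookup is even shorter; again just a sketch under the same "everything by hand" assumption, with made-up names:

    #include <math.h>

    /* Sketch only: map a unit direction (x,y,z) to one of the two paraboloid
       maps (front for z >= 0, back for z < 0) and (u,v) in [0,1]^2. */
    static void dir_to_paraboloid(double x, double y, double z,
                                  int *back, double *u, double *v)
    {
        *back = (z < 0.0);
        double d = 1.0 + fabs(z);     /* paraboloid projection denominator */
        *u = 0.5 * (x / d + 1.0);
        *v = 0.5 * (y / d + 1.0);
    }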

In both cases, the distortion is strictly limited. Due to the hardware support, I recommend the cube map.

re your update, about cross-platform: OpenGL has cube maps as a native texture type, which you can optionally mipmap.
Yes, thanks. But I want to have this on mobile platforms too...
avp
OIC -- you want to do it *all* by hand! In that case, either mapping will do. In my experience, it's the filtering that's the hardest to implement, and you can make the mapping largely independent of that.
Well... yes and no. I can make the algorithm significantly faster if I mix it all together: I can simplify the math and so on.
avp
The sphere should work out nicely that way -- you can get (X,Y,Z) coordinates easily, rotate them with a 3x3 matrix, then test and normalize the result to select and address your texture maps. The parabolic mapping should be simpler in that respect. The filtering is still hard, though...
...for antialiasing, don't think of the map from texture to sphere -- think of the map from texture to screen. You can't keep the edge of the sphere from being distorted, so mipmaps are not enough by themselves to antialias well. Hardware uses anisotropic filtering...
...which is a pain to implement by hand, not to mention slow. Actually, you might consider a radical representation shift, like representing your map as a point cloud, and transforming the points *forward* instead of mapping pixels backwards onto a set of textures. Good luck!
Hehe, the point-cloud idea is really funny :) But doesn't that make it *backward* rather than *forward*? I mean, we could then run through this "cloud", and so we would have back-traced coordinates for these points.
avp
+2  A: 

There is a nice new way to do this: HEALPix.

Advantages over any other mapping:

  1. The bitmap can be divided into equal-area parts (very little distortion)
  2. Very simple, recursive geometry of the sphere with arbitrary precision.

Example image.
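If you want to see how the HEALPix addressing looks in code, the reference C library (chealpix) exposes the angle-to-pixel conversion directly; the snippet below is only a sketch and assumes that library is available.

    #include <stdio.h>
    #include <chealpix.h>   /* reference HEALPix C library (assumed installed) */

    int main(void)
    {
        long nside = 256;               /* resolution: 12 * nside^2 equal-area pixels */
        double theta = 1.0, phi = 2.0;  /* colatitude and longitude in radians */
        long ipix;

        /* direction on the sphere -> HEALPix pixel index (RING ordering) */
        ang2pix_ring(nside, theta, phi, &ipix);
        printf("pixel %ld of %ld\n", ipix, 12 * nside * nside);
        return 0;
    }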

Aaron Digulla
Thanks. I was very surprised and pleased by this simple trick, too.
Aaron Digulla
+1  A: 

I'm a big fan of StarconII, but unfortunately I don't remember the details of what the planet drawing looked like...

The first option is triangulating the sphere and drawing it with standard 3D polygons. This has definite weaknesses as far as verisimilitude is concerned, but it uses the available hardware acceleration and can be made to look reasonably good.

If you want to roll your own, you can rasterize it yourself. Foley, van Dam et al.'s Computer Graphics -- Principles and Practice has a chapter on Bresenham-style algorithms; you want the section on "Scan Converting Ellipses".
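To illustrate the Bresenham-style idea (the book's ellipse section generalizes this), here is a sketch of a midpoint circle scan conversion that walks the sphere's silhouette with integer arithmetic only; draw_span is a hypothetical callback that would fill or texture one row of the disc.

    /* Sketch only: midpoint (Bresenham-style) circle scan conversion.
       draw_span(x0, x1, y) is a placeholder for "fill/texture row y from x0 to x1".
       Spans near the diagonals are emitted twice; a real filler would deduplicate. */
    void scan_circle(int cx, int cy, int r, void (*draw_span)(int x0, int x1, int y))
    {
        int x = r, y = 0, err = 1 - r;
        while (x >= y) {
            draw_span(cx - x, cx + x, cy + y);
            draw_span(cx - x, cx + x, cy - y);
            draw_span(cx - y, cx + y, cy + x);
            draw_span(cx - y, cx + y, cy - x);
            ++y;
            if (err < 0) err += 2 * y + 1;
            else { --x; err += 2 * (y - x) + 1; }
        }
    }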

For the point cloud idea I suggested in earlier comments: you could avoid runtime parameterization questions by preselecting and storing the (x,y,z) coordinates of surface points instead of a 2D map. I was thinking of partially randomizing the point locations on the sphere, so that they wouldn't cause structured aliasing when transformed (forwards, backwards, whatever 8^) onto the screen. On the downside, you'd have to deal with the "fill" factor -- summing up the colors as you draw them, and dividing by the number of points. Er, also, you'd have the problem of what to do if there are no points; e.g., if you want to zoom in with extreme magnification, you'll need to do something like look for the nearest point in that case.
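A rough sketch of that forward splatting, just to show the shape of it (every name here is made up; pts would be the precomputed, jittered surface samples): rotate each point, skip the back-facing ones, accumulate color and a hit count per pixel, and divide at the end.

    #include <stdint.h>

    typedef struct { float x, y, z; uint32_t color; } SpherePoint;

    /* Sketch only: splat precomputed unit-sphere surface points forward onto a
       size-by-size screen region. accum_* and count are per-pixel buffers that
       the caller has zeroed; the final color is accum / count per channel. */
    void splat_points(const SpherePoint *pts, int n, const float rot[9], int size,
                      float *accum_r, float *accum_g, float *accum_b, int *count)
    {
        for (int i = 0; i < n; ++i) {
            /* rotate the point with a 3x3 matrix (row-major) */
            float x = rot[0]*pts[i].x + rot[1]*pts[i].y + rot[2]*pts[i].z;
            float y = rot[3]*pts[i].x + rot[4]*pts[i].y + rot[5]*pts[i].z;
            float z = rot[6]*pts[i].x + rot[7]*pts[i].y + rot[8]*pts[i].z;
            if (z < 0.0f) continue;                          /* back-facing: skip */

            int px = (int)((x * 0.5f + 0.5f) * (size - 1));  /* orthographic */
            int py = (int)((y * 0.5f + 0.5f) * (size - 1));
            int idx = py * size + px;

            accum_r[idx] += (float)((pts[i].color >> 16) & 0xFF);
            accum_g[idx] += (float)((pts[i].color >> 8) & 0xFF);
            accum_b[idx] += (float)(pts[i].color & 0xFF);
            count[idx]  += 1;
        }
    }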

About Starcon2, here are some screenshots to refresh your memory: http://sc2.sourceforge.net/screenshots/slaveshield.php http://sc2.sourceforge.net/screenshots/gasgiant.php http://sc2.sourceforge.net/screenshots/titan_pcmenu.php
avp
And about the magnification: we can use mipmaps when shrinking and lerp (bilinear interpolation) when zooming in.
avp
+1  A: 

Did you take a look at Jim Blinn's articles "How to Draw a Sphere"? I do not have access to the full articles, but it looks like what you need.

rotoglup