What is the best mechanism for handling large scale structures and scenes?
Examples being a continent with scale cities and geography, or infinity universe style planetary transitions.
Taken from http://www.gamedev.net/reference/business/features/spotlightFB/
Q: Space. It’s big. It’s REALLY big. It must be daunting to code in such huge scales – how do you go about such a thing? Do you use any special data structures or measurement units to help?
A: As you can imagine, working with a single type of unit doesn’t work. I use a hierarchical system of units. At the galactic level, light-year (LY) units are utilized. At the star system level, the kilometer is the base unit, and coordinates are represented by double-precision floating point numbers. At render time, vertices are generated as single-precision floats, but translated into camera space to minimize the loss of precision due to large numbers.
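In other words: keep the authoritative positions in doubles, do the subtraction against the camera in double precision, and only then narrow the small difference down to floats for rendering. A minimal sketch of that idea (the names WorldPos and toCameraSpace are mine, not from the interview):
struct WorldPos {  // star-system level: kilometers, double precision
    double x, y, z;
};

struct RenderPos { // what actually gets rendered: single-precision floats
    float x, y, z;
};

// Subtract in double precision first, then narrow. The difference between
// two nearby positions is small, so very little precision is lost in the cast.
RenderPos toCameraSpace(const WorldPos& object, const WorldPos& camera) {
    return RenderPos{ static_cast<float>(object.x - camera.x),
                      static_cast<float>(object.y - camera.y),
                      static_cast<float>(object.z - camera.z) };
}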
The biggest problem with floating-point representations of positions at large scales is that you rapidly lose precision the further you get from the origin.
To remedy this, you need to express all positions relative to something other than the global origin. The easiest way is to partition the world into a grid and store the position of each entity something like this:
struct Position {
    int   kilometers[3]; // x, y and z index of the kilometer-sized grid cell
    float offset[3];     // x, y and z offset within that cell, in meters
};
The position of the camera is also stored like this, and when it's time to render you do something like this:
void getRelativePosition(float& x, float& y, float& z,
                         const Position& origin, const Position& object) {
    // Convert the integer kilometer difference to meters, then add the small
    // float offsets; both differences are small, so precision is preserved.
    x = (object.kilometers[0] - origin.kilometers[0]) * 1000.0f + (object.offset[0] - origin.offset[0]);
    y = (object.kilometers[1] - origin.kilometers[1]) * 1000.0f + (object.offset[1] - origin.offset[1]);
    z = (object.kilometers[2] - origin.kilometers[2]) * 1000.0f + (object.offset[2] - origin.offset[2]);
}
//Somewhere later
float x, y, z;
getRelativePosition(x, y, z, camera.position(), object.position());
renderMesh(x, y, z, object.mesh());
(For simplicity, I've ignored the orientation of the camera and objects in this example, since there are no special precision problems associated with it.)
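One detail the example glosses over: when an entity moves, its meter offset can drift outside its kilometer cell, and the float part grows again. A small normalization step (my addition, not part of the original scheme) fixes that after each update:
#include <cmath>

// Fold whole kilometers back into the integer part so the float offset
// stays in [0, 1000) and keeps its precision.
void normalizePosition(Position& p) {
    for (int i = 0; i < 3; ++i) {
        int wholeKm = static_cast<int>(std::floor(p.offset[i] / 1000.0f));
        p.kilometers[i] += wholeKm;
        p.offset[i]     -= wholeKm * 1000.0f;
    }
}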
If you're working with a continuous world on a galactic scale, you can replace the kilometers field with a long long (64 bits), giving you an effective range of about 1.8 million lightyears.
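For example (GalacticPosition is just my name for the 64-bit variant):
#include <cstdint>

struct GalacticPosition {
    int64_t kilometers[3]; // roughly +/- 9.2e18 km, close to a million light-years each way
    float   offset[3];     // offset within the current kilometer, in meters
};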
EDIT: To use this for continuous geometry such as terrain, split the terrain into one-square-kilometer chunks; the vertex coordinates within each chunk should be in the range [0, 1000] (meters).
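Roughly, each chunk then only needs to know which kilometer cell it sits in, and you compute the chunk's origin relative to the camera once per chunk (TerrainChunk and drawMesh below are placeholder names of mine):
struct TerrainChunk {
    int cellX, cellZ;   // which one-kilometer grid cell this chunk covers
    // ...vertex data, with each vertex coordinate in the [0, 1000] meter range
};

void renderChunk(const TerrainChunk& chunk, const Position& camera) {
    Position chunkOrigin = {};               // offset is zero at the chunk corner
    chunkOrigin.kilometers[0] = chunk.cellX;
    chunkOrigin.kilometers[2] = chunk.cellZ;

    float x, y, z;
    getRelativePosition(x, y, z, camera, chunkOrigin);
    // Use (x, y, z) as the model translation; the vertices themselves stay
    // small floats and never lose precision.
    // drawMesh(chunk, x, y, z);
}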
Also, you could change the getRelativePosition function above so it returns a bool, and have it return false if the difference in kilometers is larger than some threshold (say, the distance to your far clip plane).
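Something along these lines (maxKilometers is my name for that threshold, expressed in whole kilometers):
#include <cstdlib> // std::abs

bool getRelativePosition(float& x, float& y, float& z,
                         const Position& origin, const Position& object,
                         int maxKilometers) {
    for (int i = 0; i < 3; ++i) {
        if (std::abs(object.kilometers[i] - origin.kilometers[i]) > maxKilometers)
            return false; // farther than the cull distance, don't bother rendering
    }
    x = (object.kilometers[0] - origin.kilometers[0]) * 1000.0f + (object.offset[0] - origin.offset[0]);
    y = (object.kilometers[1] - origin.kilometers[1]) * 1000.0f + (object.offset[1] - origin.offset[1]);
    z = (object.kilometers[2] - origin.kilometers[2]) * 1000.0f + (object.offset[2] - origin.offset[2]);
    return true;
}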
It might be worth looking into the tech behind Bing Maps Deep Zoom.
Similar tech is used by Google Earth, which lets you go from planet view all the way down to street view pretty smoothly. There is obviously a lot of resolution swapping going on as you zoom in and out.