Organic Vectory

Planetary terrain rendering - part one

This document was originally posted on December 3, 2005 as a "development journal". The current version of ovPlanet is substantially different from the description below, and this information is only provided for educational and general-interest purposes.

Although our first terrain engine was very good at what it did, it was designed with a specific game concept in mind. In that concept the terrain covered by the game was limited, so a limited-area terrain engine was all we needed. Great care was taken to add features we really needed and which we found in no other engine at that time (around 1999): multiple terrain layers (for underground areas), smooth LOD vertex fading, texture blending, and static and dynamic LOD. The engine was ready in 2000 and was heavily optimized, which made it really fast as well.

Unfortunately, the game it was designed for never materialized and we ended up using the engine for various ad-hoc projects such as technology demos. We then discovered two major problems with the engine: first, it was so heavily optimized that it was extremely change-resistant; second, it was grid-based.

When we tried to modify the existing engine we quickly ran into problems. For purists like us, that meant just one thing...

The birth of ovPlanet

In the fall of 2004 we were discussing terrain engines and what it would take to develop the "ultimate" terrain engine. We summed up the requirements pretty quickly:

  • All "real" terrain we know of exists on planetary bodies. So it should be sphere-based.
  • The LOD system must work at run-time without precalculations. This will allow run-time modifications to the terrain.
  • A tessellation system should be used that does not have polar artifacts like the commonplace longitude/latitude approach.
  • The LOD system must be able to work with a set of criteria. These criteria were:
    • a fixed amount of available time (stop generating data when the time limit expires)
    • a maximum amount of generated triangles (stop generating data when the data set reaches a certain size)
    • a minimum desired visual quality (stop generating data when new detail has a smaller impact than the desired visual quality).
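As a sketch, these three stop criteria could drive a greedy refinement loop like the one below. The names `split` and `impact` are placeholder callbacks for illustration, not the engine's actual API:

```python
import heapq
import time

def refine(faces, split, impact, budget=None, quality=None, time_limit=None):
    # Greedy LOD refinement honoring the three stop criteria above.
    # `split(tri)` returns four child triangles; `impact(tri)` estimates
    # the visual benefit of splitting `tri`. Pass at least one stop
    # criterion, otherwise the loop never terminates.
    deadline = None if time_limit is None else time.monotonic() + time_limit
    heap = [(-impact(t), seq, t) for seq, t in enumerate(faces)]
    heapq.heapify(heap)
    seq = len(faces)
    while heap:
        if deadline is not None and time.monotonic() >= deadline:
            break                              # criterion 1: time limit hit
        if budget is not None and len(heap) + 3 > budget:
            break                              # criterion 2: triangle budget
        if quality is not None and -heap[0][0] < quality:
            break                              # criterion 3: quality reached
        _, _, tri = heapq.heappop(heap)        # most impactful candidate
        for child in split(tri):               # one triangle becomes four
            heapq.heappush(heap, (-impact(child), seq, child))
            seq += 1
    return [t for _, _, t in heap]
```

A max-heap (here a min-heap on negated impact) keeps the "best triangle to split next" available in O(log n), which is what makes the process interruptible at any moment.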

After a few design sessions and some research, we actually started working on ovPlanet in the fall of 2004. The project lay dormant from January until August 2005, at which point we finally continued with it. Currently, our triangulator has reached a mature state and work has begun on our height-data management code.

Basic Sphere #1:

This picture shows our planet in its most basic form: the icosahedron. We opted for this solid as our most basic form for the following reasons:

  • It is composed of equilateral triangles (i.e. all 20 faces are identical)
  • Although it has more "irregular" vertices than a cube (12 instead of 8), each of these vertices joins 5 faces instead of the cube's 3, so they integrate better into the overall surface.

1. Icosahedron (aka 20-faced regular solid)

Adding detail:

The process of adding detail must be as atomic as possible, so the algorithm can quit the sub-division process at any time. Luckily, this process is very simple: we just split one triangle into 4 identical smaller ones.

First sub-division step
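A minimal sketch of this split, assuming the common approach of re-projecting the new edge midpoints onto the unit sphere (whether ovPlanet does exactly this is an assumption):

```python
import math

def normalize(p):
    # Re-project a point onto the unit sphere.
    s = math.sqrt(p[0] ** 2 + p[1] ** 2 + p[2] ** 2)
    return (p[0] / s, p[1] / s, p[2] / s)

def subdivide(tri):
    # Split one triangle into four by connecting its edge midpoints.
    # Re-normalizing the midpoints makes repeated subdivision converge
    # to the sphere's surface.
    a, b, c = tri
    mid = lambda u, v: normalize(tuple((x + y) / 2.0 for x, y in zip(u, v)))
    ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
```

The split is purely local to one triangle, which is what makes the step atomic: the algorithm can stop after any individual subdivision.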

Even more detail:

If we continue this process until we reach a (potentially) visible set of 5000 triangles, we get a very good approximation of a sphere. Note that the detail exists mostly at the edges of the sphere. This is because the sub-division algorithm always tries to pick a splitting triangle whose split will have the highest visual impact in screen space. This way, there are no triangles "expended" in areas with low visual impact. Since this algorithm works in screen space, detail is added where it is most important, regardless of the distance to the viewer.
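One plausible screen-space impact metric (an assumption for illustration, not necessarily the engine's actual metric) is the area of the triangle after perspective projection:

```python
def screen_space_impact(tri, eye):
    # Estimate visual impact as the projected screen-space area of the
    # triangle. For simplicity this assumes the camera sits at `eye`
    # looking down +z with no rotation, and all vertices are in front
    # of the camera (z > 0 after translation).
    def project(p):
        x, y, z = (p[i] - eye[i] for i in range(3))
        return (x / z, y / z)  # perspective divide onto the z=1 plane
    (ax, ay), (bx, by), (cx, cy) = (project(v) for v in tri)
    # Half the absolute 2-D cross product gives the projected area.
    return abs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay)) / 2.0
```

Because the metric is computed after projection, a small nearby triangle and a large distant one can score the same, which matches the distance-independent behavior described above.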

In the case of self-shadowing terrain that is lit by a "solar" body, this technique could result in lousy shadows if applied without taking the light-source into account. The solution to that is to simply determine the visual impact from the light-source's point of view as well and use both impact values to determine the importance of a splitting candidate triangle.

2. 5000-triangle sphere

Culling Part 1 - The Horizon:

Everybody who has played any recent FPS game knows the "ever-present fog" or "circular mountain range" used to keep the viewing distance at an acceptable level. Since most grid-based terrain engines have pre-set limits on their LOD levels, they get into trouble when viewing distances grow too large.
They also have the problem of keeping their entire dataset in memory. Most flight simulators have already solved this problem, but if you ever tried taxiing your Learjet onto the freeway and into a nearby city, you were greeted with ugly ground-level graphics. In general: ground-based games do not scale up well to huge areas or viewing distances, while aerial games do not scale down well to ground level.
We believe our design does away with these limitations and allows a truly "fluent" transition from space all the way down to any desired detail level, without limits on viewing distance or data-set size (except those imposed by hardware constraints, of course).

Back to our horizon now. The first culling step is to determine where to take a "slice" out of the sphere so that any geographic feature behind the horizon that is big enough to be seen is still within the potential data set. Determining this "horizon culling plane" is not as hard as it seems, since the problem reduces to a simple 2-dimensional algorithm.
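The 2-D derivation gives the plane directly: for an eye at distance d from the center of a sphere of radius r, the horizon circle lies at distance r*r/d from the center along the center-to-eye axis. A sketch (names are illustrative, not the engine's):

```python
import math

def horizon_plane(center, eye, radius):
    # Horizon culling plane for a sphere seen from `eye`.
    # The plane is perpendicular to the center-to-eye axis, at distance
    # radius**2 / d from the center; everything on the far side of it is
    # below the geometric horizon and can be culled.
    to_eye = tuple(e - c for e, c in zip(eye, center))
    d = math.sqrt(sum(v * v for v in to_eye))
    n = tuple(v / d for v in to_eye)          # unit normal, toward the eye
    offset = radius * radius / d              # center-to-plane distance
    point = tuple(c + offset * nv for c, nv in zip(center, n))
    return n, point                           # plane: dot(n, x - point) == 0
```

As the text notes, in practice the plane would be pushed slightly outward (e.g. by using the maximum terrain height) so that tall features just behind the geometric horizon survive culling.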

The following picture shows the same sphere as above, now viewed from the side, with the horizon culling plane rendered in transparent green.

3. Horizon culling in action

Culling Part 2 - The View Frustum:

Besides culling against the horizon, culling is performed against the viewing frustum as well, with the exception that the far plane is never included in these checks, because we want an unlimited viewing distance in our engine. The following pictures show various viewpoints and their effect on the data set actually generated.
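Leaving out the far plane reduces the test to the remaining five planes. A point-level sketch, assuming each plane is stored as an inward-pointing normal plus offset (a common convention, not necessarily the engine's):

```python
def visible(point, planes):
    # Frustum test against left/right/top/bottom/near planes only; the
    # far plane is deliberately absent, so viewing distance is unlimited.
    # Each plane is (normal, d) with inward-pointing normal: a point is
    # inside when dot(normal, point) + d >= 0 for every plane.
    return all(
        n[0] * point[0] + n[1] * point[1] + n[2] * point[2] + d >= 0.0
        for n, d in planes
    )
```

A real engine would test bounding volumes rather than single points, but the five-plane structure stays the same.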

4. Looking directly at the surface at a lower altitude
5. Similar as 4, now viewed from inside the planet
6. Looking towards the horizon at low altitude
7. Same as 6, now viewed from 'above'
8. Same as 6, now viewed from the side
9. Same as 6, now viewed from the side with culling plane

Culling Part 3 - The future: Occlusion Culling

As shown in picture 6, quite a large amount of data in a "flat" sphere goes to triangles that actually cannot be seen because they are occluded by the planet itself. The horizon culling algorithm does not take occlusion into account yet, but that's a future expansion we might add.

Data and Caching

The Planet engine is currently split into two parts. The first part, which you can see in action in these screenshots, is the "triangulator". It generates data based on the criteria mentioned earlier. This data is organized in a hierarchic structure for which we implemented a caching scheme.
When the camera does not move very large distances per frame, the caching scheme can re-use most of its data. Tests have shown that caching improves the triangle yield by almost 100%, meaning the system can generate twice as much triangle data with caching as without it in the same fixed amount of time.
Height data comes from a "topography" object which we are currently building together with its editor. Topography will be responsible for looking up the correct height value when the Planet engine requests it. Details will follow when available.

Some "Real-World" Figures

To see if this engine and system could really handle a planet, and what it would take in terms of sub-division passes, we made a few calculations based on our own planet Earth. The results might interest you:

  • The Earth's mean radius is a little under 6400 km
  • The Earth's surface area is a little over 510 trillion square meters.
  • Since vertices "look up" elevation data in our model, we are mostly interested in the "area per vertex".
  • The "area per vertex" drops to 0.725 square meters after 23 sub-division steps from the "root" icosahedron.
  • If you want to get down to the square millimeter you need to perform 34 sub-division steps...
  • Without taking elevation into account, a triangle budget of 15000 triangles and a subdivision cap at 23 levels would deliver 1-meter-resolution terrain up to 40 meters from the viewer (if the viewer were looking at the horizon).
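The "area per vertex" figure is easy to reproduce using the standard vertex-count formula for a repeatedly subdivided icosahedron, 10 * 4^n + 2:

```python
EARTH_AREA_M2 = 510e12  # a little over 510 trillion square meters

def vertex_count(steps):
    # A repeatedly subdivided icosahedron has 10 * 4**n + 2 vertices
    # after n subdivision steps (12 on the raw solid at n = 0).
    return 10 * 4 ** steps + 2

def area_per_vertex(steps):
    # Average surface area (in m^2) that each vertex is responsible for.
    return EARTH_AREA_M2 / vertex_count(steps)
```

`area_per_vertex(23)` comes out at roughly 0.725 square meters, matching the figure above.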

Data, data and even more data

The bottom line of this engine is: if you have the (elevation) data available, it can be incorporated into the system. We are currently working on our elevation-data management code, which will load terrain data on demand and discard it again when not in use. This is necessary because of the sheer size of real-life planetary elevation data.

This elevation data is stored in a hierarchic structure, just like the generated vertex data. Where necessary, small terrain-generation routines can be inserted to generate "fidelity" for regions that don't have high-resolution elevation data available. But these are all future to-do items at the moment. Our first goal now is to get the elevation system up and running, together with its companion editor.
