While reading the Red Book I was quite confused to learn that OpenGL can have a maximum of 8 lights in a scene (the exact number depends on the implementation, but it should be around 8).

But I can imagine a number of situations that would need more lights, so I assume there's a trick around this in game development.

For example, you might have a very long street with 50 streetlights, or a squad of 20 people all using flashlights. How do you actually simulate those situations? There's also the problem that a light illuminates only part of the mesh, not the whole cone between the source and the object, so if the air isn't 100% clean there must be some kind of simulation for that too. How is this done while keeping the game running smoothly? (I've also read that enabling all 8 lights can kill the FPS.)

Thanks

+3  A: 

One of the tricks used in games is to simulate the light with a texture.

So in your streetlight example, the "lit" areas are actually just brighter texture images. Only the nearest lights are treated as real light sources, to get the correct effects.

There are similar approaches where semi-transparent textures, or textures with a transparent cone, are overlaid on the scene to give the same effect.
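
As a rough illustration of the texture trick, a fragment shader might simply modulate the base texture by a pre-baked lightmap. This is only a sketch: the sampler names and the second set of texture coordinates are illustrative, not from any particular engine.

    // Fragment shader sketch: "lit" areas come from a pre-baked lightmap texture.
    // baseMap, lightMap, uvBase and uvLight are illustrative names.
    uniform sampler2D baseMap;   // regular diffuse texture
    uniform sampler2D lightMap;  // baked lighting (bright spots under the streetlights)
    varying vec2 uvBase;
    varying vec2 uvLight;

    void main()
    {
        vec3 albedo = texture2D(baseMap, uvBase).rgb;
        vec3 baked  = texture2D(lightMap, uvLight).rgb;
        gl_FragColor = vec4(albedo * baked, 1.0);  // "lit" areas are simply brighter texels
    }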

Don't forget that computing shadows etc. in real time means the scene has to be rendered from the point of view of each light to calculate its intensity at any given location. So for 8 lights you are rendering the scene (or parts of it) up to 8 times before actually rendering it for display. Even if this is done on the GPU rather than the CPU, it's very expensive.
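
To see why the extra render from the light's point of view is needed, here is a rough shadow-mapping sketch of the per-fragment depth comparison. It assumes a depth texture already rendered from the light and a fragment position already projected into the light's clip space; names and the bias value are illustrative.

    // Fragment shader excerpt: one shadow-mapped light.
    // shadowMap holds the depth buffer rendered from the light's viewpoint;
    // shadowCoord is this fragment's position projected into the light's clip space.
    uniform sampler2D shadowMap;
    varying vec4 shadowCoord;

    float shadowFactor()
    {
        vec3 proj = shadowCoord.xyz / shadowCoord.w;      // perspective divide
        proj = proj * 0.5 + 0.5;                          // map to [0,1] texture space
        float closest = texture2D(shadowMap, proj.xy).r;  // nearest depth seen by the light
        float bias = 0.005;                               // avoids self-shadowing ("acne")
        return proj.z - bias > closest ? 0.0 : 1.0;       // 0 = in shadow, 1 = lit
    }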

ChrisF
A: 

Those games use the graphics card to do the rendering; that is, the rendering calculations are all done on the GPU.

I guess the Red Book is talking about the exercises, where the rendering is probably done on the CPU, not the graphics card.

Halo
No, the limit is real.
Matias Valdenegro
+4  A: 

8 lights is the limitation of the fixed GL pipeline, where you enable each of them and set its mode, parameters, etc. Now you have pixel shaders, and lighting is done within the shader. There you can use a large number of dynamic (not baked into textures) lights. You only need to supply all these lights' parameters somehow (maybe in a texture) and test how many lights your shader is able to process. Also, in the shader you can cull lights that are too weak (contributing too little to the pixel value) or simply too distant.

Update: a complex shader with branching can even generate lights procedurally (think of a long street or a Christmas tree). That can be more efficient than supplying a large number of parameters.
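
A rough sketch of what such a shader loop might look like. The array size, attenuation constants and culling threshold are illustrative placeholders; real code would pack the parameters into uniforms or a texture as described above.

    // Fragment shader sketch: accumulate many dynamic point lights per pixel,
    // skipping lights whose contribution would be negligible.
    #define MAX_LIGHTS 64
    uniform int  lightCount;
    uniform vec3 lightPos[MAX_LIGHTS];    // world-space positions
    uniform vec3 lightColor[MAX_LIGHTS];  // colour * intensity
    varying vec3 worldPos;
    varying vec3 worldNormal;

    void main()
    {
        vec3 n = normalize(worldNormal);
        vec3 result = vec3(0.05);                        // small ambient term
        for (int i = 0; i < lightCount; ++i)
        {
            vec3  toLight = lightPos[i] - worldPos;
            float dist    = length(toLight);
            float atten   = 1.0 / (1.0 + 0.5 * dist + 0.1 * dist * dist);
            if (atten < 0.01)                            // cull too-weak / too-distant lights
                continue;
            float ndotl = max(dot(n, toLight / dist), 0.0);
            result += lightColor[i] * ndotl * atten;     // diffuse contribution
        }
        gl_FragColor = vec4(result, 1.0);
    }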

alxx
Culling weak lights is not a shader-specific feature, since enabling/disabling lights can also be done outside the fixed pipeline by designing a light manager that enables only the important lights.
Luca
A light manager enables/disables lights for the whole scene. A pixel shader can do it per pixel, see the difference? By the way, the 8 lights in the traditional pipeline are per-vertex, while a shader does per-pixel lighting, which is quite a different thing.
alxx
I see the difference, but from your answer it seems that shaders do it all (per pixel); yet per-vertex lighting still looks fine and is still valid today, even with shaders.
Luca
Once you compare per-vertex and per-pixel shading, you'll hardly go back to the former... Think of bump mapping, parallax mapping, reflections and other effects.
alxx
A: 

Lighting is a very complex topic in computer graphics.

What really matters is, of course, the illumination of objects, emulating real-world lighting or whatever effect we are targeting. The lighting environment may be composed of many sources in order to approximate the effect we are trying to achieve.

OpenGL's lighting implementation provides dynamic lights: point-light abstractions that "light" (that is, give a color to) the rendered vertices, which are then used to render triangles. Each illuminated vertex receives a color contribution from every enabled light.

As you mentioned, the rendering process takes more time the more lights are enabled. To minimize this, you have different possibilities:

  • Light culling: exclude lights whose contribution is too little to change the color, determined from the light's properties (distance, cone, attenuation, point of view and obstructing objects); see the short sketch after this list.
  • Static lighting, which uses textures to emulate lighting on objects that never move.
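
For the culling point, the test often boils down to evaluating the standard OpenGL attenuation factor, 1 / (kc + kl*d + kq*d*d), against a small threshold. A minimal sketch of that helper in GLSL; the threshold value is arbitrary and the constants correspond to GL_CONSTANT_ATTENUATION, GL_LINEAR_ATTENUATION and GL_QUADRATIC_ATTENUATION.

    // OpenGL-style distance attenuation: 1 / (kc + kl*d + kq*d*d).
    float attenuation(float dist, float kc, float kl, float kq)
    {
        return 1.0 / (kc + kl * dist + kq * dist * dist);
    }

    // A light can be skipped when its best-case contribution falls below a threshold.
    bool contributes(float dist, float kc, float kl, float kq)
    {
        return attenuation(dist, kc, kl, kq) > 0.01;  // threshold chosen for illustration
    }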

OpenGL's fixed-function lighting contributes to the vertex color, which is interpolated with the other vertex colors in order to rasterize the triangle. If the geometry is composed of only a few triangles, you cannot see any light cone inside a triangle, because each fragment's color is just the interpolation of three colors (those of the three vertices).

To achieve more precise lighting, the software has to determine each fragment's (pixel's) color (per-pixel lighting) in the same way a vertex is colored by lights, but as you can imagine, there are usually many more pixels than vertices. One approach is to compute (using shaders or an OpenGL extension) each light's contribution for every pixel of the geometry during the rasterization phase; another is to determine the pixel color using deferred lighting.

Deferred lighting uses multiple screen-sized textures (corresponding to the viewport) to store the lighting parameters of each displayed pixel. In this way the light computation is executed after the geometry has been rendered, determining the light contribution once per displayed pixel instead of once per rasterized fragment of every object.
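
A compressed sketch of such a lighting pass, assuming one common (but by no means the only) G-buffer layout with position, normal and albedo stored in three screen-sized textures; sampler names are illustrative.

    // Deferred lighting pass: drawn as a full-screen quad, one fragment per displayed pixel.
    // The geometry pass has already written position, normal and albedo into the G-buffer.
    uniform sampler2D gPosition;  // world-space position per pixel
    uniform sampler2D gNormal;    // world-space normal per pixel
    uniform sampler2D gAlbedo;    // surface colour per pixel
    uniform vec3 lightPos;
    uniform vec3 lightColor;
    varying vec2 uv;              // screen-space texture coordinate

    void main()
    {
        vec3 pos    = texture2D(gPosition, uv).xyz;
        vec3 normal = normalize(texture2D(gNormal, uv).xyz);
        vec3 albedo = texture2D(gAlbedo, uv).rgb;

        vec3  toLight = lightPos - pos;
        float dist    = length(toLight);
        float ndotl   = max(dot(normal, toLight / dist), 0.0);
        float atten   = 1.0 / (1.0 + dist * dist);

        // Lighting is computed once per screen pixel, regardless of scene complexity.
        gl_FragColor = vec4(albedo * lightColor * ndotl * atten, 1.0);
    }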

Luca