What's the difference between a typical HDR rendering pipeline and a normal rendering pipeline? (e.g. bpp differences? Some additional post-processing step?)

+4  A: 

HDR rendering requires the use of floating-point buffers, so there is a difference in bytes per pixel: an RGBA8 buffer uses 4 bytes per pixel, while an RGBA16F buffer uses 8 bytes per pixel.
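For example (a minimal OpenGL sketch, assuming a GL 3.0+ context with a loader such as GLAD already initialized; the function name is illustrative), allocating a half-float render target looks like this:

    #include <glad/glad.h>  // assumed loader; GLEW etc. work the same way

    // Create an RGBA16F texture and attach it to a framebuffer so the
    // scene can be rendered with values outside [0,1].
    GLuint CreateHdrFramebuffer(int width, int height)
    {
        GLuint tex = 0, fbo = 0;

        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        // GL_RGBA16F -> 8 bytes per pixel, versus 4 for GL_RGBA8.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0,
                     GL_RGBA, GL_FLOAT, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex, 0);
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        return fbo;
    }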

And when displaying the floating-point buffer, you need a post-processing step so the signal makes sense: since a floating-point value can fall outside the [0,1] range, you convert the FP buffer to a normal [0,1] RGBA8 buffer, and that is done with a tone mapping operator.
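As a concrete illustration, here is the classic Reinhard operator applied per channel (a CPU-side C++ sketch; in a real pipeline this runs in a fragment shader, and the 2.2 gamma encode is an assumption):

    #include <algorithm>
    #include <cmath>

    // Reinhard tone mapping: maps an HDR value in [0, inf) into [0,1),
    // then gamma-encodes it for an 8-bit display target.
    float ToneMapChannel(float hdr)
    {
        float ldr = hdr / (1.0f + hdr);     // compresses highlights smoothly
        return std::pow(ldr, 1.0f / 2.2f);  // assumed display gamma of 2.2
    }

    // Quantize the tone-mapped value to one RGBA8 channel.
    unsigned char ToByte(float ldr)
    {
        return static_cast<unsigned char>(
            std::clamp(ldr, 0.0f, 1.0f) * 255.0f + 0.5f);
    }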

Matias Valdenegro
+1  A: 

The pipelines are fairly similar. One thing to bear in mind is that you can now use 3 floats (i.e. RGB) for representing light sources. This allows you to make a light source SIGNIFICANTLY brighter or dimmer than the [0,1] range would otherwise allow.
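For instance (a sketch; the struct and values are purely illustrative), an HDR light simply stores unclamped floats:

    // Nothing limits an HDR light to [0,1] per channel.
    struct Light
    {
        float r, g, b;
    };

    Light sun = { 10.0f, 9.5f, 8.0f };  // far "brighter" than any LDR white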

As already mentioned, yes, you need to use a floating-point render target.

Do not call saturate() in your lighting shaders, as this clamps you back to the [0,1] range.

There are 2 ways to post-process the image. One is to simply compress the range back into 0 to 255 before writing to the backbuffer. That would be entirely pointless, however, as it loses exactly what HDR buys you. The better thing to do is to write an exposure filter, as sketched below.
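A simple exposure operator might look like this (a sketch; the exposure constant is an assumed tunable, often driven by the average scene luminance):

    #include <cmath>

    // Exposure filter: exposure acts like a camera's shutter/aperture,
    // scaling scene values before compressing [0, inf) into [0,1).
    float ApplyExposure(float hdr, float exposure)
    {
        return 1.0f - std::exp(-hdr * exposure);
    }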

It's also worth noting that most people will apply camera effects to the saturated sections of models after exposure filtering. The most common form is the "bloom filter" that we've all seen overused in films. There are, however, loads of different filters you can use to produce nice effects. Search for the "streak filter" for one very useful effect to combine with blooming.
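For reference, the first stage of a typical bloom filter is a bright-pass that keeps only the over-bright pixels, which are then blurred and added back over the scene (a sketch; the threshold of 1.0 is an assumption):

    #include <algorithm>

    // Bright-pass: discard everything below the threshold so only the
    // saturated (over-bright) parts of the image feed the blur.
    float BrightPass(float hdr, float threshold = 1.0f)
    {
        return std::max(hdr - threshold, 0.0f);
    }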

Loads of good general info here.

Goz