
Rendering Sequence

Unigine uses a multi-pass rendering technique to produce various visual effects, including dynamic lighting, post-processing, and so on. Some passes re-render the scene geometry and some do not, so some polygons are rendered multiple times per frame. The actual number of polygons rendered per frame by the GPU depends on the polygon count of the static geometry, the effects used, culling efficiency, and many other factors.

In the editor, rendering options for each pass are set independently on a per-material basis (Materials -> States -> Passes).

When lighting opaque geometry, the rendering pipeline can combine forward rendering (light passes) with deferred lighting for selected light sources (deferred light passes).

Notice
To render the buffers onto the screen, use the 5 hotkey.
To skip a rendering pass, go to the Tools -> Render tab and check any of the Skip options.

Deferred Pass

The very first pass is the deferred pass, which writes into four RGBA 8 buffers. The data from the deferred buffers is used by shaders in the subsequent rendering passes (a schematic layout of these buffers is sketched below the list).

  • Depth buffer. Scene objects in the current field of view (between the near and far clipping planes) are rendered as pure geometric models and stored into this buffer:
    • RGB channels for depth values. They allow sorting objects relative to the camera in the correct order and culling invisible surfaces.
    • Alpha channel to flag geometry to be rendered with volumetric shadow shafts.

    Initially, the depth buffer is filled with opaque geometry. Transparent objects are not written into the depth buffer by default, though invisible ones are still successfully culled by the video card. However, sometimes transparent objects (such as water or particle systems) need to be written into the buffer later to avoid visual artifacts. For such transparent objects, the Post deferred flag is enabled in the Materials settings. See below for details.

    Deferred depth buffer

  • Diffuse colors buffer stores the pure diffuse colors of all material textures:
    • RGB channels for diffuse colors.
    • Alpha channel for glow (if the Ambient emission option is enabled for the material).

    Deferred diffuse color buffer

  • Normals buffer stores the normal vectors of the geometry, which are necessary to calculate proper lighting:
    • RGB channels for the three components of the normal vectors.
    • Alpha channel for specular power, which defines the intensity of specular highlights.

    Deferred normals buffer

  • Parallax mapping buffer stores parallax displacement values for materials with the Parallax mapping option, giving them the illusion of depth: a flat surface appears to have a bulgy relief. If there are no materials with parallax mapping, the buffer is not used.
    • RG channels for the X component.
    • BA channels for the Y component.

    Deferred parallax buffer

Notice
ATI graphics cards based on the R400 chipset do not support deferred buffer masking when the Direct3D9 API is used.
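
To summarize the layout described above, here is a minimal CPU-side sketch of the per-pixel data stored in the four deferred buffers. The struct and field names are hypothetical, not part of the Unigine API; they only mirror the channel assignments listed above.

```cpp
// Illustrative per-pixel layout of the four deferred RGBA 8 buffers.
#include <cstdint>

struct DeferredPixel {
    // Depth buffer: RGB packs the depth value, A flags volumetric shadow shafts.
    uint8_t depth_rgb[3];
    uint8_t shadow_shafts_flag;

    // Diffuse colors buffer: RGB is the diffuse color, A is glow (ambient emission).
    uint8_t diffuse_rgb[3];
    uint8_t glow;

    // Normals buffer: RGB holds the normal vector components, A the specular power.
    uint8_t normal_rgb[3];
    uint8_t specular_power;

    // Parallax buffer: RG packs the X displacement, BA packs the Y displacement.
    uint8_t parallax_x[2];
    uint8_t parallax_y[2];
};
```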

Deferred Light Pre-Passes for Opaque Objects

Unlike the forward light passes, deferred light pre-calculation passes provide simplified, moderate-quality but faster lighting of opaque objects. Deferred lights can be enabled by distance on a per-light-source basis and thus be used as light LODs.

Deferred lights are pre-calculated and stored in an RGBA 8 buffer (or RGBA 16F if HDR is used). Based on the information from the Depth and Normals buffers, each light source is rendered in a separate pass in the following way:

  • The light's bounding sphere is rendered as geometry.
  • Based on the sampled deferred data, the pixel shader computes the color contribution of the light to each pixel inside its radius in screen space. The diffuse component is stored in the RGB channels, and the specular component is monochrome, because only the alpha channel is left for it.
  • All lights are additively blended into the buffer in their own passes.
  • Deferred lights are now calculated, but they are not rendered at once. They will be rendered on the screen only during the following ambient light pass. Depending on how further rendering is performed, deferred lighting can be of two types:
    • Deferred lighting itself. It does not change the ambient pass (see details below).
    • Deferred shading for all lights. Not only the light passes are simplified, but the ambient lighting pass is also simplified. The pay-off for speed is quality. See details below.

    Notice
    To use deferred lighting, set the render_deferred console command to 1 (default). To use deferred shading for all lights, set it to 2. (For these commands to take effect, they should be followed by render_restart.)

As you can see, deferred lights do not redraw the geometry of the lit objects (while normal lights do), hence performance increases (a sketch of this per-light accumulation follows the list below). However, the flexibility and accuracy of lighting suffer:

  1. All the lit objects are rendered using uniform Phong shading with default settings.
  2. The specular component, being monochrome, is not exactly precise.
  3. Anti-aliasing is also unavailable.
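
To illustrate the idea of the pre-pass, here is a CPU-side sketch of how a single deferred light could be accumulated into the deferred light buffer, assuming simple Phong shading and a linear falloff. All types, helpers, and the falloff itself are illustrative assumptions; the actual work is done by pixel shaders on the GPU.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3 &a, const Vec3 &b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct GBufferSample { Vec3 position; Vec3 normal; float specular_power; };
struct Light { Vec3 position; Vec3 color; float radius; };
struct LightAccum { Vec3 diffuse; float specular; }; // RGB + monochrome specular (alpha)

// Additively blend the contribution of one light into the deferred light buffer.
// view_dir is the normalized direction from the surface point to the camera.
void accumulate_light(const Light &light, const GBufferSample &g, const Vec3 &view_dir,
                      LightAccum &out)
{
    Vec3 to_light = {light.position.x - g.position.x,
                     light.position.y - g.position.y,
                     light.position.z - g.position.z};
    float dist = std::sqrt(dot(to_light, to_light));
    if (dist < 1e-6f || dist > light.radius)
        return; // outside the light's bounding sphere

    Vec3 l = {to_light.x / dist, to_light.y / dist, to_light.z / dist};
    float atten = 1.0f - dist / light.radius; // simple linear falloff (an assumption)
    float ndl = dot(g.normal, l);
    float n_dot_l = std::max(0.0f, ndl);

    // Phong specular: reflect the light direction about the normal.
    Vec3 r = {2.0f * ndl * g.normal.x - l.x,
              2.0f * ndl * g.normal.y - l.y,
              2.0f * ndl * g.normal.z - l.z};
    float spec = std::pow(std::max(0.0f, dot(r, view_dir)), g.specular_power);

    // Diffuse goes to the RGB channels; specular is monochrome, stored in alpha.
    out.diffuse.x += light.color.x * n_dot_l * atten;
    out.diffuse.y += light.color.y * n_dot_l * atten;
    out.diffuse.z += light.color.z * n_dot_l * atten;
    out.specular  += spec * atten;
}
```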

Deferred lighting pre-calculation pass

Ambient Light Pass for Opaque Objects

The ambient light pass is the first one to store its color data in the screen buffer: RGBA 16F (16-bit floating point precision per channel) is required if HDR is used, otherwise RGB10 A2.

Taking the deferred buffers data into account, only opaque objects are rendered (though they may still use alpha testing). During this pass, geometry in the view frustum is rendered lit by non-directional ambient lighting, which is calculated in the following way (see also the sketch after the notice below):

ambient color * material diffuse color

In addition:

  • If present, a prebaked Ambient texture for the material is also rendered in this pass. It simulates self-shadowing in corners and cracks of objects.
  • If an environment cubemap is specified, it also contributes to this pass: all objects in the scene are lit and shaded according to the colors of the cube sides. The global environment texture is modulated (multiplied) by the material's ambient texture.
  • Light maps are rendered during this pass, if they are defined in the material settings.
Notice
If HDR is enabled, the ambient color should not be exactly zero. If it is, completely dark areas will be overexposed during the HDR pass and the contrast of the whole image will be too high.
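
As an illustration of the ambient term described above, here is a minimal sketch of how the contributions could be combined. The function and parameter names are assumptions, and the exact combination in the engine's shaders may differ.

```cpp
struct Color { float r, g, b; };

static Color mul(const Color &a, const Color &b) { return {a.r * b.r, a.g * b.g, a.b * b.b}; }
static Color add(const Color &a, const Color &b) { return {a.r + b.r, a.g + b.g, a.b + b.b}; }

Color ambient_pass_color(const Color &ambient_color, const Color &material_diffuse,
                         const Color &ambient_texture,    // prebaked self-shadowing, white if absent
                         const Color &environment_sample, // cubemap sample, black if absent
                         const Color &light_map)          // baked lighting, black if absent
{
    // Base term: ambient color * material diffuse color.
    Color result = mul(ambient_color, material_diffuse);
    result = mul(result, ambient_texture);
    // The environment contribution is modulated by the material's ambient texture.
    result = add(result, mul(environment_sample, ambient_texture));
    // Light maps contribute during the same pass.
    result = add(result, light_map);
    return result;
}
```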

Ambient pass for opaque objects. No deferred lighting.

Deferred Lighting and Deferred Shading

Deferred lights are now actually rendered in the ambient pass using the pre-calculated data from the screen-space Deferred Lights buffer:

  • Simplified Deferred lighting. Objects are now shaded on a per-material basis: the diffuse and specular components of deferred lights contribute to the material color.
  • Super-simplified Deferred shading changes the rendering of the ambient pass:
    • The ambient pass does not render any geometry at all. It simply uses the colors from the Diffuse colors buffer.
    • Ambient textures are omitted and do not contribute to the resulting pass color.
    • Light maps are also omitted.
    • All lights in the scene become deferred lights and shade objects just as with deferred lighting.

Ambient pass for opaque objects. Deferred lighting is enabled.

Lights & Shadows Passes for Opaque Objects

Next comes a series of passes, one for each light source, that re-render the lit opaque objects.

These passes are also rendered into the screen buffer using additive alpha blending, which means that the light is simply added to the color of the already rendered pixels. During these passes, the alpha test is optimized not to be repeated, as the depth values are the same as in the ambient light pass.

Before the lights are actually rendered, shadows need to be calculated. The shadowing technique directly depends on the light source type (for details, see the Shadows section):

  • Parallel-split shadow mapping is used for world lights. Depending on the number of splits set (from 1 to 4), PSSM requires the same number of passes to redraw the scene geometry. To avoid the worst-case scenario where all the geometry is rendered four times, overlapping areas are optimized to be rendered in one split only.
  • Shadow mapping is used for all the other light sources.
Shadow maps of both types store depth information with 32-bit floating point precision (D 32F textures). They are rendered from the light source's point of view to define shadow casters. In PSSM, each split is also rendered from its own viewpoint; a common way to choose the split distances is sketched below.
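
The following sketch shows a common way to choose the split distances for 1 to 4 parallel splits, blending a logarithmic and a uniform scheme. It illustrates the general PSSM idea and is not necessarily the exact scheme Unigine uses.

```cpp
#include <cmath>
#include <vector>

// Returns num_splits + 1 distances; each [splits[i], splits[i+1]] range
// is covered by its own shadow map rendered in its own pass.
std::vector<float> pssm_split_distances(float znear, float zfar, int num_splits,
                                        float lambda = 0.5f) // blend between the two schemes
{
    std::vector<float> splits(num_splits + 1);
    splits[0] = znear;
    for (int i = 1; i <= num_splits; i++) {
        float si = float(i) / float(num_splits);
        float log_split = znear * std::pow(zfar / znear, si); // dense splits near the camera
        float uniform_split = znear + (zfar - znear) * si;    // evenly spaced splits
        splits[i] = lambda * log_split + (1.0f - lambda) * uniform_split;
    }
    return splits;
}
```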

Shading

The essential advantage of multipass rendering is that each object can have unique shading. This model allows flexible shading on a per-light-source basis by creating custom shaders for each material: for example, Phong shading, anisotropic shading, and so on.

An image after the lights & shadows pass for opaque objects

Notice
Try to avoid combining complex geometry with multiple visible light sources per frame: each dynamic light produces an additional render pass, which drastically decreases the overall rendering speed.

Passes for Decals

Decals are written into the Deferred Depth buffer along with other objects, so that glow or parallax mapping information can be stored for them. They are rendered projected onto surfaces in separate passes, after the opaque geometry is lit and rendered into the screen buffer.

  1. First comes the ambient pass, in which the decals are rendered with ambient lighting, just like in the ambient pass for the opaque objects.
  2. Lighting from light sources in the scene is calculated during the next several light and shadows passes (one pass per light source). Again, those passes are similar to the light passes for the opaque objects.

Rendering Impostors

Impostors are also rendered separately from the other objects, and have their own update method. They are also written into the Deferred Depth buffer to be correctly affected by light scattering.

  1. Each frame, before the very start of the rendering sequence, impostors are baked into three textures (RGB10 A2), depending on their distance to the camera and, hence, their size. The exact number of impostor slots in these textures depends on the render_impostor console variable. The largest impostors, closest to the camera, are rendered into the texture with the fewest slots and are the first to be updated.
  2. During the rendering sequence, the baked impostors are rendered in a separate ambient pass.

Ambient Occlusion & Indirect Illumination Passes

After the opaque geometry is rendered and lit, the image still looks somewhat flat without self-shadowing. Screen-Space Directional Occlusion (SSDO) approximates the effect of global illumination in real time. It combines the following:

  • Ambient occlusion computes how much of the ambient light, incident equally from all directions, has actually reached the surface and how much of it was occluded. As a result, crevices are realistically darkened, while salient parts remain exposed to light.
  • Indirect illumination specifies how indirect light bounces off objects and brings its color to the neighboring ones. Indirect illumination makes it possible to have red-tinted shadows near red objects, while ordinary occlusion renders only colorless grey shadows.

SSDO is computed for all the objects stored in the Deferred Depth buffer. It is implemented in the following stages (see the sketch after the list):

  1. Auxiliary pass (an RGBA 32F buffer is allocated). The Depth and Normals textures are sampled and stored in a new buffer downscaled by half. This pass helps to reduce cache misses.
  2. Directional occlusion pass (another RGBA 16F buffer is allocated). It calculates the amount of incident and indirectly bounced light at each point. A certain radius around each point is sampled in screen space: within this radius, a number of random sample points are tested (the exact number depends on the shader quality). All these sampled emitter points can occlude the receiver point or bounce light onto it (and, vice versa, the receiver point can receive it), but no further than the set distance in world space.
    The directivity of normals is also taken into account. Emitter points can shade and color the receiver point either along their normals or in all directions. In the same way, the receiver can receive shading and bounced-off light that came only along its normal or from anywhere.
  3. The resulting image is written to the screen buffer using alpha blending: the screen image is multiplied by the occlusion texture. Despite the difference in resolutions, there are no stretch-induced artifacts along the edges, thanks to a depth threshold. Edges stay sharp and clear because, in areas of depth change, only the pixel of the occlusion texture whose depth value is closest to the pixel of the ambient light texture is sampled.
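
Below is a heavily simplified sketch of the directional occlusion estimation for one receiver pixel, assuming the sample positions, normals, and colors have already been reconstructed from the downscaled buffers. All types, names, and weighting choices are illustrative assumptions.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
static float dot(const Vec3 &a, const Vec3 &b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct SurfacePoint { Vec3 position; Vec3 normal; Vec3 color; };
struct SSDOResult { float occlusion; Vec3 bounce; };

// num_samples > 0 random emitter points taken within the screen-space radius.
SSDOResult ssdo_receiver(const SurfacePoint &receiver, const SurfacePoint *samples,
                         int num_samples, float max_world_distance)
{
    SSDOResult result = {0.0f, {0.0f, 0.0f, 0.0f}};
    for (int i = 0; i < num_samples; i++) {
        const SurfacePoint &emitter = samples[i];
        Vec3 d = {emitter.position.x - receiver.position.x,
                  emitter.position.y - receiver.position.y,
                  emitter.position.z - receiver.position.z};
        float dist = std::sqrt(dot(d, d));
        if (dist <= 0.0f || dist > max_world_distance)
            continue; // too far away in world space to occlude or bounce light

        Vec3 dir = {d.x / dist, d.y / dist, d.z / dist};
        // Directivity of normals: the receiver is affected along its own normal,
        // the emitter occludes and bounces light mostly along its normal.
        float receiver_weight = std::max(0.0f, dot(receiver.normal, dir));
        float emitter_weight  = std::max(0.0f, -dot(emitter.normal, dir));
        float falloff = 1.0f - dist / max_world_distance;

        result.occlusion += receiver_weight * falloff;
        result.bounce.x  += emitter.color.x * receiver_weight * emitter_weight * falloff;
        result.bounce.y  += emitter.color.y * receiver_weight * emitter_weight * falloff;
        result.bounce.z  += emitter.color.z * receiver_weight * emitter_weight * falloff;
    }
    result.occlusion = std::min(1.0f, result.occlusion / float(num_samples));
    return result;
}
```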

Auxiliary downsampling pass

A resulting image after the directional occlusion pass

Pre-Scattering Passes for Transparent Objects

When the shading of opaque objects is over, transparent objects can be rendered. They are rendered separately, as their material should be blended with the layers underneath according to its alpha blending settings.

A problem arises when transparent objects are rendered together with light scattering, which attenuates objects with distance. The Deferred Depth buffer can store only one layer of depth data, which means two scenarios are possible:

  • Transparent objects overwrite the depth value in the depth buffer (by enabling the Post deferred flag in the Materials settings). In this case, after being correctly blended into the current scene using the existing depth values, they overwrite them so that light scattering is computed for the transparent geometry. As a result, the transparent geometry is correctly attenuated, but the objects behind it are not affected by light scattering at all.
  • Otherwise, transparent objects are rendered before the light scattering pass and are not written into the depth buffer. As they use the depth values of the geometry underneath, they can turn out greatly attenuated (of a very blue color), even though they are just three feet away.

To avoid these visual artifacts, the rendering sequence for transparent objects is flexibly divided into the following stages:

  1. Objects in pre-scattering passes
  2. Light scattering
  3. Objects in post-scattering passes

After back-to-front sorting and rendering in a dedicated pass, transparent objects can also be added to the Deferred Depth buffer (if the Post deferred flag is enabled in the Materials settings). Take water, for example. If it is written immediately in the deferred pass, the underwater objects are not shaded and have no ambient occlusion. If it is not written at all, light scattering incorrectly shades the water according to the distance to the bottom. Only if it is rendered in a pre-scattering pass and has the Post deferred flag is the result accurate.

Ambient Pass

Transparent objects are rendered into the screen buffer in the same way as opaque geometry. During the ambient pass, their diffuse color is blended with the screen image according to the blending settings.

How exactly a transparent material is shaded by the ambient component during this pass depends on the Ambient pass value, which can be either Opacity or Transparent.

An image after the ambient pass for transparent objects

Lights & Shadows Passes

Again, a series of passes is performed to render lighting for transparent objects, one pass per light source. During these passes, transparent geometry can be shaded in different ways, be it Phong, anisotropic, or custom shading.

Transparent geometry is shadowed using the shadow maps already stored after the light passes for opaque geometry. Besides that, it can receive and cast colored translucent shadows. These use RGBA 16F textures (RGB channels for color data, alpha channel for depth values) and are rendered from the light source's point of view to define only the translucent shadow casters.

An image after the lights & shadows pass for transparent objects

Scattering Pass

Screen-space light scattering is calculated in this pass. Using information from the Depth buffer, it approximates the physical model (Rayleigh and Mie scattering by atmospheric particles) with a set of parameters. Light scattering determines the color of the sky and sun, simulates atmospheric effects, attenuates objects with distance, and on the whole makes outdoor scenes look more realistic (a drastically simplified sketch of this attenuation follows below). Light scattering is rendered as a full-screen quad.

Light scattering
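
As a rough illustration only, the sketch below shows distance-based attenuation in the spirit of the scattering pass: the surface color is extinguished with distance and replaced by an in-scattered color. Unigine's actual model parameterizes Rayleigh and Mie scattering; the function and its parameters here are assumptions.

```cpp
#include <cmath>

struct Color { float r, g, b; };

// Beer-Lambert style extinction along the view ray, blended with in-scattered light.
Color apply_scattering(const Color &surface, const Color &inscatter_color,
                       float distance, float extinction_coef)
{
    float transmittance = std::exp(-extinction_coef * distance);
    return {surface.r * transmittance + inscatter_color.r * (1.0f - transmittance),
            surface.g * transmittance + inscatter_color.g * (1.0f - transmittance),
            surface.b * transmittance + inscatter_color.b * (1.0f - transmittance)};
}
```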

Post-Scattering Passes for Transparent Objects

Transparent objects drawn after the scattering pass are not attenuated with distance. These can be windows, objects in enclosed spaces, etc., everything that is seen from close range. If chosen wisely, disregarding light scattering for such objects causes none of the visible artifacts that originate from the deferred depth buffer's limitation of storing only one depth value. Post-scattering objects are rendered in exactly the same way as pre-scattering ones.

Ambient Pass

This pass is the same as the previous ambient pass.

Lights & Shadows Passes

A set of passes, one per light source, as described above.

Refraction Pass

Refraction can be applied only to transparent materials (with the Refraction pass set to default). It is rendered in the following way (see the sketch after the list):

  1. The refractive surfaces are rendered into a separate buffer (RGBA 8), using the Deferred Depth buffer.
  2. The color values are displaced (according to the default refraction texture):
    • Red represents displacement based on the surface normal along the X axis.
    • Green represents displacement based on the surface normal along the Y axis.
    • The displacement is scaled by the refraction multiplier, which controls the amount of distortion.
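
A minimal sketch of the displacement step, assuming the RG channels of the refraction texture are unpacked into a signed offset and scaled by the refraction multiplier; the names are illustrative.

```cpp
struct Vec2 { float x, y; };

// Returns the screen-space coordinates to sample the already rendered image from.
Vec2 refracted_uv(Vec2 screen_uv, float tex_r, float tex_g, float refraction_scale)
{
    // Unpack [0, 1] texture values into a signed [-1, 1] displacement.
    float dx = (tex_r * 2.0f - 1.0f) * refraction_scale;
    float dy = (tex_g * 2.0f - 1.0f) * refraction_scale;
    return {screen_uv.x + dx, screen_uv.y + dy};
}
```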

Refraction offsets

Refraction produced by fire

Post-Refraction Passes for Transparent Objects

Transparent materials are affected by the Refraction pass even if they are closer to the camera than the refractive surface. If a transparent material should not be affected, it can be rendered after the refraction pass (by setting the Post refraction flag). Other than that, rendering such objects requires the same passes:

  • Ambient pass for ambient lighting.
  • Lights and Shadows passes for lighting with light sources.

Volumetric Shadows Passes

Volumetric shadows that spread in the air, forming crepuscular rays, require two passes (see the sketch after the list):

  1. Sample geometry pass. A new buffer is allocated that stores the geometry casting volumetric shadows (the Shadow shafts box should be checked for the material). Compared to the deferred buffer, it is downsized by half and holds only depth data in RGBA 8 format.
  2. Calculating shadows. In a second buffer (similar to the previously allocated one), shadows are drawn depending on the direction of the world light. No other lights are considered in the calculations. The intensity of shadowing and the length of the shadowed regions are defined in the Render settings.
  3. After these passes are over, the volumetric shadows are rendered into the screen buffer using additive alpha blending.
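
The sketch below illustrates the idea of the second pass: the downsized geometry buffer is sampled repeatedly from each pixel towards the screen-space position of the world light, with the samples fading out along the way. The callback and parameters are hypothetical, not Unigine API.

```cpp
#include <functional>

struct Vec2 { float x, y; };

// sample_occluder returns 1.0 where geometry blocks the light, 0.0 elsewhere.
float shadow_shaft_intensity(Vec2 pixel_uv, Vec2 light_uv, int num_samples, float decay,
                             const std::function<float(Vec2)> &sample_occluder)
{
    Vec2 step = {(light_uv.x - pixel_uv.x) / float(num_samples),
                 (light_uv.y - pixel_uv.y) / float(num_samples)};
    float intensity = 0.0f;
    float weight = 1.0f;
    Vec2 uv = pixel_uv;
    for (int i = 0; i < num_samples; i++) {
        intensity += sample_occluder(uv) * weight;
        weight *= decay; // samples farther along the ray fade out
        uv.x += step.x;
        uv.y += step.y;
    }
    return intensity / float(num_samples);
}
```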

Sampling geometry: black objects will cast volumetric shadows

Calculating shadows

Volumetric shadows creating the crepuscular rays effect

Post-processing

Post-processing completes the rendering sequence. During this stage, various effects are rendered, and the final image is put together with the screen texture in the composite shader.

HDR

HDR is rendered in the following passes (all using RGBA 16F buffers); a sketch of the tone-mapping step follows the list.

  • During the first pass, the screen buffer is downsized by half so that it can be processed faster in the next passes.
  • In order to correctly transform the HDR image in the screen buffer into an LDR image while preserving details in both bright and dark areas of the scene, the average luminance value is calculated using the chosen algorithm (logarithmic or quadratic). This process is called tone mapping. The average of four neighboring pixels is computed to form a new downsized texture. It takes five successive passes to get a one-pixel texture containing the average scene luminance, which serves as the reference for tone mapping. This value, multiplied by the user-set exposure and clipped between the minimum and maximum luminance, allows scene radiance values to be mapped to a displayable output range.
    It should be noted, though, that the scene luminance is calculated with a three-frame delay. That, plus the set adaptation time, determines how long it takes the camera to adapt to a change in lighting conditions.
  • To create the HDR illumination effect, the next pass extracts the bright areas that exceed the brightness threshold from the screen buffer. This pass also performs downsampling.
  • Then the extracted areas are blurred using a smart two-pass filter that samples vertically and horizontally, which increases the sampling rate.
  • The resulting HDR image is passed to the composite shader to be additively blended with the screen buffer image.
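
Here is a simplified CPU-side sketch of the tone-mapping idea described above: a logarithmic average of the scene luminance is reduced to a single value and then, together with the exposure and luminance limits, used to scale HDR colors into a displayable range. The exact averaging and mapping curve used by the engine may differ.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Color { float r, g, b; };

static float luminance(const Color &c) { return 0.299f * c.r + 0.587f * c.g + 0.114f * c.b; }

// Logarithmic average luminance of the (non-empty) downsized screen buffer.
float average_luminance(const std::vector<Color> &pixels)
{
    double sum = 0.0;
    for (const Color &c : pixels)
        sum += std::log(1e-4 + luminance(c));
    return float(std::exp(sum / double(pixels.size())));
}

Color tone_map(const Color &hdr, float avg_luminance, float exposure,
               float min_luminance, float max_luminance)
{
    // The average luminance, clipped to the configured range, serves as the reference;
    // exposure scales the result, which is then clamped to the LDR range.
    float reference = std::clamp(avg_luminance, min_luminance, max_luminance);
    float scale = exposure / reference;
    return {std::min(1.0f, hdr.r * scale),
            std::min(1.0f, hdr.g * scale),
            std::min(1.0f, hdr.b * scale)};
}
```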

Notice
Due to the usage of a logarithmic function, the HDR image should not contain areas filled with pure black, meaning the ambient color should not equal 0.
Using HDR and anti-aliasing at the same time is supported only by DirectX10-compatible graphics cards and higher, as well as by cards based on ATI R500 (only in Direct3D9).

Here is what tone mapping for HDR looks like:

Source downsized image used to compute average value of luminance

An image with HDR corrected according to the average value of luminance

To create special HDR effects, additional filters can be enabled. All of them operate on the buffer with the already extracted bright areas (see the sketch after the list).

  • Cross flares are rendered in several passes: fading-out copies of the glaring objects are added with small offsets in the direction of the flare. The process continues until the defined length is reached and is repeated for each flare shaft.
  • Lens flares require only one pass, during which several copies of the glaring objects are added to create a flare coming from the center of the screen.
  • Light shafts also require one pass: copies of the glaring objects are drawn in the direction of the world light until the set length is reached.
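
The sketch below shows the "fading copies with offsets" idea shared by these filters: copies of the bright-areas buffer are accumulated along a direction with decreasing weight until the set length is reached. The callback and parameters are hypothetical.

```cpp
#include <functional>

struct Vec2 { float x, y; };
struct Color { float r, g, b; };

Color flare_shaft(Vec2 uv, Vec2 shaft_dir, int num_copies, float step, float fade,
                  const std::function<Color(Vec2)> &sample_bright_areas)
{
    Color result = {0.0f, 0.0f, 0.0f};
    float weight = 1.0f;
    for (int i = 1; i <= num_copies; i++) {
        // Each copy is offset a bit further along the flare direction.
        Vec2 offset_uv = {uv.x + shaft_dir.x * step * float(i),
                          uv.y + shaft_dir.y * step * float(i)};
        Color c = sample_bright_areas(offset_uv);
        result.r += c.r * weight;
        result.g += c.g * weight;
        result.b += c.b * weight;
        weight *= fade; // farther copies fade out
    }
    return result;
}
```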

A buffer with extracted bright areas and HDR effects

An image with HDR effects

Motion Blur

The motion blur effect does the following:

  • Blurs the scene when the camera moves.
  • Blurs objects with physical bodies when they are in motion. Only those physical objects whose materials are rendered into a separate Velocity buffer are considered. This buffer stores the velocity vectors the bodies had in the previous frame, and the objects are blurred accordingly in the current one (see the sketch after this list).
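
A minimal sketch of velocity-based blurring, assuming the velocity vector for the pixel has already been fetched from the Velocity buffer; the sampler callback and the sample distribution are illustrative assumptions.

```cpp
#include <functional>

struct Vec2 { float x, y; };
struct Color { float r, g, b; };

// num_samples must be at least 2; samples are spread along the velocity vector,
// centered on the pixel, and averaged.
Color motion_blur(Vec2 uv, Vec2 velocity, int num_samples,
                  const std::function<Color(Vec2)> &sample_screen)
{
    Color sum = {0.0f, 0.0f, 0.0f};
    for (int i = 0; i < num_samples; i++) {
        float t = float(i) / float(num_samples - 1) - 0.5f;
        Vec2 sample_uv = {uv.x + velocity.x * t, uv.y + velocity.y * t};
        Color c = sample_screen(sample_uv);
        sum.r += c.r;
        sum.g += c.g;
        sum.b += c.b;
    }
    return {sum.r / float(num_samples), sum.g / float(num_samples), sum.b / float(num_samples)};
}
```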

Motion blur for the camera

Depth of field

The Depth of field effect can be of two types, used depending on the render_dof console variable value:

  • Gaussian blur blurs an image by a standard Gaussian function.
  • Bokeh effect modulates a polygonal shape of the camera aperture with core\textures\render_dof_iris.dds texture.

Rendering the DOF effect takes the following passes (a sketch of the blur-amount computation follows the list):

  • The first DOF pass downsizes the screen buffer.
  • Then, starting from the point of focus (i.e. the focal distance), the distance where DOF takes effect (things are blurred until they are completely out of focus) is measured in both directions. The distance measured towards the camera is the Near blur range; the distance measured away from the camera is the Far blur range. Their blur and focal power values (set separately for the near and far ranges) determine how smoothly the interpolation between the non-blurred and blurred areas is performed.
    Out-of-focus areas are blurred using a smart algorithm based on depth values from a deferred buffer. It uses an optimized separable filter that is performed sequentially in two passes: horizontal and vertical blurring.
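
The sketch below shows one way the blur amount could be derived from a pixel's depth relative to the focal distance, using the near and far blur ranges described above; the parameter names and the linear ramp are assumptions, not the engine's exact formula.

```cpp
#include <algorithm>

// Returns 0.0 for a pixel in focus and 1.0 for a fully blurred pixel.
float dof_blur_amount(float depth, float focal_distance,
                      float near_blur_range, float far_blur_range)
{
    if (depth < focal_distance) {
        // In front of the focus point: blur grows towards the camera.
        float d = focal_distance - depth;
        return std::min(1.0f, d / near_blur_range);
    }
    // Behind the focus point: blur grows away from the camera.
    float d = depth - focal_distance;
    return std::min(1.0f, d / far_blur_range);
}
```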

Glow

To create a glow effect, objects exceeding the brightness threshold are extracted into three downsized textures and brightened by the small, medium, and large glow multipliers during the first pass. In the next two passes they are blurred with a separable filter (horizontal and vertical). The resulting glow image is passed to the composite shader to be additively blended with the screen buffer image.

Color correction

Color correction allows adjusting brightness, contrast, and saturation. In addition, an arbitrary color transformation can be done by means of LUTs (look-up tables), special 3D textures that set the correspondence between input and output colors.
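
Here is a minimal sketch of the LUT idea: the input color indexes a small 3D texture and the stored texel becomes the output color. Nearest-neighbor lookup is used for brevity (a real implementation interpolates); the types are illustrative.

```cpp
#include <algorithm>
#include <vector>

struct Color { float r, g, b; };

struct Lut3D {
    int size;                  // e.g. 16 or 32 entries per axis
    std::vector<Color> texels; // size * size * size entries

    Color lookup(const Color &in) const {
        // Map each [0, 1] channel to the nearest LUT index.
        auto index = [&](float v) {
            int i = int(std::clamp(v, 0.0f, 1.0f) * float(size - 1) + 0.5f);
            return std::min(i, size - 1);
        };
        int r = index(in.r), g = index(in.g), b = index(in.b);
        return texels[(b * size + g) * size + r];
    }
};
```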

An example of a LUT texture (rotated by 90 degrees and resized)

Blue tone

Auxiliary Pass

The auxiliary pass allows you to write the contour of an object with the specified material into an auxiliary color buffer (if enabled). After that, you can apply a postprocess material to it. Postprocess materials are part of the Unigine standard material library and help render special effects for a surface (for example, blurring, subsurface scattering, etc.).

You can also write your own shaders and apply your custom postprocess using this buffer.

Auxiliary buffer with contours

An auxiliary color can also be used in an overlay color mode (used for flat coloring of meshes in the main viewport). Depending on the alpha channel of the auxiliary color, objects are rendered fully colored or semi-colored:

Different auxiliary colors and transparency

Auxiliary color rendered onto the screen