Warning! This version of documentation is OUTDATED, as it describes an older SDK version! Please switch to the documentation for the latest SDK version.


In a broad sense, postprocessing is everything performed after the main steps of image rendering. It is intended to improve the visual quality of the rendered image by adding special effects. These effects include various filters such as blurring or sharpening, color tinting, and so on, and they may be familiar to you from photo editing.

To enable postprocessing, the image is first rendered into a special buffer called a render target. Postprocessing effects, implemented as pixel shaders, are then applied to the contents of this buffer. Finally, the image is displayed on the screen.
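The flow described above can be sketched as a per-pixel function applied to an off-screen buffer. All names here are illustrative, not part of any engine API:

```python
# A minimal sketch of the postprocessing pipeline: the scene is rendered
# into an off-screen buffer (the render target), then a per-pixel
# function (standing in for a pixel shader) transforms that buffer
# before it is presented on screen.

def render_scene():
    # Pretend "render": a 2x2 grayscale render target with values in [0, 1].
    return [[0.2, 0.8],
            [0.5, 1.0]]

def invert_shader(value):
    # A trivial postprocess: color inversion.
    return 1.0 - value

def apply_postprocess(render_target, shader):
    # Run the "pixel shader" once per pixel of the render target.
    return [[shader(px) for px in row] for row in render_target]

framebuffer = apply_postprocess(render_scene(), invert_shader)
print(framebuffer)  # the processed image is then displayed on screen
```

Real engines chain several such passes, each reading the previous pass's render target.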

Some of the most widely used effects are briefly described later in this article.


Refraction is the effect occurring when a light ray passes from one medium to another, changing its speed and direction. This phenomenon is commonly encountered in everyday life: just recall distorted glass, heat haze, or objects immersed in water. Although visually impressive, refraction is amenable to implementation and can be simulated in real time quite easily. It is applied only to transparent objects.
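The bending of the ray is governed by Snell's law; in a screen-space postprocess, the refracted direction (or a normal-map offset derived from it) perturbs the UV coordinates used to sample the scene behind the transparent surface. Below is a minimal sketch mirroring the `refract()` intrinsic found in most shader languages; it is a generic illustration, not engine-specific code:

```python
import math

# Refraction direction via Snell's law, mirroring the common shader
# refract(I, N, eta) intrinsic: I is the unit incident direction,
# N the unit surface normal, eta the ratio of refractive indices.
def refract(I, N, eta):
    cos_i = -sum(i * n for i, n in zip(I, N))
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        # Total internal reflection: no refracted ray exists.
        return (0.0, 0.0, 0.0)
    t = eta * cos_i - math.sqrt(k)
    return tuple(eta * i + t * n for i, n in zip(I, N))

# A ray hitting the surface head-on passes through without bending:
straight = refract((0.0, 0.0, -1.0), (0.0, 0.0, 1.0), 0.75)
```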

Example of no refraction

Scene as seen through stained glass without refraction

Example of refraction

Scene as seen through stained glass with refraction


Pictures taken by cameras usually have areas where the image is out of focus or blurred. Rendered images, on the other hand, are sharp and highly detailed, which makes them look unnatural. Special filters are used to imitate the effect of blur.

2D Blur

Two-dimensional Gaussian blurring is implemented as a combination of two separate postprocesses: horizontal blurring and vertical blurring. This separable approach reduces the number of calculations while achieving visually plausible results.

Radial Blur

Another type of blur, which also helps to convey motion, is called radial or zoom blur. It can be used, for example, in a car racing simulator to create an impression of high-speed movement.
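Radial blur streaks the image away from a chosen center by averaging, for each pixel, several samples taken along the line from that pixel toward the center. A minimal sketch with illustrative names and parameters:

```python
# Radial (zoom) blur: each pixel averages samples taken along the line
# from the pixel toward the blur center, producing outward streaks.

def sample(img, x, y):
    # Nearest-neighbour lookup with clamping to the image bounds.
    h, w = len(img), len(img[0])
    return img[min(max(int(round(y)), 0), h - 1)][min(max(int(round(x)), 0), w - 1)]

def radial_blur(img, cx, cy, samples=4, strength=0.5):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for s in range(samples):
                # Step from the pixel toward the center; t grows per sample.
                t = strength * s / samples
                acc += sample(img, x + (cx - x) * t, y + (cy - y) * t)
            out[y][x] = acc / samples
    return out

image = [[0.0, 0.0, 0.0],
         [0.0, 1.0, 0.0],
         [0.0, 0.0, 0.0]]
zoomed = radial_blur(image, cx=1, cy=1)
```

The pixel at the blur center never moves, which is why the focal point of the zoom stays sharp.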

Radial blur

The camera clearly appears to zoom in on the door

Depth of Field

To make the rendered image more photo-realistic, use the depth of field filter. This filter imitates camera focus, making out-of-focus objects look less sharp than they are initially rendered. The filter blends the original image with a blurred version of it, weighted by per-pixel depth values.
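The blend described above can be sketched as follows; the circle-of-confusion weighting and the parameter names (`focal_distance`, `focal_range`) are illustrative assumptions, not a specific engine's API:

```python
# Depth of field as a postprocess: blend the sharp frame with a blurred
# copy, weighting each pixel by how far its depth lies from the focal
# plane (a simplified "circle of confusion" term).

def coc(depth, focal_distance, focal_range):
    # Weight in [0, 1]: 0 = perfectly in focus, 1 = fully blurred.
    return min(abs(depth - focal_distance) / focal_range, 1.0)

def depth_of_field(sharp, blurred, depth, focal_distance=5.0, focal_range=3.0):
    out = []
    for s_row, b_row, d_row in zip(sharp, blurred, depth):
        out.append([s + (b - s) * coc(d, focal_distance, focal_range)
                    for s, b, d in zip(s_row, b_row, d_row)])
    return out

# Two pixels: one exactly at the focal plane, one far behind it.
frame = depth_of_field([[1.0, 1.0]], [[0.0, 0.0]], [[5.0, 20.0]])
```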

No depth of field

The scene is rendered without the depth of field filter

Depth of field

The scene is rendered with the depth of field filter, and only a couple of leaves are in focus

Color Transformations

Color transformations include color correction, brightness and contrast adjustment: everything you are used to when editing pictures in a 2D image editor. Usually, such transformations can be performed without high computational expense.
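Many of these transformations reduce to a per-pixel matrix multiply. The sepia coefficients below are a commonly used set, not engine-specific values; any color grade expressible as a 3x3 matrix works the same way:

```python
# Color correction as a per-pixel matrix transform. SEPIA is a widely
# used set of sepia-tone coefficients; swapping the matrix changes the
# grade without changing the code.

SEPIA = [
    [0.393, 0.769, 0.189],
    [0.349, 0.686, 0.168],
    [0.272, 0.534, 0.131],
]

def apply_color_matrix(rgb, matrix):
    r, g, b = rgb
    return tuple(
        min(row[0] * r + row[1] * g + row[2] * b, 1.0)  # clamp to [0, 1]
        for row in matrix
    )

toned = apply_color_matrix((1.0, 1.0, 1.0), SEPIA)  # white shifts toward warm tones
```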

Sepia filter

The scene is rendered with the sepia color filter

Orange filter

The scene is rendered with the orange color filter

Edge Detection

Another filter that can be used in postprocessing is edge detection. It can draw thick outlines, as in toon shading, and it can also be combined with various other filters to create artistic effects.
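The Sobel filter mentioned in the caption below estimates the horizontal and vertical intensity gradients with two 3x3 kernels and marks edges where the gradient magnitude is large. A minimal sketch:

```python
import math

# Sobel edge detection: two 3x3 kernels estimate the horizontal and
# vertical intensity gradients; their magnitude marks edge pixels.

GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # horizontal gradient
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # vertical gradient

def sobel_at(img, x, y):
    gx = gy = 0.0
    for j in range(3):
        for i in range(3):
            v = img[y + j - 1][x + i - 1]
            gx += GX[j][i] * v
            gy += GY[j][i] * v
    return math.hypot(gx, gy)

# A vertical edge: dark left columns, bright right column.
image = [[0.0, 0.0, 1.0],
         [0.0, 0.0, 1.0],
         [0.0, 0.0, 1.0]]
edge_strength = sobel_at(image, 1, 1)
```

Thresholding `edge_strength` and darkening the pixels above the threshold produces the outline effect.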

Outlined edges

The scene is rendered with the Sobel filter, which outlines the edges

Anaglyph Mode

Anaglyph images provide a stereoscopic effect when viewed with special two-color (anaglyph) glasses. In such images, two color channels, red and green/blue, are shifted relative to each other to create an impression of depth: the shift between the channels is smaller for nearer objects and larger for farther ones.
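One common way to produce this, sketched below under the assumption that the scene is rendered twice from slightly offset cameras, is to take the red channel from the left eye's image and the green/blue channels from the right eye's; the per-eye camera offset is what encodes the depth-dependent channel shift:

```python
# Anaglyph composition from two per-eye renders: red from the left
# eye's image, green and blue from the right eye's image. Pixels are
# (r, g, b) tuples in [0, 1].

def compose_anaglyph(left_rgb, right_rgb):
    out = []
    for l_row, r_row in zip(left_rgb, right_rgb):
        out.append([(l[0], r[1], r[2]) for l, r in zip(l_row, r_row)])
    return out

left  = [[(0.9, 0.1, 0.1), (0.2, 0.2, 0.2)]]
right = [[(0.8, 0.3, 0.4), (0.1, 0.5, 0.6)]]
stereo = compose_anaglyph(left, right)
```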

Anaglyph mode

The scene is rendered in anaglyph mode. You can test it if you have anaglyph glasses

Light Scattering

When atmospheric scattering is taken into account, it gradually attenuates distant objects and shifts their shade. The computations behind the scattering visualization are based on the positions of the viewer and of the objects.
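A heavily simplified distance-based model (not the full model from the paper referenced below) captures the two parts of the effect: light from the object is attenuated exponentially with distance, and scattered light is added in its place. The extinction coefficient `beta` and `sky_color` are illustrative parameters:

```python
import math

# Minimal distance-based scattering: exponential extinction of the
# object's light plus in-scattered ambient light along the view path.

def scatter(object_color, sky_color, distance, beta=0.02):
    extinction = math.exp(-beta * distance)
    return tuple(c * extinction + s * (1.0 - extinction)
                 for c, s in zip(object_color, sky_color))

near = scatter((0.2, 0.4, 0.6), (0.7, 0.8, 0.9), distance=0.0)    # unchanged
far  = scatter((0.2, 0.4, 0.6), (0.7, 0.8, 0.9), distance=1e6)   # fades to sky
```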

Simulation of atmospheric scattering


For more detailed information see the following article: Rendering Outdoor Light Scattering in Real Time (pdf).

Last update: 2017-07-03