Unigine has a combination of a full deferred renderer with forward rendering techniques:
- All opaque (non-transparent) geometry is rendered in the deferred pass.
- Transparent geometry is rendered in the forward pass.
Before the deferred rendering pass, the engine performs all the necessary calculations. It determines which nodes should be rendered and in what order, how many lights there are in the scene, and so on.
This article introduces the detailed Unigine rendering pipeline, including all auxiliary stages, some of which can be disabled by the user.
Rendering Pipeline Overview#
A frame is a complex structure that involves many different calculations and texture renderings.
The rendering pipeline below shows all stages performed during the composition of a single frame. Some stages can be switched off, if necessary: the decal rendering stage, the auxiliary pass, post materials, etc.
During rendering, the engine initializes the necessary texture buffers, and most of them are not cleared completely (the engine only cuts out sky areas). Some passes simply reuse these already created buffers. Such optimizations help to increase rendering performance.
The schematic representation of the rendering pipeline displays the main stages of the rendering sequence.
The frame rendering process is illustrated by screenshots from a simple test scene with a variety of materials (transparent and opaque), environment probes, decals, planar reflections, and different light sources.
The Common pass is the very first step of the rendering sequence. In this pass, environment LUTs and the Environment cubemap are rendered. These textures are rendered only once before all the geometry in the scene and will be used further for the final screen composition: for scattering rendering and non-dynamic reflections.
The Environment cubemap is composed of six faces (256x256 px each).
Schematic representation of the 1x256 LUT textures.
The LUT textures are 1x256 px textures taken from the current state of the LUT textures loaded for scattering.
During this step, nothing is rendered at all. This stage prepares the scene before geometry rendering: the engine analyzes the visible part of the scene to know which objects should be drawn and which shouldn't. The engine does the following:
- Checks intersections of objects with the frustum (frustum culling) and finds the objects that are present in the frame.
- Finds all light sources in the scene.
- Finds all surfaces that should be rendered (taking into account occluder objects).
- Sorts Lights and Environment Probes by size (from large to small).
- Sorts Decal objects by their rendering order.
- Sets the Occlusion Query flags to objects that shouldn't be rendered.
- Surfaces Batching. All opaque surfaces are grouped and rendered in batches according to the materials assigned, thus decreasing the number of DIP calls and, therefore, increasing performance.
Occlusion Query works with a delay: it is a costly operation that takes considerable time. The optimization is as follows: the engine checks the scene asynchronously and sets the "should not be rendered" flag during the Occlusion Query stage. That is why the occlusion query runs at least 1 frame late. However, at a high FPS value, the object is hidden without noticeable "hard switching".
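The one-frame delay can be sketched as follows. This is a simplified illustration of the asynchronous readback pattern, not engine code; `OcclusionQuerySim` and its methods are hypothetical names:

```python
# Sketch of the one-frame occlusion-query delay described above.
# Query results issued in frame N are only read back in frame N+1,
# so a newly occluded object is still drawn for one extra frame.

class OcclusionQuerySim:
    def __init__(self):
        self.pending = {}  # object id -> visibility result issued last frame

    def frame(self, visibility_now):
        """visibility_now: dict of object id -> bool (true GPU visibility).
        Returns the set of objects the renderer draws this frame, based on
        last frame's query results (asynchronous readback)."""
        drawn = {obj for obj, visible in self.pending.items() if visible}
        # objects never queried yet are drawn conservatively
        drawn |= {obj for obj in visibility_now if obj not in self.pending}
        # issue this frame's queries; results become available next frame
        self.pending = dict(visibility_now)
        return drawn
```

Note that object "b" is still drawn in the first frame (no query result exists yet) and disappears only in the second frame, mirroring the at-least-one-frame latency described above.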
After calculations, shadow maps are rendered. The way of rendering these maps depends on the type of a light source. Generated shadow maps are used in the following rendering stages.
The shadow map of the world light source is rendered into a texture divided into four parts (since 4 is the maximum number of shadow cascades).
Shadow map of the world light source
Shadows near the camera have a higher resolution, while shadows far away from the camera have less detail.
Rectangular shadow cascades on the final scene
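The way cascades concentrate resolution near the camera can be sketched with a split-distance computation. The scheme below blends uniform and logarithmic splits; the `blend` parameter and the whole formula are illustrative assumptions, not engine settings:

```python
# Sketch: split distances for a cascaded shadow map.
# Near cascades cover a short depth range and therefore get
# higher effective shadow-map resolution per world unit.

def cascade_splits(near, far, count=4, blend=0.75):
    """Return `count` far-plane distances, one per cascade, blending
    logarithmic and uniform split schemes (illustrative parameters)."""
    splits = []
    for i in range(1, count + 1):
        t = i / count
        log_d = near * (far / near) ** t        # logarithmic split
        lin_d = near + (far - near) * t         # uniform split
        splits.append(blend * log_d + (1.0 - blend) * lin_d)
    return splits
```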
Omni light sources use perspective projection for shadow maps. Each of these light sources uses 6 cameras that generate shadow maps.
Keep in mind that using a lot of Omni lights can significantly decrease performance.
Shadow maps for Proj light sources are rendered only once because only 1 camera is used.
During this stage, the engine renders the textures necessary for dynamic reflections (i.e. updated each frame): cubemaps for environment probes and 2D textures for planar reflections. To render these textures, the engine goes through the whole rendering pipeline once more (but skips some of the passes described below).
Cubemaps are generated by using 6 cameras, while planar reflections use only 1 camera. Both render the final image (one more rendering cycle) with some differences:
- Textures use the shadow maps that were already generated in the previous shadow map rendering pass.
- All post effects are ignored.
- The final image for dynamic reflections is rendered without TAA.
The final texture of dynamic reflections
Deferred Pass for Opaque Objects#
This pass is one of the key rendering passes: all opaque geometry is rendered here, object by object.
At the end of this pass, the engine has a color texture containing all opaque geometry and scattering.
Native Depth Pre-Pass#
In this pre-pass, the GPU performs a depth test for surface culling. The pre-pass is performed only for alpha-test materials and complex materials (for example, ones with many layers).
The depth buffer stores native depth values z/w.
The depth data is stored in a depth buffer texture of the D32F format
During this pass, the GPU can discard a pixel (shading for this pixel won't be calculated).
Filling the GBuffer#
During this step of rendering, the engine fills the Gbuffer (Geometry buffer) for shading.
The depth buffer contains scene objects in the current field of view (found between the near and far clipping planes). Objects are rendered as pure geometric models and stored into this buffer.
The engine also uses native depth.
The depth data is stored in a depth buffer texture of the D32F format
The albedo colors buffer stores pure albedo colors of all material textures:
The texture format is RGBA8 (RT0):
The shading buffer stores shading information on objects in the scene.
The texture format is RGBA8 (RT1):
Translucency is used to simulate light shining through objects (leaves, for example).
The normal buffer stores normal vectors for each vertex of the geometry, which are required for calculating proper lighting.
The texture format is RGBA8 (RT2):
The velocity buffer stores information about the displacement of pixels per frame. When the image is still, the buffer is filled with zero (black) values. These values are necessary to make the temporal anti-aliasing (TAA) and motion blur work correctly.
The texture format is RG16F (RT3):
You can disable the buffer, if you don't need velocity-related effects: motion blur, TAA, etc.
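Conceptually, a velocity buffer entry is the screen-space displacement of the same surface point between the previous and current frame. A minimal sketch, assuming clip-space positions are available for both frames (the tuple representation is illustrative):

```python
# Sketch: per-pixel velocity as stored in an RG16F velocity buffer.
# A still image yields zero velocity, which is why the buffer is
# black when nothing moves.

def pixel_velocity(clip_now, clip_prev):
    """clip_* are (x, y, w) clip-space coordinates of the same surface
    point in the current and previous frame. Returns the NDC-space
    displacement used by TAA and motion blur."""
    x0, y0, w0 = clip_prev
    x1, y1, w1 = clip_now
    # perspective divide to NDC, then take the frame-to-frame delta
    return (x1 / w1 - x0 / w0, y1 / w1 - y0 / w0)
```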
The material mask texture. The first 8 bits are reserved for post effects; the remaining bits can be used for materials.
The texture format is R32U (RT4):
The lightmap buffer is used to add baked light to the scene.
The texture format is RG11B10F (RT5):
During the Occlusion query step, the engine excludes objects (geometry, lights, and decals) that have the Culled by occlusion query flag enabled from the rendering sequence.
Occlusion query is a computation-heavy operation (i.e. it significantly affects performance) and is executed asynchronously. The engine checks object bounds and sets the flag for the renderer, if necessary. In the next frame, during the Calculations stage, the renderer excludes the object from the rendering sequence.
During this step of the deferred pass for opaque objects, the engine renders decals. Decals have already been sorted during the Calculations stage, and the engine renders them in that order.
Decals are rendered by using alpha blending, but since the A channels of the normal and shading textures are occupied by the microfiber and roughness values, the engine copies the necessary values into new textures and then performs alpha blending.
New Normal texture (uncompressed) that is used for alpha blending
New Roughness texture that stores the Roughness value in the R channel and the Microfiber value in the G channel
This rendering stage is responsible for the shoreline wetness effect of the Global Water object. This step behaves like the previous decal rendering step, so it can be called Post-Decal Rendering.
The effect is performed by using Compute Shaders and Unordered Access Textures techniques.
Linear Depth, Color Old Reprojection and Unpacked Normals#
To perform screen-space effects, the engine renders Linear Depth, Color Old (previous frame) Reprojection, and Normal Unpack (screen-space normals) textures.
The linear depth texture is generated by using the G-buffer depth. Mips of the linear depth, color old, and normal unpack textures are also created.
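A linear depth value can be reconstructed from the native z/w depth stored in the G-buffer. A minimal sketch, assuming a conventional [0, 1] perspective depth mapping (the engine's exact projection convention may differ):

```python
# Sketch: recovering linear view-space depth from native depth.
# Native depth d = (far / (far - near)) * (1 - near / z) maps
# z = near to d = 0 and z = far to d = 1; below is the inverse.

def linearize_depth(d, near, far):
    """Convert a native depth value d in [0, 1] back to linear
    view-space distance (illustrative depth convention)."""
    return (near * far) / (far - d * (far - near))
```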
During this stage, the Screen-Space Ray-Traced Global Illumination post-effect is rendered, during which the engine fills the following textures:
- SSAO, if the render_ssao and render_ssao_ray_tracing console variables are set to 1, defining whether SSAO and SSAO Ray Tracing should be performed.
- SSGI, if the render_ssgi console variable is set to 1, defining whether SSGI should be performed.
- Bent Normal, if the render_bent_normal_ray_tracing console variable is set to 1, defining whether Bent Normal should be performed.
Bent Normal is rendered in the RGBA8 texture.
Bent Normal may also have its own TAA, depending on the following:
- If noise is enabled and SSRTGI is rendered in full resolution, TAA is not applied during this stage.
- If noise is enabled and SSRTGI is rendered in half resolution or quarter resolution, TAA is applied.
New Ray-Traced SSAO texture
New Bent Normal texture
Deferred Light Pass#
In the deferred light pass, the engine creates a buffer (Deferred Light Map), which is a 2D array of RG11B10F textures. The first layer of the array is for diffuse light, the second is for specular (which can be switched off for VR).
Light sources use already generated shadow maps.
All light sources are rendered one by one into a single 2D array to be applied during the Deferred Composite stage.
If there are 2 or more world light sources, 1 world light source is always rendered during the deferred composite, and the other light sources are rendered during this step.
Light sources diffuse layer
Light sources specular layer
The engine uses the tile rendering technique for omni lights without shadows. With this optimization, omni light sources are grouped and rendered in batches, decreasing the number of DIP calls and, therefore, improving performance.
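The tile rendering idea can be sketched as binning lights into screen tiles so that each tile only shades the lights overlapping it. Everything below (the rectangle representation of a light's screen bounds, the 16-px tile size) is an illustrative assumption, not the engine's actual implementation:

```python
# Sketch: binning omni lights into screen-space tiles.
# Each tile keeps a list of the lights overlapping it, so pixels
# in that tile only evaluate those lights.

def bin_lights_into_tiles(lights, screen_w, screen_h, tile=16):
    """`lights` is a list of (x_min, y_min, x_max, y_max) pixel
    rectangles, one per light's screen-space bounds. Returns a dict
    mapping (tile_x, tile_y) -> list of light indices."""
    tiles_x = (screen_w + tile - 1) // tile
    tiles_y = (screen_h + tile - 1) // tile
    bins = {}
    for idx, (x0, y0, x1, y1) in enumerate(lights):
        tx0, ty0 = max(0, x0 // tile), max(0, y0 // tile)
        tx1 = min(tiles_x - 1, x1 // tile)
        ty1 = min(tiles_y - 1, y1 // tile)
        for ty in range(ty0, ty1 + 1):
            for tx in range(tx0, tx1 + 1):
                bins.setdefault((tx, ty), []).append(idx)
    return bins
```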
Environment probes are rendered into the 2D array of RGBA16F textures: the first layer contains reflections, the second layer contains ambient light.
The renderer doesn't clear the deferred reflection textures if at least one environment probe has infinite size.
Reflection texture of Environment Probe
Ambient texture of Environment Probe
Planar dynamic reflection is applied
The auxiliary pass is a type of custom pass. Materials with the Auxiliary pass option enabled are rendered into the auxiliary RGBA8 texture.
The auxiliary texture is often used for different post effects. This texture has its own TAA stage.
Auxiliary buffer texture
During this stage of the pipeline, refractions are rendered. They are rendered into the refraction texture and applied during the transparent objects stage.
Refraction is an RGBA8 texture that contains distortion values.
Refractions buffer texture
Transparent Blur Surfaces#
Transparent Blur is the R16F texture that contains blurriness values.
Transparent Blur buffer texture
Screen-Space Reflections are rendered in RGBA16F textures. They are applied at the deferred composite stage.
SSR buffer texture
There are two different types of SSR: with importance sampling and without it:
Importance Sampling is On#
If importance sampling is enabled, the renderer doesn't use the linear depth texture.
In this case, the SSR color texture and SSR velocity texture are rendered. Velocity is used to avoid artifacts while the camera or objects are moving.
SSR has its own TAA stage.
Importance Sampling is Off#
In this case, three textures are rendered: SSR velocity, SSR color texture, and ray-length texture. After that, the renderer applies TAA for the SSR.
The three textures mentioned above are used to perform blurring to provide realistic reflection behavior.
Screen-Space Ambient Occlusion is rendered in its own R8 texture. It is applied in the deferred composite stage.
SSAO can be rendered with or without noise. This option influences TAA:
- If noise is enabled and SSAO is rendered in full resolution, there is no TAA during this stage.
- If noise is enabled and SSAO is rendered in half resolution, this stage has its own TAA.
Screen-Space Global Illumination is rendered in one RG11B10F texture.
SSGI stage also has its own TAA.
During this step, the Underwater Fog texture is initialized and cleared by the renderer to perform the underwater world lighting correctly.
During this stage, the engine uses all necessary textures from the previous stages and passes to create the final texture with opaque geometry. The engine combines buffers and calculates shading for the final texture of this pass.
During this stage, the renderer calculates the lighting of one world light source and applies it. Environment reflections, ambient lighting, and haze (scattering) are also rendered here.
The final image of the deferred pass for opaque geometry (excluding the Emission pass and SSS)
The emission pass follows the creation of the deferred composite image and applies the emission effect over this image.
Transparent Objects Rendering#
Almost all transparent objects are rendered in the forward rendering pass.
During the forward pass, the renderer fills deferred buffers to let forward transparent objects participate in post-effects.
During this stage of the rendering sequence, the refraction texture (which has been already rendered) is applied.
Example of the Clouds texture content
For optimization purposes, Volumetric Clouds are rendered in a certain order depending on the following conditions:
- Before Water and Transparent Objects: if the render_clouds_transparent_order console variable is set to 0, and the current camera is below the highest point of the water or under the surface.
- Between Water and Transparent Objects: if the render_clouds_transparent_order console variable is set to 0, and the current camera is above the highest point of the water.
- After Water and Transparent Objects: if the render_clouds_transparent_order console variable is set to 1.
Water rendering in Unigine is really complex: it has its own deferred buffer (called WBuffer) with its own light and environment probe passes.
FieldHeight and FieldShoreline#
First, the Field Height and Field Shoreline textures are rendered.
The Field Height textures are rendered in a 2D array of R16F or R32F textures (depending on the settings). All Field Height textures are packed into a 2D array to pass the data to the shader.
All Field Shoreline textures are also packed into a 2D array (of RGBA8 textures) to pass the data to the shader.
The renderer initializes the WBuffer.
The diffuse color of water is black, and the diffuse texture is necessary for the decals that will be displayed over the water surface.
The texture format is RGBA8:
The Normal texture stores normals for lighting, and alpha channel stores mesh transparency values (it can be used for soft intersections with water geometry).
The texture format is RGBA8:
The Water texture is used to create the procedural foam mask. The mask shows where foam will be depicted.
The texture format is RG8:
We use a Deferred Constant Transferring approach for water meshes. The R channel of this texture stores the ID of the water mesh, which is used to load the corresponding textures and parameters for the mesh.
The texture format is R32U:
The Underwater mask is used only for Global Water, since the water mesh doesn't have an underwater mode.
The texture format is RGB8:
The texture format is RGBA16:
Copy Opacity Screen#
During this step, the renderer copies the opacity screen, which has been rendered before, into a new texture.
Clear Buffers Textures#
During this stage, the renderer clears buffer textures to prepare for water rendering. It clears the reflection texture (both specular and diffuse), the Water buffer texture, and, if there is a water mesh in the scene, the Constants and Underwater Fog textures.
Select the Water Mode#
The renderer checks the camera position to determine which part of the water should be rendered. There are three modes:
- UNDERWATER. Only underwater is rendered.
- OVERWATER. Only the upper surface of the water is rendered.
- BOTH. Both underwater and overwater are rendered, including the separating waterline.
Filling the WBuffer#
During this step, the WBuffer textures are filled with the corresponding data.
During this stage, water decals are rendered. Decals are rendered (normal and diffuse) by using alpha-blending.
Water Lights and Environment Probes#
During this stage, underwater shafts are rendered by using the shaft sample values of the underwater mask (stored in the B channel). They are rendered only if the camera mode is UNDERWATER or BOTH.
After all water stages, the renderer writes two composite textures for the water pass:
- For underwater
- For overwater
Then the waterline (a black separating line) is rendered.
After the composite texture is rendered, the depth-of-field effect for water is rendered. It performs blurring over the whole texture.
If Global DOF is enabled, it uses the water depth (water is rendered into the depth texture, which already contains the rendered opaque objects) to perform DoF blurring correctly.
Sorting Transparent Objects#
Transparent objects are sorted from the farthest to the nearest and are rendered one by one.
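The back-to-front ordering can be sketched as a sort by squared distance from the camera. This is illustrative code, not engine API; the `(name, position)` representation is an assumption:

```python
# Sketch: sorting transparent objects farthest-to-nearest, the order
# required for correct alpha blending (each object is drawn over the
# ones behind it).

def sort_back_to_front(objects, camera_pos):
    """`objects` is a list of (name, (x, y, z)) world positions.
    Returns them sorted from farthest to nearest to camera_pos."""
    def dist_sq(obj):
        _, (x, y, z) = obj
        cx, cy, cz = camera_pos
        # squared distance avoids an unnecessary sqrt per object
        return (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2
    return sorted(objects, key=dist_sq, reverse=True)
```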
Transparent Objects Rendering#
Transparent objects rendering has two modes: Multiple Environment Probes enabled/disabled, depending on the value of the Multiple Environment Probes parameter of the mesh_base material.
Multiple Environment Probes Enabled#
First, the Depth Buffer is rendered and the Environment Probes texture is cleared.
Environment Probe Rendering#
During this stage, environment probe ambient and reflection are rendered into Environment Probes texture.
By using the Environment Probe texture, the renderer lerps sky ambient and reflection with the Environment Probe texture.
The renderer adds all the lights to the texture.
Multiple Environment Probes Disabled#
During the ambient pass, the environment, lightmaps, and emission for transparent objects are calculated.
During the light passes for transparent objects, the lighting is calculated. The renderer goes only through the passes that are specified in the material states.
Ultimately, all lights for transparent objects are added one by one.
After the transparent objects rendering
Filling Deferred Buffers#
Transparent objects also write information into the deferred buffers to let them participate in post-effects.
The following buffers are modified during this stage:
- Opacity Depth
- Material Mask
Color Texture Copy#
SRGB Correction and Static Exposure#
In this pass, the engine performs sRGB correction and static exposure (if enabled). If static exposure is off, the engine goes to the Adaptive Exposure stage.
After the SRGB correction
During this step, the engine renders the luminance texture and applies it to the screen by using the corresponding exposure mode: logarithmic or quadratic.
After the Adaptive Exposure
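Logarithmic adaptive exposure is commonly computed from the log-average scene luminance. A minimal sketch; the mid-grey `key` parameter and the epsilon are illustrative assumptions, not engine settings:

```python
# Sketch: logarithmic adaptive exposure. The log-average luminance is
# robust against a few very bright pixels, unlike a plain mean.

import math

def adaptive_exposure(luminances, key=0.18):
    """Compute an exposure scale that maps the log-average of the
    scene luminances to a mid-grey `key` value."""
    eps = 1e-4  # avoid log(0) for pure black pixels
    log_avg = math.exp(
        sum(math.log(eps + l) for l in luminances) / len(luminances)
    )
    return key / log_avg
```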
Temporal Anti-Aliasing (TAA)#
This pass is used when TAA is enabled. Otherwise, the engine skips this pass and goes to the next one.
After the TAA
TAA uses previous frames to improve the current frame by using linear interpolation.
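The linear interpolation mentioned above can be sketched as a per-pixel blend of the current color with the history color. Real TAA also reprojects the history by the velocity buffer and clamps it against the current neighborhood to reject ghosting; the `alpha` weight here is purely illustrative:

```python
# Sketch: TAA history blend. A small alpha keeps most of the
# accumulated history, converging to a temporally stable image.

def taa_resolve(current, history, alpha=0.1):
    """Blend the current frame's RGB color toward the (reprojected)
    history color with weight alpha."""
    return tuple(
        alpha * c + (1.0 - alpha) * h for c, h in zip(current, history)
    )
```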
During this stage, blurriness behind transparent objects is applied. The renderer uses the texture from the Transparent Blur Surfaces stage.
The frame with the Transparent Blurriness applied
Render Post Materials#
In this pass, the engine generates procedural textures for Render Post Materials that should be affected by the Camera Effects.
Almost all camera effects are rendered in their own textures to be used in the final screen composition stage to create the final image.
Motion blur uses the Velocity buffer for texture blurring. It doesn't have its own buffer and modifies the screen texture directly.
Depth of Field (DoF)#
For the Depth of Field effect, the engine generates a DOF RG8 mask.
- R channel stores the farthest blurring values (that are behind in-focus objects).
- G channel stores the nearest blurring values (that are in front of in-focus objects).
DOF works with the RG11B10F texture format for better performance; therefore, if the screen texture is RGB16F, it is converted into RG11B10F during this step. Additionally, the renderer performs some accuracy operations to improve the texture within the boundary areas of objects that are in focus.
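The R/G split of the DOF mask can be sketched as follows; `focus_dist` and `focus_range` are hypothetical parameters used only for illustration:

```python
# Sketch: DOF RG mask. R holds blurring for pixels behind the
# in-focus zone, G for pixels in front of it; both are 0 in focus.

def dof_mask(depth, focus_dist, focus_range):
    """Return (far_blur, near_blur) in [0, 1] for a pixel at `depth`,
    given an in-focus zone of focus_dist +/- focus_range."""
    far_blur = max(0.0, min(1.0, (depth - (focus_dist + focus_range)) / focus_range))
    near_blur = max(0.0, min(1.0, ((focus_dist - focus_range) - depth) / focus_range))
    return (far_blur, near_blur)
```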
Chromatic aberrations are also rendered here.
The renderer generates a new texture containing only the pixels that are brighter than the specified threshold. This texture determines the areas that will be illuminated by the other camera effects mentioned below:
If bloom is enabled, the engine generates up to 8 bloom textures: each texture has a lower resolution (original size, original size /2, original size /4, and so on) with the bloom effect. After being generated, all bloom textures with different resolutions compose the final bloom texture.
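The chain of progressively halved bloom textures can be sketched by listing its resolutions (a simple illustration of the "original size, /2, /4, ..." scheme described above):

```python
# Sketch: resolutions of the bloom texture chain. Each level halves
# the previous one, up to 8 levels; the downsampled levels are later
# combined into the final bloom texture.

def bloom_chain_resolutions(width, height, max_levels=8):
    """Return the (w, h) pairs of the bloom chain, stopping early if
    a dimension would drop below one pixel."""
    levels = []
    w, h = width, height
    for _ in range(max_levels):
        if w < 1 or h < 1:
            break
        levels.append((w, h))
        w //= 2
        h //= 2
    return levels
```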
The Cross and Lens textures use a bloom texture (not the final one) that has 1/4 of the screen resolution, even if bloom is not rendered.
The Cross texture uses the original size /4 bloom texture to create the cross-camera effect even if bloom is disabled.
The Lens texture also uses the original size /4 bloom texture to create the bloom effect even if bloom is disabled. Dirt on the lens is also applied here.
Final Screen Composition#
During this stage, the engine assembles the final texture by using all textures that were rendered during the previous steps. Filmic tone mapping, dithering, and the LUT texture are applied here during the final screen composition.
Final screen composition
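As a stand-in for the filmic tone mapping step, here is one widely used filmic curve (John Hable's operator); the engine's actual curve and parameters may differ:

```python
# Sketch: Hable's filmic tone-mapping curve, normalized by a linear
# white point so that HDR input maps into [0, 1] with a film-like
# shoulder and toe.

def filmic_hable(x):
    """Map a linear HDR luminance value to [0, 1] (illustrative
    constants from Hable's published operator)."""
    A, B, C, D, E, F = 0.15, 0.50, 0.10, 0.20, 0.02, 0.30

    def hable(v):
        return ((v * (A * v + C * B) + D * E)
                / (v * (A * v + B) + D * F)) - E / F

    white = 11.2  # linear white point
    return hable(x) / hable(white)
```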
After the final screen composition, all post effects are applied.
If a procedural texture is necessary for a post material, it is calculated during this step and then the post effect is applied to the final composed screen.
Post Engine Materials#
With this engine post effect, the engine cuts out the edges of the screen that are outside the main window.
If FXAA is enabled, the engine doesn't perform the TAA pass and performs anti-aliasing here by using the FXAA algorithm.
If the Sharpen effect is enabled, the engine applies it to the final screen during this step.
Post Debug Materials#
During this step, Debug Materials are applied to the composed image.
During this step, transparent surfaces with the Overlapping option enabled are rendered. This is the very last rendering stage for surfaces; it allows creating nodes that are not affected by post effects, such as tooltips.
During this stage, the engine visualizer is rendered: mesh wireframes, bounding boxes, nodes culled with occlusion query, etc.
The Fade material is used to create smooth screen fading to black. It is usually used at the beginnings and endings of Tracker tracks.
GUI is rendered last, after all other stages and passes.
After GUI rendering
Finally, the engine has rendered a single frame!