Too much unloading




Posted

Since 2.19 I have a problem where geometry and textures seem to be unloaded from memory too readily. I'm working on a VR project, and when I turn my back, the world behind me disappears, so that when I look back again it is reloading meshes that were previously displayed and replacing blank colours with textures. I'm sure my world has too many polygons, but 2.18 did not behave this way: once it had loaded, I could move around the world with nothing being built in front of me...

Is this expected? Is there anything I can do to fix it?

Posted

Hi Aaron,

There have indeed been changes in how the engine now obtains available GPU memory from the system.

By default, on Windows, we rely on the values reported by the GPU driver. NVIDIA and AMD typically recommend using around 80% of the total dedicated VRAM available. However, the driver may report lower values. For instance, if you have resource-intensive applications running in the background, such as Google Chrome, you might see values as low as 40%. Consequently, your 12GB of VRAM could effectively be reduced to 4.8GB, which is far from ideal. :)
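The arithmetic in that example can be sketched as a quick back-of-envelope check (plain Python, purely illustrative and not engine code; the 80% and 40% figures are the ones mentioned above):

```python
# Back-of-envelope check of how an effective VRAM budget shrinks when the
# driver reports only a fraction of dedicated VRAM as usable.
# Plain illustration, not engine code.

def effective_budget_gb(total_vram_gb, reported_fraction):
    """Usable VRAM in GB, given the fraction the driver reports as available."""
    return total_vram_gb * reported_fraction

total = 12.0                                     # a 12GB card
recommended = effective_budget_gb(total, 0.80)   # vendor guideline, ~9.6GB
constrained = effective_budget_gb(total, 0.40)   # heavy background apps, ~4.8GB
print(f"recommended: {recommended:.1f} GB, constrained: {constrained:.1f} GB")
```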

So the first thing you need to check is the current VRAM usage in Windows Task Manager. Pay attention to the dedicated and shared GPU memory numbers:

If dedicated memory is nearly full and you have large values in shared memory, it is a clear sign that you are running out of VRAM. You need to find a way to reduce VRAM usage for that specific configuration.

Try closing GPU-intensive applications that may be running in the background. Typically this means web browsers such as Google Chrome, but also other software that uses the GPU for rendering user interfaces or performing various tasks, such as digital content creation (DCC) tools like 3ds Max, Maya, and Blender.

Since you are developing a VR application, it will always be more VRAM-intensive than a regular 3D application. Additionally, some extra VRAM will often be utilized by third-party applications such as SteamVR Home, Oculus Home, and similar programs, along with driver overhead for VR rendering. Therefore, it is crucial to create an application that does not attempt to use 100% of the VRAM. Instead, it should aim to stay within the limits recommended by hardware vendors, which is approximately 80% of the available VRAM. If you are approaching these limits, the engine will attempt to balance near that threshold by dynamically loading and unloading meshes and textures.

In addition, version 2.19.x has a GPU memory leak at engine startup (~200MB). This bug has already been fixed in our internal builds and will be included in the upcoming 2.19.0.2 update, scheduled for early November. You can also mitigate this issue by executing the memory_clear console command after the world has been loaded, for example.

--

In order to get a better understanding of how the engine is using memory and for what, you can take a look at the memory profiler output. To do so, run the following command in the console (either inside the Editor or in a final build):

  • show_profiler_render 0 && show_profiler_memory 1 && show_profiler 1
    To disable the profiler later, simply enter show_profiler 0.

At first glance, you are interested in two values:

  • GPU VRAM usage
  • GPU VRAM free

These numbers indicate how much VRAM is used for the current rendering and how much is still free. If you are doing this test in the Editor, you can go to Windows -> Settings -> Runtime -> Render -> Screen and select an HMD from the drop-down list to change how the engine renders its viewport, in order to "simulate" the resolutions that we normally use in VR.

In my case, changing the HMD to HTC Vive Pro almost doubled the VRAM usage on an empty scene. This is mostly caused by the increased resolution of the rendering buffers used in post-processing (VRAM Render Buffers went from 300MB to 1000MB).
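A rough way to see why the jump is that large: render-target memory scales linearly with pixel count. Here is a minimal sketch (plain Python; the 8 bytes/pixel figure assumes an RGBA16F target, and 1440x1600 per eye is the Vive Pro panel resolution — both are my assumptions for illustration, not engine internals):

```python
# Rough estimate of render-target memory: it scales with pixel count.
# Assumes 8 bytes/pixel (an RGBA16F target); a real engine keeps many such
# buffers, so the absolute numbers are illustrative only.

def target_mb(width, height, bytes_per_pixel=8):
    """Approximate size of one render target in MB."""
    return width * height * bytes_per_pixel / (1024 * 1024)

desktop = target_mb(1920, 1080)   # one full-HD target, ~15.8 MB
vr = target_mb(1440 * 2, 1600)    # both eyes of an HTC Vive Pro
print(f"desktop: {desktop:.1f} MB, VR: {vr:.1f} MB, ratio: {vr / desktop:.2f}x")
```

The ratio alone (about 2.2x here) is enough to explain a near-doubling of the VRAM spent on buffers, before any supersampling is applied.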

You can reduce the amount of memory needed for rendering buffers by turning off some post-processing effects (or lowering their resolution) and also by turning off some engine features. Yes, the visuals may not be as good afterwards, but if you need to save some VRAM, you have to make some tradeoffs here. You can also check our VR demos (such as Foxhole and Oil Refinery) and grab the fully optimized VR rendering settings from them, trying to find a balance between visuals and performance. For example, using the vr_training_low.render preset from Oil Refinery reduces VRAM Render Buffers from 1GB to 190MB in VR.

More hints on VR content optimization can be found in our documentation:

You can also use the built-in content profiler to find the exact meshes and textures that take up a lot of memory while rendering:

If you find yourself in a situation where you have disabled all third-party applications, adjusted the rendering preset, and reduced the render buffer size, yet are still running out of memory, there are additional steps you can take. These steps are based on the output from the memory profiler (or information from the content profiler as well). Essentially, you need to examine other large values in the profiler and determine their origins.

For example, there might be a lot of textures displayed on screen whose resolution is simply too high. In that case, adjusting object visibility distances or reducing texture resolutions may help. You can also try enabling texture mipmap streaming, allowing the engine to unload unneeded mip levels, by turning on Windows -> Settings -> Runtime -> Streaming -> Textures -> Mipmaps. In texture-heavy projects this may significantly reduce the amount of VRAM needed, but it will certainly require content adjustments, since mipmaps will now be unloaded based on the texture density value, and the default of 1.0 may not be enough, so you will see blurry textures more often.
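To get a feel for why streaming out just the top mip level saves so much, recall that each mip level holds a quarter of the pixels of the one above it. A small sketch (plain Python, assuming an uncompressed 4-bytes-per-pixel square texture; real formats and block compression change the absolute numbers, not the ratios):

```python
# Why dropping the top mip level saves ~75% of a texture's memory:
# each mip level has a quarter of the pixels of the one above it.
# Assumes an uncompressed 4-bytes-per-pixel square texture.

def texture_mb(size, bytes_per_pixel=4, dropped_top_mips=0):
    """Size in MB of a square texture with a full mip chain, after
    discarding the given number of highest-resolution levels."""
    total = 0
    while size >= 1:
        if dropped_top_mips > 0:
            dropped_top_mips -= 1
        else:
            total += size * size * bytes_per_pixel
        size //= 2
    return total / (1024 * 1024)

full = texture_mb(4096)                         # full chain, ~85.3 MB
trimmed = texture_mb(4096, dropped_top_mips=1)  # 4K level unloaded, ~21.3 MB
print(f"full: {full:.1f} MB, trimmed: {trimmed:.1f} MB")
```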

For mesh_base materials, you can adjust the texture streaming density in the Textures tab:


And for shader-graph-based materials, in the Parameters section:


Or you can simply limit the max resolution of textures in Windows -> Settings -> Runtime -> Render -> Textures -> Max Resolution (or use Quality settings).

If you have thousands of small meshes (each less than 128 KB on disk) in VRAM, they will always consume 128 KB each. For instance, consider a scenario where you have 10,000 meshes on disk, with each mesh being 15 KB. The total size on disk would be approximately 150 MB. To display these meshes on screen, they must be copied into VRAM, where they would consume 10,000 * 128 KB = 1,250 MB (~8x more than their size on disk). This increase in memory usage is due to GPU memory alignment requirements on desktop platforms, such as DX12/Vulkan for Windows and Vulkan for Linux. The smallest possible alignment for buffers is 64 KB, and since each mesh has two buffers (indices and vertices), the final memory footprint can be enormous for such content. In this case, it is advisable to merge meshes during import or adopt a different approach to content creation, ensuring that the final scene does not consist of thousands of small meshes that could ultimately fragment GPU memory.
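The arithmetic above can be checked with a few lines (plain Python; the 64 KB minimum alignment and the two-buffers-per-mesh assumption come straight from the paragraph above):

```python
# Small-mesh VRAM overhead: each of the two buffers per mesh (vertices and
# indices) is rounded up to the 64 KB minimum alignment.
ALIGNMENT_KB = 64
BUFFERS_PER_MESH = 2  # vertex buffer + index buffer

def vram_footprint_mb(mesh_count, mesh_kb_on_disk):
    """VRAM cost in MB when every buffer is rounded up to the alignment."""
    per_buffer_kb = mesh_kb_on_disk / BUFFERS_PER_MESH
    aligned_kb = -(-per_buffer_kb // ALIGNMENT_KB) * ALIGNMENT_KB  # round up
    return mesh_count * aligned_kb * BUFFERS_PER_MESH / 1024

disk_mb = 10_000 * 15 / 1024             # ~146 MB on disk
vram_mb = vram_footprint_mb(10_000, 15)  # 10,000 * 128 KB = 1250 MB in VRAM
print(f"disk: {disk_mb:.0f} MB, VRAM: {vram_mb:.0f} MB ({vram_mb / disk_mb:.1f}x)")
```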

All of these recommendations essentially suggest content profiling and adjustments, which can typically be completed in a reasonable timeframe and yield optimized results for specific scenes and hardware.

If you absolutely cannot make any changes to the content and still need to render everything, you can override the streaming limits. However, this is not a recommended option, as it places you in a gray area where the GPU driver can arbitrarily and unpredictably transfer resources from dedicated (fast) VRAM to the slower shared RAM without any control from our side. If, for any reason, it moves any of the render buffers into RAM, you will experience significant performance issues, including a 5-10x drop in performance, as well as spikes and additional lag. Therefore, we strongly advise against this approach :)

First, you can enable force streaming to eliminate the issue of not seeing meshes when you move your head. Instead of experiencing a delay, you will notice a brief spike followed by the meshes appearing on the screen. To do this, navigate to Windows -> Settings -> Runtime -> Render -> Streaming -> Meshes GPU -> Streaming Mode -> Force.

After that, enter the following dangerous commands in the console to allow the engine to use as much VRAM/RAM as it needs:

  • render_streaming_vram_overcommit 1
  • render_streaming_vram_budget 2
  • render_streaming_usage_limit_vram 100 
  • render_streaming_usage_limit_ram 100
  • render_streaming_free_space_vram 0
  • render_streaming_free_space_ram 0
  • render_streaming_meshes_mode_vram 1

This will enable the current 2.19 engine to function very similarly to version 2.18, albeit with a slight difference. The 2.18.x and earlier engine versions can request the driver to allocate even more memory (for example, 150% instead of 100%), but that's a different story :)

Hope that helps!


How to submit a good bug report
---
FTP server for test scenes and user uploads:

Posted

Wow, thank you for such a comprehensive answer...

Quick question - does the selection of Float vs Double Precision in the project configuration have an impact on this? My project is currently double...

Thanks
 
