sebastian.vesenmayer Posted July 22, 2021 (edited)
When starting our program I see a surface count of over 3300 with all models loaded, which is a bit heavy. When I export the materials and nodes and open them in the Editor, the surface count is only about a third of that, which roughly matches the number of ObjectMeshStatic nodes. I found out that material parameters like emission, auxiliary, and transparency multiply the number of surfaces. Why is that? Does the surface count per se lower performance, or is it just informational? Which other material parameters can increase the number of surfaces? Version is 2.14.0. Thanks
silent Posted July 23, 2021
Hi Sebastian, each light with shadows increases the number of rendered surfaces. The number of surfaces also increases for each pass a surface is rendered in (for example, the aux pass adds surfaces), and the same applies to transparent surfaces. So basically that's it: additional passes and lights are the main cause. Thanks!
How to submit a good bug report --- FTP server for test scenes and user uploads: ftp://files.unigine.com user: upload password: 6xYkd6vLYWjpW6SN
sebastian.vesenmayer Posted July 23, 2021 (Author)
Does that mean that for each additional pass an additional draw command will be sent to the graphics card? So transparent objects get rendered twice?
silent Posted July 23, 2021
Transparent objects are rendered to the G-buffer initially and then once more for each light in the scene. So in a scene with a single light, a transparent object will be rendered at least twice. You can check the number of DIPs in the profiler (Dips counter). The number of DIPs is not always equal to the number of surfaces due to batching, and transparent surfaces can't be batched at all. Also, transparent draw calls are the most expensive ones. Thanks!
sebastian.vesenmayer Posted July 26, 2021 (Author, edited)
Ok, I found out that a Particle System whose emitter is not enabled creates 2 additional DIPs. It also issues draw calls when the particle count is zero. This can become a bottleneck when placing many particle systems that should keep animating after the emitter is disabled. Do I need to manually check every particle system now and disable the node when the particle count reaches 0? Will this be improved in future versions of the engine? Thanks.
silent Posted July 26, 2021
As I previously mentioned, not all DIPs cost equally, so these may be very cheap ones that you won't even notice. Could you also give us more details about those 2 DIPs with a disabled emitter: how did you measure this, and can it be reproduced in the default empty scene inside the Editor? Any specific emitter type? Thanks!
sebastian.vesenmayer Posted July 26, 2021 (Author, edited)
You can reproduce this by adding lots of default particle systems to an empty scene (I used 4000) and disabling only the emitter, not the node. This increases the frame time noticeably without rendering anything.
silent Posted July 26, 2021
Is that your typical use case, keeping 4000 particle systems at the same location? In that case disabling the node itself would be mandatory, because such objects are still updated every frame. Keeping 4k objects in a single place simply makes the spatial tree useless, and that's why you see the performance drop. On this screenshot you can see that the particles update has been split across six CPU cores in parallel. In more realistic usage scenarios you would have a smaller number of objects rendered per frame: the majority of objects (those outside the viewing frustum) are discarded by the spatial tree at the very beginning of the frame and therefore are not updated at all.
sebastian.vesenmayer Posted July 26, 2021 (Author)
One usage scenario could be a firefighting exercise at the airport: when operating at the end of the runway, the view shows parts of the apron. A great many vehicles and aircraft can be located there, and each aircraft has around 20 to 30 empty Particle System nodes. In that case it would take only 133 aircraft, which is not very many. A chase view of an aircraft during arrival/departure can also show the whole airport.
silent Posted July 26, 2021
In this case it's better to disable the node completely until it is really needed. Microprofile should help you find the bottlenecks and eliminate them.
sebastian.vesenmayer Posted July 26, 2021 (Author)
We are going to disable them at runtime by checking the number of particles. We are also doing additional checks on our models to figure out where alpha testing makes sense and where it doesn't. Every node we don't need to process is a win. Speaking of Microprofile: it stops working once we exceed a certain number of particle systems in the scene. I can now reproduce this in the Editor. Should I send it to you via FTP?
silent Posted July 26, 2021
Is even microprofile_dump_html not working? On a very complex scene it may be a better option to dump HTML instead of doing a live view in the browser. If there is too much information per frame, browsers may fail to load it as well. You can try reducing the number of web frames stored to lower the workload a bit (the microprofile_webserver_frames command). Thanks!
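For reference, the console side of this might look like the following (a sketch; both commands are named in the reply above, but the argument values are illustrative and the exact syntax should be checked against the console reference for your engine version):

```
// UNIGINE console (argument syntax may differ per version):
microprofile_webserver_frames 50   // store fewer frames for the web view
microprofile_dump_html             // dump a static HTML report instead of live view
```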
sebastian.vesenmayer Posted July 27, 2021 (Author)
I have this problem only in Firefox, with both the live view and the dump, when the scene gets more complex; Microsoft Edge works fine. Thanks
silent Posted July 27, 2021
Yeah, in our tests Firefox also handles big HTML files worse than Chromium-based browsers.