helmut.bressler Posted March 5, 2018

Hello, I'm working on a C++ plugin which should perform an image composition task at the end of the frame. I need access to the depth buffer, but Renderer::getTextureOpacityDepth() and Renderer::getTextureCurrentDepth() always return a NULL pointer (under DirectX 11). I have already tried this in different places (render() of the world logic, and the swap() and gui() methods of the Plugin interface), but I always get NULL. Am I using the Renderer in an inappropriate way? (From what I can tell in the Visual Studio graphics analyzer, both textures (Current Depth and Opacity Depth) exist and are filled correctly.)

Many thanks and cheers
Helmut
silent Posted March 6, 2018

Hi Helmut,

Have you checked the samples on GBuffer usage? They are located in <SDK>/source/samples/Api/Render:

GBufferRead
GBufferWrite

With the new callbacks you can get all the textures you want and modify them for your needs.

Thanks!

How to submit a good bug report
---
FTP server for test scenes and user uploads: ftp://files.unigine.com user: upload password: 6xYkd6vLYWjpW6SN
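For readers landing here later: the callback approach from the GBufferRead sample looks roughly like this (a minimal sketch; the callback name comes from the sample as quoted later in this thread, but the exact registration API is not shown here — check the sample source in <SDK>/source/samples/Api/Render/GBufferRead for the real signatures):

```
// Sketch, modeled on the GBufferRead sample. Registration of this
// callback with the engine is omitted -- see the sample for the
// exact API.
void gbuffers_ready_callback(Unigine::Renderer *renderer)
{
    // Inside the callback these no longer return NULL: at this
    // point the G-buffer textures have been created and filled.
    Unigine::TexturePtr depth   = renderer->getTextureOpacityDepth();
    Unigine::TexturePtr current = renderer->getTextureCurrentDepth();

    // ... read or modify the textures here, within the callback body ...
}
```

Calling the same getters from world logic render(), or from the plugin's swap()/gui(), returns NULL because those run outside the window where the pool-owned G-buffer textures are valid.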
helmut.bressler Posted March 6, 2018 (Author)

Hello Silent, I found it! It works perfectly. Thank you.

Cheers
Helmut
Greg.Mildenhall Posted April 24, 2018

I changed things up slightly from how GBufferRead works, hoping to avoid the unnecessary copy. Rather than running the "fetch_buffers_post" post process, my gbuffers_ready_callback() contains only:

m_depthTexture = renderer->getTextureOpacityDepth();

and I apply this texture to a material in the main render. (My viewport->renderNode(camera, node) call is in render(), not update(), but I doubt that makes a difference.)

It works as intended _only if_ I force the viewport to use different dimensions from the main render. Otherwise it looks like the main render uses the same depth buffer and writes over it before I can use it. :(

How do I make my secondary viewport use a private depth buffer of the same dimensions? Enabling a private TextureRender only seems to use it for the final output, not for the depth buffer, even when I set viewport->setRenderMode(Unigine::Viewport::RENDER_DEPTH).
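In context, the no-copy variant described above amounts to something like this (sketch; the surrounding class and member name are taken from the post, everything else is illustrative):

```
// No-copy variant: keep only the smart pointer to the pool-owned
// depth texture and bind it to a material in the main render.
// Caveat: the texture is owned by the engine's texture pool, so
// the engine may reuse or overwrite it after the callback returns
// (which is exactly the aliasing behaviour described in this post).
void MyPlugin::gbuffers_ready_callback(Unigine::Renderer *renderer)
{
    m_depthTexture = renderer->getTextureOpacityDepth();
}
```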
silent Posted June 8, 2018

Hi Greg,

Not sure what is happening in your case. Could you please provide a minimal test scene (or a modified sample) and the results you want to achieve?

Thanks!
Greg.Mildenhall Posted June 11, 2018

I'm pretty sure _I_ know what's happening in my case, as I said: it looks like the main render uses the same depth buffer and writes over it before I can use it. What I want to achieve is the same as the GBufferRead demo (at least for the depth buffer; I don't need the others), but without the unnecessary copy.
andrey-kozlov Posted June 14, 2018

Hello Greg,

There is no safe way to do what you want. The depth texture is owned by the texture pool and may be reused or even destroyed outside the callback's body. A copy operation is considered relatively cheap on modern video cards, and there are a lot of them during the rendering sequence. If you find that this is a hot spot in your case, we could try to extend the API to support textures owned by client code, so that the copies can be avoided.
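So the safe pattern is to copy the pool-owned texture into one your code owns before the callback returns. A rough sketch of that idea (the Texture::create() and copy() calls here are assumptions for illustration, not verified API — the GBufferRead sample performs the equivalent transfer via its "fetch_buffers_post" post process instead):

```
// Client-owned texture; persists across frames.
Unigine::TexturePtr g_depth_copy;

void gbuffers_ready_callback(Unigine::Renderer *renderer)
{
    Unigine::TexturePtr depth = renderer->getTextureOpacityDepth();
    if (!depth)
        return;

    // Copy while the pool-owned texture is still guaranteed valid.
    // NOTE: Texture::create()/copy() are illustrative assumptions;
    // the GBufferRead sample does this transfer with its
    // "fetch_buffers_post" post material.
    if (!g_depth_copy)
        g_depth_copy = Unigine::Texture::create();
    g_depth_copy->copy(depth);
}
```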
Greg.Mildenhall Posted June 15, 2018

Unfortunately our users don't generally have modern video cards. :( That said, the performance is acceptable in our current use case; it just annoyed me to have to write extra code to copy something I only need to use once. It would still be nice to have an API for client-owned render targets, because we are considering future features which would use this technique multiple times per frame.