
[SOLVED] How to use Renderer::getTextureOpacityDepth()




Hello,

I'm working on a C++ plugin that should perform an image-composition task at the end of the frame.

I need access to the depth buffer, but Renderer::getTextureOpacityDepth() and Renderer::getTextureCurrentDepth() always return a NULL pointer (under DirectX 11).

I have already tried this in different places (render() of the world logic, and the swap() and gui() methods of the Plugin interface), but I always get NULL.

Am I using the Renderer in an inappropriate way?

(From what I can tell in the Visual Studio graphics analyzer, both textures (Current Depth and Opacity Depth) exist and are filled correctly.)

Many thanks and cheers

Helmut

 

 

[Attachment: vsga_opacity_depth.png — Visual Studio graphics analyzer capture]

  • silent changed the title to [SOLVED] How to use Renderer::getTextureOpacityDepth()

I changed things up slightly from how GBufferRead works, hoping to avoid the unnecessary copy. Rather than running the "fetch_buffers_post" post-process, my gbuffers_ready_callback() contains only:

m_depthTexture = renderer->getTextureOpacityDepth();

And I apply this texture to a material in the main render. (My viewport->renderNode(camera, node) call is in render(), not update(), but I doubt that makes a difference.)

It works as intended _only if_ I force the viewport to use different dimensions from the main render. Otherwise it looks like the main render uses the same depth buffer and writes over it before I can use it. :(

How do I make my secondary viewport use a private depth buffer of the same dimensions? Enabling a private TextureRender only seems to use it for the final output, not for the depth buffer, even when I set viewport->setRenderMode(Unigine::Viewport::RENDER_DEPTH).


I'm pretty sure _I_ know what's happening in my case, as I said: it looks like the main render uses the same depth buffer and writes over it before I can use it.

As I said, what I want to achieve is the same as the GBufferRead demo (at least for the depth buffer; I don't need the others), but without the unnecessary copy.
 


Hello Greg,

There's no safe way to do what you want. The depth texture is owned by the texture pool and may be reused or even destroyed outside the callback's body. A copy operation is considered relatively cheap on modern video cards, and there are a lot of them during the rendering sequence.
If this turns out to be a hot spot in your case, we could try to extend the API to support textures owned by client code, to avoid the copies.


Unfortunately our users don't generally have modern video cards. :( That said, the performance is acceptable in our current use case; it just annoyed me to have to write extra code to copy something I only need to use once. It would still be nice to have an API for client-owned render targets, because we're considering future features that would use this technique multiple times per frame.
