Whole GBuffer for Fisheye



Hi Everyone,

While implementing a custom fisheye renderer, I had some interesting ideas regarding screen-space effects.

Our fisheye rendering works similarly to Unigine's panoramic rendering: we render several images in various directions and stitch them back together.

Here is an image of the Unigine panoramic rendering to give you a feeling for how the process works (I deliberately left screen-space effects on to make the different images visible):

[Image: Fisheye_rendering_with_ss_enabled]

Since the different images are rendered independently, all screen-space effects need to be disabled. Leaving them enabled leads to artifacts like those in the image above, or as discussed here:

This got me thinking:

Instead of just combining the final images, would it be possible to stitch together all GBuffer textures and then do the lighting on the combined GBuffer?
I think the expensive part of fisheye rendering is creating the different images, not stitching them together, so this approach might even improve performance, since you would only need one lighting pass. However, the cost of combining all GBuffer textures might outweigh that improvement.
The biggest win, though, would be re-enabling some screen-space effects. I think things like tonemapping and bloom in particular could work well, maybe even SSAO. Other effects would not work because the stitched GBuffer would no longer correspond to a "linear" (perspective) projection, so anything like SSR or SSGI would still not work, I think.
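
To make the idea a bit more concrete, here is a minimal CPU-side sketch of what I mean by stitching, assuming an equidistant fisheye mapping and a set of pre-rendered perspective faces. Every name in it (FaceGBuffer, select_face, ...) is hypothetical, not Unigine API, and a real implementation would of course be a fullscreen shader pass rather than a CPU loop:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

static float dot(const Vec3 &a, const Vec3 &b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// One texel of a (simplified) GBuffer.
struct GBufferSample {
    float    depth;          // face-local depth
    uint32_t packed_normal;  // normal in the face's view space
    uint32_t albedo;         // RGBA8
};

// One intermediate image, rendered with a regular perspective camera.
struct FaceGBuffer {
    int width = 0, height = 0;
    std::vector<GBufferSample> texels;
    Vec3 right, up, forward;   // face camera basis in the main camera's space
    float tan_half_fov = 1.0f; // per-face perspective FOV
};

// Equidistant fisheye: distance from the image center is proportional
// to the angle from the optical axis.
static Vec3 fisheye_direction(float u, float v, float fov)
{
    const float x = 2.0f * u - 1.0f;
    const float y = 2.0f * v - 1.0f;
    const float r = std::sqrt(x * x + y * y);
    if (r < 1e-6f)
        return {0.0f, 0.0f, 1.0f};
    const float theta = r * 0.5f * fov;  // angle off the forward axis
    const float s = std::sin(theta) / r;
    return {x * s, y * s, std::cos(theta)};
}

// Pick the face whose forward axis is closest to the view direction.
static size_t select_face(const std::vector<FaceGBuffer> &faces, const Vec3 &d)
{
    size_t best = 0;
    float best_dot = -2.0f;
    for (size_t i = 0; i < faces.size(); ++i) {
        const float c = dot(faces[i].forward, d);
        if (c > best_dot) { best_dot = c; best = i; }
    }
    return best;
}

// Project the direction onto the face's image plane and fetch the
// nearest texel (no filtering).
static const GBufferSample *sample_face(const FaceGBuffer &f, const Vec3 &d)
{
    const float z = dot(f.forward, d);
    if (z <= 0.0f)
        return nullptr;
    const float nx = dot(f.right, d) / (z * f.tan_half_fov); // [-1, 1]
    const float ny = dot(f.up, d)    / (z * f.tan_half_fov);
    const int px = std::min(f.width - 1,  std::max(0, (int)((nx * 0.5f + 0.5f) * f.width)));
    const int py = std::min(f.height - 1, std::max(0, (int)((ny * 0.5f + 0.5f) * f.height)));
    return &f.texels[(size_t)py * f.width + px];
}

// One pass over the fisheye-space GBuffer; lighting would then run once
// on `out` instead of once per face, which is where the potential
// performance win comes from.
void stitch_gbuffer(const std::vector<FaceGBuffer> &faces, float fisheye_fov,
                    int out_w, int out_h, std::vector<GBufferSample> &out)
{
    out.assign((size_t)out_w * out_h, GBufferSample{});
    for (int y = 0; y < out_h; ++y)
        for (int x = 0; x < out_w; ++x) {
            const Vec3 d = fisheye_direction((x + 0.5f) / out_w,
                                             (y + 0.5f) / out_h, fisheye_fov);
            const FaceGBuffer &face = faces[select_face(faces, d)];
            if (const GBufferSample *s = sample_face(face, d))
                out[(size_t)y * out_w + x] = *s;
        }
}
```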

This would require quite some fiddling with the rendering pipeline, so before diving in I wanted to ask Unigine: is this something you have considered and decided against? If so, for what reason(s)? If not, what are your thoughts on it? Are there more problems to be expected that I'm not thinking of?

And of course: If anyone else has any thoughts on this, feel free to share them here!

Kind regards


Hello,

It doesn't sound impossible, but I see a number of pitfalls with such an approach.

First of all, stitching the GBuffer together may be harder than it sounds. A fisheye transformation requires resampling with a different sampling frequency in different parts of the image. When you apply that transformation to the final color, you can use bilinear filtering, or even prepare mipmaps and use them. But unlike the final color, the GBuffer contains geometry information. That means you can't really filter it, because filtering would create nonexistent surfaces. Yet if you don't filter at all (stick to nearest-neighbor sampling) and the fisheye distortion is high, you will probably get a patchy image. So research is required here, and the final result may not be satisfying.
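
To illustrate why filtering geometry data is dangerous, here is a toy example with made-up numbers:

```cpp
#include <cstdio>

int main()
{
    // Two depth samples that straddle a silhouette edge.
    const float foreground = 2.0f;   // e.g. a character 2 m away
    const float background = 50.0f;  // e.g. a wall 50 m away

    // A bilinear tap halfway across the edge averages the two:
    const float filtered = 0.5f * foreground + 0.5f * background;

    // 26 m corresponds to no real surface; any position reconstruction
    // or lighting computed from this sample describes phantom geometry.
    std::printf("filtered depth = %.1f m\n", filtered);
    return 0;
}
```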

Also, the normals are in view space, so you will probably need to unpack them, transform them into the main camera's view space, and pack them back, which I don't think will be a lossless transformation.
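
Roughly, that fixup per stitched texel would look like this. I assume a plain RGB8 normal encoding purely for illustration (the actual GBuffer packing may differ), but the lossy part, the 8-bit round trip, stays the same:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

struct Vec3 { float x, y, z; };
struct Mat3 { float m[9]; }; // row-major: face view space -> main view space

// Unpack an RGB8-encoded normal from [0,255] per channel back to [-1,1].
static Vec3 unpack_normal(uint32_t rgb)
{
    auto ch = [rgb](int i) {
        return ((rgb >> (i * 8)) & 0xffu) / 255.0f * 2.0f - 1.0f;
    };
    Vec3 n{ch(0), ch(1), ch(2)};
    const float len = std::max(1e-6f, std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z));
    return {n.x / len, n.y / len, n.z / len};
}

// Quantize back to 8 bits per channel -- this step is lossy.
static uint32_t pack_normal(const Vec3 &n)
{
    auto q = [](float v) {
        return (uint32_t)((v * 0.5f + 0.5f) * 255.0f + 0.5f);
    };
    return q(n.x) | (q(n.y) << 8) | (q(n.z) << 16);
}

static Vec3 rotate(const Mat3 &r, const Vec3 &n)
{
    return {r.m[0] * n.x + r.m[1] * n.y + r.m[2] * n.z,
            r.m[3] * n.x + r.m[4] * n.y + r.m[5] * n.z,
            r.m[6] * n.x + r.m[7] * n.y + r.m[8] * n.z};
}

// Per stitched texel: unpack in the source face's view space, rotate into
// the main camera's view space, pack again.
uint32_t reencode_normal(uint32_t packed, const Mat3 &face_to_main)
{
    return pack_normal(rotate(face_to_main, unpack_normal(packed)));
}
```

Whether the quantization error is visible probably depends on how aggressive the packing is; a higher-precision intermediate format could avoid the second quantization, at extra bandwidth cost.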

The other thing is that you'll need to modify all the lighting shaders, because lighting effects perform transformations between screen space and view space. You'll need to find all those transformations and inject your code there. If your fisheye transformation has no closed form, or is expensive to evaluate, you will probably have to compute and store the view-space position for every GBuffer sample, and then provide that texture to every effect (a sketch of this follows below).
Besides that, heavy modification of the shader codebase makes migrating to a new engine version much harder, because rendering in the engine evolves fast.
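
To illustrate the difference: with a regular perspective GBuffer, effects reconstruct the view-space position from depth through what is effectively the inverse projection matrix; with a fisheye GBuffer, that unprojection is the lens function. A hypothetical sketch of the two cases:

```cpp
struct Vec3 { float x, y, z; };

// Regular perspective GBuffer: screen uv + linear depth -> view-space
// position, via what amounts to the inverse projection matrix.
Vec3 view_pos_perspective(float u, float v, float linear_depth,
                          float tan_half_fov_x, float tan_half_fov_y)
{
    const float x = (2.0f * u - 1.0f) * tan_half_fov_x;
    const float y = (2.0f * v - 1.0f) * tan_half_fov_y;
    return {x * linear_depth, y * linear_depth, linear_depth};
}

// Fisheye GBuffer: the pixel -> view ray mapping is the lens function,
// not a matrix. If evaluating it per sample is too expensive, bake the
// normalized ray per pixel during stitching and store the distance
// along it instead of depth.
Vec3 view_pos_fisheye(const Vec3 &baked_ray, float distance_along_ray)
{
    return {baked_ray.x * distance_along_ray,
            baked_ray.y * distance_along_ray,
            baked_ray.z * distance_along_ray};
}
```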

So these are the main issues I can see here.
