
[SOLVED] Deferred depth




Hi,

How do I reconstruct the true depth value from the depth buffer obtained during deferred shading?

Assuming that the format of the depth texture is RGBA8 and we have the same image obtained from the texture, how do I get the depth value in world coordinates?

I've found the function:

 

float getDeferredDepth(half4 deferred) {
    float3 factor = float3(16711680.0f / 8388608.0f, 65280.0f / 8388608.0f, 255.0f / 8388608.0f);
    #ifndef MULTISAMPLE_0
        deferred.x -= floor(deferred.x * (510.0f / 256.0f)) * (128.0f / 255.0f);
    #endif
    float distance = 1.0f - dot(deferred.xyz, factor);
    return distance * distance;
}
in the shader, but I don't know why it doesn't work when I do the same computations on the CPU.
Should I use an integer pixel format or a float one? Can someone briefly explain the meaning of these computations?

Thanks

The data/samples/shaders/shaders/post/fragment_filter_coordinate.shader file is a good example of coordinate reconstruction in a shader.

half depth = getDeferredDepth(deferred_depth) * s_depth_range.y;

s_depth_range.y is the far clipping plane.

float getDeferredDepth(half4 deferred) {
    // These constants rebuild a 24-bit fixed-point value from the three 8-bit channels:
    // dot(deferred.xyz, factor) == (r*65536 + g*256 + b) / 2^23 for r, g, b in 0-255.
    float3 factor = float3(16711680.0f / 8388608.0f, 65280.0f / 8388608.0f, 255.0f / 8388608.0f);
    #ifndef MULTISAMPLE_0
        // Strip the most significant bit of the red channel if it is set.
        deferred.x -= floor(deferred.x * (510.0f / 256.0f)) * (128.0f / 255.0f);
    #endif
    // Undo the encoding: (1 - value) squared gives normalized depth in the 0-1 range.
    float distance = 1.0f - dot(deferred.xyz, factor);
    return distance * distance;
}

This function performs the depth decoding step. The input values should be normalized to the 0-1 range, as the GPU does it: just divide the RGBA8 values by 255.0f.

The output value is in the 0-1 range and must be multiplied by the far clipping plane.
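
For reference, here is a minimal CPU-side sketch of the same decoding, assuming the RGBA8 bytes have already been read back from the texture. The function name and parameters are illustrative only, not engine API:

#include <cstdint>

// Minimal sketch: decode one RGBA8 texel the same way getDeferredDepth() does.
// Assumes MULTISAMPLE_0 is defined, so the high-bit fixup on the red channel is skipped.
float decodeDeferredDepth(uint8_t r, uint8_t g, uint8_t b, float far_clip_plane) {
    // Normalize to the 0-1 range, exactly as the GPU presents the texture to the shader.
    float x = r / 255.0f;
    float y = g / 255.0f;
    float z = b / 255.0f;

    // Same constants as the shader; together they rebuild a 24-bit fixed-point value:
    // x*fx + y*fy + z*fz == (r*65536 + g*256 + b) / 2^23.
    const float fx = 16711680.0f / 8388608.0f;
    const float fy = 65280.0f / 8388608.0f;
    const float fz = 255.0f / 8388608.0f;

    float distance = 1.0f - (x * fx + y * fy + z * fz);
    distance = distance * distance;      // normalized depth in the 0-1 range

    return distance * far_clip_plane;    // scale by the far clipping plane
}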

  • 2 weeks later...

Thanks,

I've got the deferred depth texture transferred to an image (the included depth.png seems OK).

Then:

p = depth_rsm_img.get2D(w,h);
temp_position.x = static_cast<float>(p.i.r);
temp_position.y = static_cast<float>(p.i.g);
temp_position.z = static_cast<float>(p.i.b);
temp_position /= 255.0f;
 

I copied the getDeferredDepth functionality (MULTISAMPLE_0 == true, so the #ifndef branch is skipped):

Vec3 xFactor(16711680.0f / 8388608.0f, 65280.0f / 8388608.0f, 255.0f / 8388608.0f);
float distance = 1.0f - dot33(temp_position, xFactor);
distance = distance * distance;

And just to debug the obtained values, I put them into an image (color.png):

 

p.f.r = distance;
p.f.g = distance;
p.f.b = distance;
p.i.a = 255;
color_rsm_img.set2D(w,h,p);
 

But the obtained results don't seem to be correct depth values (they aren't changing smoothly, see color.png). Any idea why? (I've also tried to manually invert the setDeferredDepth function without really improving the results.)

 

 

As for scaling the obtained depth values by s_depth_range.y: s_depth_range.y is equal to far_clip_plane obtained from the projection matrix:

float near_clip_plane = 1.0f;
float far_clip_plane = 1.0f;
decomposeProjection(projection, near_clip_plane, far_clip_plane);
Shouldn't the scaling be near_clip_plane + distance * (far_clip_plane - near_clip_plane) instead of distance * far_clip_plane?
Either way it doesn't work, probably due to an error in the depth reconstruction.
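
Just to make the comparison concrete, here is a small sketch of the two candidate scalings (assuming distance is the 0-1 value returned by getDeferredDepth; these helper names are illustrative, not engine API):

// Two candidate ways to turn the decoded 0-1 distance into a linear depth value.
float scaleByFar(float distance, float far_clip_plane) {
    return distance * far_clip_plane;
}

float scaleByNearFar(float distance, float near_clip_plane, float far_clip_plane) {
    return near_clip_plane + distance * (far_clip_plane - near_clip_plane);
}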
 
 
 

 

 

(image attachments referenced above: depth.png, color.png)


color.png is probably incorrect because of a wrong texture format. Still, the question is whether near_clip_plane + distance * (far_clip_plane - near_clip_plane), rather than distance * far_clip_plane, is the correct depth value. The first one seems more logical to me. Anyway, I haven't resolved the issue yet.

 

  • 1 month later...

Hi Maciek,

 

You're absolutely right about the near_clip_plane + distance * (far_clip_plane - near_clip_plane) formula. By the way, if you take a closer look at the core/shaders/default folder, you'll notice that in most cases the world position is not needed at all.

 

There is only one place where the world position is restored according to the formula distance * far_clip_plane: decals. If you look at the decal vertex shader (core/shaders/default/decal/vertex_deferred.shader), you'll notice that it divides the screen-space vertex by the far plane and then multiplies it back in the fragment shader, so it becomes a screen-space coordinate again. I think the division is used here for better precision during interpolation (please correct me if I'm wrong).
