Depth testing in post shader precision



Hi all,

 

I'm doing a post-processing effect where I need to do depth testing so that the effect is not applied in front of objects that are very close to the camera.

 

In the fragment shader I'm using the 'deferred_depth' texture and the 'getDeferredDepth()' function to get the depth value.

 

This works fine for a small near/far range (near: 1, far: 100). However, I need a bigger value for the far clipping plane, and it seems the precision I get from 'deferred_depth' is pretty low.
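Roughly, the fetch in my fragment shader looks like this (a simplified sketch following the post-shader samples; 'effect_near_threshold' is just a made-up name for my cutoff value, so the actual code differs):

```hlsl
/* simplified sketch of the depth fetch; 'effect_near_threshold'
   is a hypothetical cutoff value, not an engine variable */
half4 deferred_depth = texture2DDeferredNorm(s_texture_1, s_texcoord_0.xy);
half depth = getDeferredDepth(deferred_depth); /* normalized [0,1] depth */

/* skip the effect where the scene is closer than the cutoff */
if(depth < effect_near_threshold)
    discard;
```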

 

How can I get better precision?

  Is the z-buffer available as a texture?

  Can I enable depth testing in the post shader (set SV_DEPTH)?

 

Thanks in advance


Hi,

 

I'm doing a post-processing effect where I need to do depth testing so that the effect is not applied in front of objects that are very close to the camera.

 

Could you please give us more information about the effect you are trying to achieve?

 

Have you tried using render_use_d32f to increase z-buffer precision? Also, please check fragment_filter_reflection.shader at line 83:

half depth = getDeferredDepth(texture2DDeferredNorm(s_texture_1,s_texcoord_0.xy)) * s_depth_range.y;

Is the z-buffer available as a texture?

Could you please explain in more detail for what purposes this texture would be used?

 

Can I enable depth testing in the post shader (set SV_DEPTH)?

Also, it is not absolutely clear what you mean by depth test here. Could you please give us more details about it?

 

Thanks!

How to submit a good bug report
---
FTP server for test scenes and user uploads:

Could you please give us more information about the effect you are trying to achieve?

 

 

I'm building some effects on the windscreen. I need to mask out objects between the camera and the windscreen.

 

half depth = getDeferredDepth(texture2DDeferredNorm(s_texture_1,s_texcoord_0.xy)) * s_depth_range.y;

 

 

What does 's_depth_range' contain?

 

I tried using,

 

half depth = getDeferredDepth(texture2DDeferredNorm(s_texture_1,s_texcoord_0.xy));

 

However, this makes no difference compared to what I used before:

 

float depth = getDeferredDepth(s_texture_1.Sample(s_sampler_1,IN.texcoord_0));

 

If I understand correctly, this only has 8-bit precision.

 

Is there any other depth information with higher precision that can be used for depth testing in a post shader?

 

Thanks


Hi,
 

If I understand correctly, this only has 8-bit precision.


Yes, all deferred buffers have RGBA8 format.

 

Unfortunately, from your description it is not completely clear what the shader should actually do. Do you have any screenshots or a more detailed description of the effect you are trying to achieve? Maybe the shader code will help us understand this issue better.

 

Thanks!


I'm doing a post-processing effect for a windscreen.

I project that onto a virtual windscreen.

Those projected values come into the fragment shader in 'windscreen_position'.

All I need to achieve is not to draw this effect on surfaces closer than the above-mentioned virtual windscreen.

I'm certain my projections are correct, because if I reduce the near and far values of the player this effect works as expected.

The deferred depth buffer precision is not enough for the near/far values we use (1-10000).

 

Is there any other source of depth (z-buffer) than the deferred depth texture?

or

Can I enable depth testing (SV_DEPTH) in a post shader?

 

I hope the following code helps:

float4 refracted_color; /* this is the color from the post-processing effect */

float scene_depth      = getDeferredDepth(texture2DDeferredNorm(s_texture_1, s_sampler_1, IN.texcoord_0)); /* expected to be the normalized depth value from the z-buffer */
float windscreen_depth = IN.windscreen_position.z; /* the normalized depth of the windscreen */

if(windscreen_depth < scene_depth)
    return refracted_color;
else
    return s_texture_0.Sample(s_sampler_0, IN.texcoord_0);

Thanks


Hi,

 

Is there any other source of depth (z-buffer) than the deferred depth texture?

I'm afraid not.

 

Also, it is not completely clear what real-world effect you are trying to achieve, sorry.

Could you please give us a more detailed description (with screenshots, if possible)? Maybe there is another solution available, but we can't give you any advice right now, because we don't yet understand exactly what effect you are trying to achieve.

 

Thanks!

  • 3 weeks later...

Hello Namal,

 

 

float scene_depth = getDeferredDepth(texture2DDeferredNorm(s_texture_1, s_sampler_1, IN.texcoord_0)); /* this is expected to be the normalized depth value in the z buffer */

 

I'm not sure what you mean by 'normalized depth values', but under the assumption that 0 means the near clipping plane and 1 the far clipping plane, then it is not normalized depth. In the example \data\samples\shaders\shaders\post\fragment_filter_coordinate.shader:

	half4 deferred_depth = texture2DDeferredNorm(s_texture_0,s_sampler_0,IN.texcoord_0);
	half depth = getDeferredDepth(deferred_depth) * s_depth_range.y;
	
	half3 direction = normalize(IN.texcoord_1);
	
	float3 world_position = mul4(s_imodelview,direction * depth);

the variable 'depth' should be in view space, if I have understood it correctly. So for your code that would mean that

float depthView = scene_depth * s_depth_range.y;

is the depth in view space.

 

At least that is what the example suggests.

 

Cheers

 Helmut


Hi Helmut, thanks for the clear explanation.

 

I initially tried to do the depth testing in the normalized depth range [0, 1]. But as you proposed, it's possible to do it in view space as well. I would like to give it a try, but I'm doubtful whether the deferred buffer's 8-bit precision is sufficient. I'll update with the results when I try it.


Sorry Namal, I just discovered that I was not 100% correct in my previous post. It seems that the deferred depth buffer stores the distance to the camera rather than the .z value in view space.

 

getDeferredDepth(deferred_depth) * s_depth_range.y

 

gives you the distance to the camera. In order to calculate the .z value in view space you need the view direction:

half4 deferred_depth = texture2DDeferredNorm(s_texture_0,s_texcoord_0.xy);
half depth = getDeferredDepth(deferred_depth) * s_depth_range.y;
	
half3 viewDirection = normalize(s_texcoord_1.xyz);

float viewSpaceZ = viewDirection.z * depth;

In case you are implementing a post-process shader, I would recommend using the \data\samples\shaders\shaders\post\vertex_filter_coordinate.shader vertex shader as a guideline. It puts the normalized screen texture coordinates into TEXCOORD0 and the view direction into TEXCOORD1 (btw. the camera is looking into -z, so viewSpaceZ would always be < 0 in the example above).
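Applied to the windscreen masking from earlier in the thread, the comparison could then be done on view-space z values. A rough sketch under those assumptions ('IN.windscreen_view_z' is a hypothetical input holding the windscreen's view-space z interpolated from the vertex shader, and 'refracted_color' is the effect color from your earlier post):

```hlsl
/* sketch: windscreen mask compared in view space;
   IN.windscreen_view_z and refracted_color are assumed inputs */
half4 deferred_depth = texture2DDeferredNorm(s_texture_1, s_sampler_1, IN.texcoord_0);
half dist = getDeferredDepth(deferred_depth) * s_depth_range.y; /* distance to camera */

half3 view_direction = normalize(IN.texcoord_1.xyz);
float scene_view_z = view_direction.z * dist; /* always < 0, camera looks into -z */

/* the scene lies in front of the windscreen when its view-space z
   is greater (closer to zero) than the windscreen's */
if(scene_view_z > IN.windscreen_view_z)
    return s_texture_0.Sample(s_sampler_0, IN.texcoord_0); /* unmodified scene */
return refracted_color; /* apply the windscreen effect */
```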

 

Regarding precision: I have observed that it is precise enough to use as a replacement for the depth test in the near distance. I have a far clipping distance of 60000 units, and I would guess it works well for about the first 30 units. Maybe that is enough for your windscreen.

 

cheers

 Helmut
