
[SOLVED] how to get depth information out of the graphics card




Hi

 

We are searching for the best immersive experience. For this, we have experimented for several months with 3D surround and head tracking.

I think we have very interesting results. The most important and most difficult problem to solve, however, is head-tracking latency.

We can use a very fast head-tracking system, but the latency of the graphics system is difficult to cope with.

But I may have a solution for this. I want to add some hardware in between the graphics card and the displays that should correct the latency

of the graphics system. This hardware needs the depth information per pixel as extra information.

 

Is there a way we can program the shaders so that we not only get the final pixels from the card but also the depth information?

We need this on a pixel-by-pixel basis or line by line (e.g. side-by-side pixel and depth information).

Can this be done? If so, can this also be done in 3D surround mode?


This sample reconstructs 3D world position from a depth texture. You can use it as a reference for your post-processing shaders.

For stereo, the deferred_depth_0 and deferred_depth_1 textures need to be used in the last composite post-processing shader.


samples.zip
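
For reference, the core of the technique looks roughly like this; a simplified sketch in plain GLSL with placeholder uniform and texture names, not the actual sample code:

// Minimal sketch: reconstruct the view-space position of a pixel from its depth value.
uniform sampler2D s_depth;      // depth buffer, values in [0,1]
uniform mat4 u_iprojection;     // inverse of the camera projection matrix

varying vec2 v_texcoord;        // full-screen quad UV in [0,1]

void main() {
	float depth = texture2D(s_depth, v_texcoord).r;
	// back to normalized device coordinates in [-1,1]
	vec4 ndc = vec4(v_texcoord * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
	// unproject and apply the perspective division
	vec4 position = u_iprojection * ndc;
	position /= position.w;
	// position.xyz is now the view-space position; multiplying by the inverse
	// view matrix would give the world-space position
	gl_FragColor = vec4(position.xyz, 1.0);
}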


Thanks!

That is very interesting. I will check this sample.

 

But for the idea I have, I will need the depth information going to the output of the graphics card.

For instance, in the form of a side-by-side stereo output format with the 2D picture on the left and

the depth information on the right. Can this be done easily?


Cor,

 

Did not get the part about "depth information going to the output of the graphics card". In this sample you get the depth values from the graphics card, and that seems to be exactly what you need, as far as I understand.


Did not get the part about "depth information going to the output of the graphics card".

 

"I want to add some hardware inbetween the graphics card and the displays that should correct the latency

of the graphics system, This hardware needs as extra information the depth information per pixel."

 

Based on my understanding, he wants to have some (stereo) depth texture downloaded per frame (like a screenshot) so he can send this information to his extra piece of hardware for latency compensation. Sounds interesting, but I doubt that this will be feasible, as GPU-to-CPU texture transfer is a performance killer (so this approach would most probably introduce even higher latency and FPS render stalls).


Hi Ulf,

I do not need this information on the CPU. I would like this output on the display connector.

I am thinking of using the stereo output format (side-by-side output) and generating standard 2D pictures for the left-side output and

depth information for the right-side output. Then I could make some piece of hardware that is connected to this output and

generates a real stereo picture from it. It will not be a perfect stereo result, but for my goal probably good enough.

The advantage is that stereo does not cost extra GPU or CPU time this way. But the biggest advantage is that

I can also make this piece of hardware do some head-tracking corrections. This would result in an incredibly fast

head-tracking response.


Ok, so this would mean: render the scene to render buffer "left", copy the final scene depth texture to render buffer "right" on the GPU, and output both images as left/right video signals. I would guess that this will require some in-depth modifications of the engine render pipeline source code (though all technical parts are more or less available).
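
Side note (just an assumption on my side): a video signal only carries 8 bits per colour channel, so to keep useful precision the depth value would probably have to be packed into several channels on the GPU before it goes out, e.g. something like this in plain GLSL for a linear depth value in [0,1):

// Hypothetical packing: spread ~24 bits of depth over the RGB channels so the
// external hardware can reconstruct it from the video signal.
vec3 pack_depth(float depth) {
	vec3 enc = fract(depth * vec3(1.0, 255.0, 65025.0));
	enc.xy -= enc.yz / 255.0;   // remove the part already carried by the lower channels
	return enc;
}

// The external hardware (or a test shader) recovers the value with:
// depth = dot(enc, vec3(1.0, 1.0 / 255.0, 1.0 / 65025.0));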

 

As this is a quite unusual use-case, I don't think UNIGINE will implement this in the standard source codebase ?!?


As this is a quite unusual use-case, I don't think UNIGINE will implement this in the standard source codebase ?!?

 

But maybe UNIGINE can tell me whether this is possible and how to make it work.

 

The main issues I see:

- How to switch the output to side-by-side 3D output mode without the UNIGINE engine going into 3D rendering mode.

- How to get the depth information into the correct buffer in the correct format.


Thinking about it once again, maybe there is a quite simple solution for your special use-case (at least for the rendering part; the magic depth-processing hardware might be much more complicated to develop, but let's assume you can do it):

  • If your magic hardware device connected to GPU monitor output 2 can identify itself as a regular display with the same resolution (e.g. 1024x768) as the monitor connected to GPU monitor output 1, then this setup could be handled by Windows as a dual-monitor desktop in extended mode (e.g. 2048x768)
  • Simply create a NON-stereo UNIGINE render window covering both screens (e.g. 2048x768)
  • Create a WidgetSpriteViewport covering only the left half of the render window (e.g. 1024x768) and render your scene to this viewport. This rendering output should be available in the color and depth texture buffers for final screen post-processing
  • Use a slightly modified version of the post-processing shader provided above for the overall render window, which outputs the unchanged color texture buffer to the left window half and the depth values (from the left window half) to the right window half (see the sketch below)

At least in theory ;) you should see the rendering on your monitor, and your magic hardware box would get the depth image via the second monitor output. Also, performance-wise you would only have to render your scene once per frame.

Maybe the UNIGINE crew can check this idea for technical correctness ?!? I'm quite sure frustum could provide an example implementation for such a shader in 2 minutes...
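
In the meantime, a very rough sketch of what such a composite pass could look like (plain GLSL, untested; the texture and varying names are placeholders, not the actual UNIGINE shader interface):

uniform sampler2D s_color;    // color buffer of the left-half viewport
uniform sampler2D s_depth;    // matching depth buffer

varying vec2 v_texcoord;      // UV over the full double-width render window

void main() {
	// map both window halves onto the same scene UV range [0,1]
	vec2 uv = vec2(fract(v_texcoord.x * 2.0), v_texcoord.y);
	if (v_texcoord.x < 0.5) {
		// left half: pass the rendered image through unchanged
		gl_FragColor = texture2D(s_color, uv);
	} else {
		// right half: output the depth of the corresponding left-half pixel
		float depth = texture2D(s_depth, uv).r;
		gl_FragColor = vec4(vec3(depth), 1.0);
	}
}

The depth written to the right half here is the raw non-linear depth-buffer value; depending on what your hardware box expects, it might have to be linearized or packed across channels as mentioned above.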


That may be an interesting way to get the result out of the graphics card!

A complicating factor is that I have a setup with 3 monitors. They all need their own frustum.

They are all 1920x1080. I need pictures and depth information a little bigger than that because of the special processing.

Suppose I need 2300 x 1200 size information, then I would need 4600 x 1200 for each monitor when depth information is included.

I doubt a graphics card can do that. Are the graphics cards and the UNIGINE engine flexible enough to handle that?

I guess the monitor driver should be capable of handling this too. I can make the special hardware present itself to the graphics card

as a monitor of 4600 x 1200. Will that do the job?

3 weeks later...

Sounds interesting, but I doubt that this will be feasible, as GPU-to-CPU texture transfer is a performance killer (so this approach would most probably introduce even higher latency and FPS render stalls).

 

In the next SDK update, new functions will be added to the C++ API. They include access to shaders, etc.; they will also allow direct access to GPU textures. Exactly what's needed here.

 

Cor, you can check how side-by-side stereo (fragment_stereo_horizontal.shader) is actually done. There's no need to use WidgetSpriteViewport; it can simply be done in the composite shader. (That's Frustum's opinion, JIC :))
