Coppel.Damien Posted June 11, 2019 Hi everyone, We are building an application with the following architecture: - One application with a GUI running on one computer - One instance of Unigine running on another computer. Is there any pre-built Unigine functionality that would allow us to stream the video (in near real time) over a network from the Unigine instance to the computer running the interface? The computer running the application would only send user input (moving a character, rotating the camera, etc.), and the Unigine instance would send back its video stream over the network. It would work similarly to game streaming services such as OnLive or Shadow.
morbid Posted June 12, 2019 Hello Damien, We provide generic network functionality with the Unigine::Socket class, but I'm afraid there's no built-in streaming solution. It will most likely be easier to do with third-party solutions. I'll try to get hints from our engine team to help you find a better direction. Thanks. How to submit a good bug report --- FTP server for test scenes and user uploads: ftp://files.unigine.com user: upload password: 6xYkd6vLYWjpW6SN
morbid Posted June 14, 2019 Hi Damien, Have you thought of CUDA as a solution? We provide a sample (Samples -> 3rd party -> NVidia). You can copy textures and then compress them into a video stream.
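For readers following this suggestion, the texture-copy step morbid describes can be done with CUDA's OpenGL interop API. This is a minimal sketch, not the Unigine sample itself: the texture id `glTex`, its dimensions, and the RGBA layout are assumptions, and how you obtain the texture from the engine depends on your setup.

```cpp
// Sketch: copy an OpenGL texture into a linear CUDA device buffer that a
// video encoder can consume. Assumes `glTex` is a valid GL_TEXTURE_2D with
// 8-bit RGBA pixels; error checking omitted for brevity.
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

void copyTextureToDeviceBuffer(unsigned int glTex, int width, int height,
                               unsigned char *d_pixels /* device buffer */)
{
    cudaGraphicsResource_t res = nullptr;

    // Register the GL texture with CUDA (ideally once, not every frame).
    cudaGraphicsGLRegisterImage(&res, glTex, GL_TEXTURE_2D,
                                cudaGraphicsRegisterFlagsReadOnly);

    // Map the resource and get the underlying CUDA array (mip level 0).
    cudaGraphicsMapResources(1, &res);
    cudaArray_t arr = nullptr;
    cudaGraphicsSubResourceGetMappedArray(&arr, res, 0, 0);

    // Device-to-device copy into the linear buffer; nothing touches the CPU.
    cudaMemcpy2DFromArray(d_pixels, width * 4, arr, 0, 0,
                          width * 4, height, cudaMemcpyDeviceToDevice);

    cudaGraphicsUnmapResources(1, &res);
    cudaGraphicsUnregisterResource(res);
}
```

The resulting linear buffer stays in GPU memory, so it can be handed straight to a hardware encoder for compression.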
Coppel.Damien Posted June 24, 2019 (edited) Hi, Thanks for the answer. You were right, CUDA did the trick. Thanks to the sample you mentioned, I was able to use CUDA functions to copy the textures from my Unigine app to another application that handles the streaming part, using only GPU memory. If anyone is interested in doing the same thing, note that you can't directly share a GPU memory address from one process to another; you have to use a built-in CUDA structure named cudaIpcMemHandle_t. You can find how to use it properly in NVIDIA's CUDA documentation. Edited June 24, 2019 by Coppel.Damien
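The pattern Damien describes can be sketched as follows. A raw device pointer is only valid in the process that allocated it, so the producer exports a `cudaIpcMemHandle_t` and the consumer opens it; the function names below are the real CUDA runtime IPC calls, but the split into two example functions and the transport of the handle bytes (pipe, socket, shared memory) are left to the reader.

```cpp
// Sketch: sharing a GPU buffer between two processes with CUDA IPC.
// Error checking omitted for brevity.
#include <cuda_runtime.h>
#include <cstddef>

// --- In the Unigine-side process (producer) ---
void exportFrameBuffer(size_t bytes)
{
    void *d_frame = nullptr;
    cudaMalloc(&d_frame, bytes);            // GPU buffer holding the frame

    cudaIpcMemHandle_t handle;
    cudaIpcGetMemHandle(&handle, d_frame);  // plain POD struct, safe to copy

    // Send the handle's bytes to the other process over any IPC channel,
    // e.g. write(sock, &handle, sizeof(handle)).
}

// --- In the streaming process (consumer) ---
void importFrameBuffer(const cudaIpcMemHandle_t &handle)
{
    void *d_frame = nullptr;
    cudaIpcOpenMemHandle(&d_frame, handle,
                         cudaIpcMemLazyEnablePeerAccess);

    // d_frame now aliases the producer's allocation; the encoder can read
    // it without the frame ever leaving GPU memory.

    cudaIpcCloseMemHandle(d_frame);         // release this process's mapping
}
```

Note that the producer must keep the allocation alive for as long as the consumer holds its mapping open.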