christian.wolf2 Posted March 24, 2021

Hi,

I am currently trying my best to work with shaders, but I have some trouble understanding which direction to take. I have an RGBA32F texture that is defined via a material, and I want to access its data during the vertex shader pass. So far, no problems with other textures; there I can access the data via:

```
INIT_TEXTURE(0,PROCEDURAL_TEXTURE)
...
MAIN_BEGIN(VERTEX_IN,VERTEX_OUT)
	int2 resolution = textureResolution(PROCEDURAL_TEXTURE);
	float4 coordinate = float4(20.f,20.f,0.f,0.f); // just a sample to access the pixel at (20, 20)
	float4 pixelColor = TEXTURE(PROCEDURAL_TEXTURE,coordinate.xy);
MAIN_END
```

But it seems that access to the color information of RGBA32F textures is totally different. What I have tried:

```
#define USE_RW_TEXTURES
INIT_RW_TEXTURE_R32F(0,PROCEDURAL_TEXTURE)
...
MAIN_BEGIN(VERTEX_IN,VERTEX_OUT)
	int2 texture_dimension = textureResolution(PROCEDURAL_TEXTURE); // doesn't seem to work for RGBA32F?
	int2 sample_coordinate = int2(20,20);
	float4 pixel_color = TEXTURE_RW_LOAD_R32F(PROCEDURAL_TEXTURE,sample_coordinate.xy);
	float4 coordinate = float4(IN_ATTRIBUTE(0).xyz,1.f);
	coordinate.x = coordinate.x + pixel_color.x;
MAIN_END
```

Unfortunately, the shader won't compile when I add the line where I access the pixel_color data (coordinate.x = coordinate.x + pixel_color.x), and the editor console has no further information than "Failed to create vertex shader". As far as I understand from this thread (via Google Translate), the function TEXTURE_RW_LOAD_R32F returns just a float value. Is this correct? How do I get the appropriate pixel data from my texture otherwise?

Best
Christian
sweetluna Posted March 24, 2021

Hi Christian,

RW textures are used only when you need to read from or write to a specific pixel of a texture, and you have to use a render target to bind the texture using bindUnorderedAccessTexture.

Quote: "But it seems that access to the color information of RGBA32F textures is totally different."

I didn't catch that. You should be able to use a simple TEXTURE() in order to retrieve a float4. Yes, TEXTURE_RW_LOAD_R32F returns a float.

Also, you can use RenderDoc in order to debug shaders.

May RenderDoc/Nsight Graphics/Intel GPA bless you
christian.wolf2 Posted March 25, 2021 (edited)

Hi sweetluna,

thanks for your answer. Just for better understanding: do I need to use RW textures only when I want to read AND write to the same texture in the shader, or for reading OR writing of RGBA32F textures?

For the TEXTURE() sample, the shader won't build in the following scenario:

```
INIT_TEXTURE(1,ANIM_TEXTURE)

MAIN_BEGIN(VERTEX_OUT,VERTEX_IN)
	// Get transform with scale and rotation (without translation)
	float4x4 transform = s_instance_transforms[IN_INSTANCE];
	//int2 texture_dimension = textureResolution(TEXTURE(tex_running));
	int2 texture_resolution = textureResolution(ANIM_TEXTURE);
	float4 row_0 = getRow(transform, 0);
	float4 row_1 = getRow(transform, 1);
	float4 row_2 = getRow(transform, 2);

	// Perform Modelview-space transform
	float4 in_vertex = float4(IN_ATTRIBUTE(0).xyz,1.0f);
	float4 vertexAnimPos = TEXTURE(ANIM_TEXTURE,int2(10,10));
	in_vertex.x = in_vertex.x + vertexAnimPos.x;
	in_vertex.y = in_vertex.y + vertexAnimPos.y;
	in_vertex.z = in_vertex.z + vertexAnimPos.z;
	float4 position = mul4(row_0,row_1,row_2,in_vertex);

	// Set output UV
	float4 texcoord = IN_ATTRIBUTE(1);
	OUT_DATA(0) = texcoord;

	// Set output position
	OUT_POSITION = getPosition(position);
MAIN_END
```

Running materials_rebuild in the editor produces the following error:

```
Failed to compile vertex shader: shaders/vertex/procedural_base.vert
Compilation log:
(3238,25-90): error X4532: cannot map expression to vs_5_0 instruction set
hlsl 3238: float4 vertexAnimPos = s_texture_ANIM_TEXTURE.Sample(s_sampler_ANIM_TEXTURE, int2(10,10)).rgba;
Material::create_shader(): can't create shaders pass:"deferred" material:"procedural_base"
```

But if I comment out the lines where I set the in_vertex.x/y/z variables, so that only the TEXTURE() line remains, the shader builds fine. That's why I thought there is something special about RGBA32F textures.

Best
Christian

Edited March 25, 2021 by christian.wolf2
sweetluna Posted March 25, 2021

Hi Christian,

You are trying to sample a texture in the vertex shader, but derivatives aren't available in vertex shaders, so you have to specify the texture's mip level manually. Instead of using TEXTURE(texture, uv), use TEXTURE_BIAS(texture, uv, mip).

Regarding RW textures: use them when you need to write a pixel at a specified position. An RW texture can be read and written at any given time, but RW textures are available only in pixel and compute shaders. I also need to mention that in pixel shaders RW resources compete with render targets: you can't have 8 RW resources and 8 render targets (RTVs) at the same time, only 8 in total across both.

Sincerely

May RenderDoc/Nsight Graphics/Intel GPA bless you
christian.wolf2 Posted March 25, 2021

Hi sweetluna,

many thanks for the clarifications. For my purpose I don't need to write to the texture, so sticking with TEXTURE_BIAS should be suitable. With this information, the problem seems to lie somewhere else on my side.

So, just another question. From the example code for the deferred shader:

```
// describing the read-only custom mesh material to be used for static meshes,
// setting prefixes to be used in shaders to refer to textures and parameters
BaseMaterial custom_mesh_material <node=ObjectMeshStatic editable=false var_prefix=var texture_prefix=tex default=true>
{
	// enabling the deferred pass for our material and hiding it (will be invisible in the UnigineEditor)
	State deferred=1 <internal=true>

	// describing textures and parameters
	Group "Base"
	{
		Texture2D albedo="core/textures/common/grain.dds" <unit=0 pass=[deferred] tooltip="Albedo texture">
		Texture2D displacement="core/textures/common/white.dds" <unit=1 pass=[deferred] tooltip="Displacement map">
		Slider displacement_scale=0 <min=-0.5 max=0.5 tooltip="Displacement scale">
	}

	//////////////////////////////////////////////////////////////////////////
	// Passes
	//////////////////////////////////////////////////////////////////////////

	// describing the deferred pass with links to shaders to be used
	Pass deferred <defines="BASE_DEFERRED">
	{
		Vertex = "shaders/vertex/deferred.vert"
		Fragment = "shaders/fragment/deferred.frag"
	}

	// describing bindings for node types to which the material is to be applicable
	Bind ObjectMeshStatic=ObjectMeshDynamic
	Bind ObjectMeshStatic=ObjectMeshSkinned
}
```

Is the <unit=X> integer the same as the slot X in the vertex shader's INIT_TEXTURE(X,#Name)? Otherwise, how do I access those textures from the material setup? I also tried to use "tex_displacement" as input to my shader functions (like TEXTURE_BIAS(tex_displacement,coord,bias)), but that gives me an error as well.
And one question in advance: if I add the texture via code with material->setTexture(i, texture), is the first parameter the same as the texture slot?

Best
Christian
sweetluna Posted March 25, 2021

Hi Christian,

Texture slots are used only to tell the GPU where to look for a specific texture; they are not connected to the Material::setTexture() function.

Quote: "Is the <unit=X> integer the same as the slot X in the vertex shader's INIT_TEXTURE(X,#Name)?"

Yes, they are the same.

Quote: "I also tried to use 'tex_displacement' as input to my shader functions (like TEXTURE_BIAS(tex_displacement,coord,bias)), but that gives me an error as well."

What's the error?

Sincerely

May RenderDoc/Nsight Graphics/Intel GPA bless you
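To make the unit correspondence concrete, here is a minimal sketch pieced together from the snippets in this thread (the texture name, UV value, and mip value are illustrative, not from the original posts):

```
// material file: bind the texture to texture unit 1 for the deferred pass
Texture2D running="core/textures/common/white.dds" <unit=1 pass=[deferred] tooltip="Baked animation">

// vertex shader: the first argument of INIT_TEXTURE is that same unit number
INIT_TEXTURE(1,ANIM_TEXTURE)

MAIN_BEGIN(VERTEX_OUT,VERTEX_IN)
	// sample with an explicit mip level, since derivatives are unavailable in vertex shaders
	float2 uv = float2(0.5f, 0.5f);
	float4 animSample = TEXTURE_BIAS(ANIM_TEXTURE, uv, 0.0f);
MAIN_END
```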
christian.wolf2 Posted March 25, 2021

Okay, thanks again for the clarification, and a BIG SORRY from my side. I had used the line

```
Texture2D procedural="core/textures/common/white.dds" <unit=1 pass=[deffered] tooltip="Simple procedural texture">
```

without noticing any issue. I figured out that I had misspelled "deferred", so the texture couldn't be matched in any other file, but the log doesn't complain about it. Thanks for the other clarifications; now everything works as expected!

Best
Christian
christian.wolf2 Posted May 31, 2021

I just want to use this thread again because my "new" problem is closely related to the original one, so I hope that's not a problem.

During the last days I have been wrapping my head around transferring some additional information from code to my shader code via textures. Basically, I want to bake some vertex/bone information into a texture and access it in the vertex shader function to make some further calculations. As for storing the information in a texture, I don't really have any issue when using this code:

```
currentFrame = 0.f;
for (int i = 0; i < totalFrames; ++i)
{
	float curFrame = animatedMesh->setFrame(0, currentFrame, 0, totalFrames);
	for (int j = 0; j < vertices; ++j)
	{
		const vec3 vertexPosition = animatedMesh->getSkinnedVertex(j, 0);
		const vec3& skinnedVertexPosition = vertexPosition * scalingVector + vec3(0.5f); // wrap my vertex coordinate between 0 and 1

		Image::Pixel vertexPixel;
		vertexPixel.f.r = skinnedVertexPosition.x;
		vertexPixel.f.g = skinnedVertexPosition.y;
		vertexPixel.f.b = skinnedVertexPosition.z;
		vertexPixel.f.a = 1.f;
		instancedImage->set2D(j, i, vertexPixel);
	}
	currentFrame += precision;
}

// save the image to the folder
instancedImage->save("animated.dds");
```

No big deal so far. When trying to access it for debugging on the CPU side, I am again using the ImagePtr::get() function:

```
for (int i = 0; i < instancedMesh->getNumVertex(0); ++i)
{
	vec4 vertexPos = instancedImage->get(ivec2(i, currentFrame), 0);
	vertexPos = (vertexPos - vec4(0.5f)) * scale(3.f);
	Visualizer::renderPoint3D(Vec3(vertexPos.x, vertexPos.y, vertexPos.z), 0.01f, vec4_one);
}
```

Still no issue, and I always get the expected results. But transferring this piece of code to my vertex shader gives me some weird results.
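As a side note, the encode/decode pair used in the baking and readback code can be mirrored in plain Python. This is only an illustration: it assumes scalingVector is the uniform scalar 1/3, so that it matches the scale(3.f) applied in the readback; SCALE and the helper names are my own.

```python
# Sketch of the position <-> texel value mapping used when baking vertex
# positions into an RGBA32F texture. Assumes scalingVector == 1/SCALE.
SCALE = 3.0

def encode(v: float) -> float:
    # wrap an object-space coordinate into the [0, 1] range stored in the texture
    return v / SCALE + 0.5

def decode(p: float) -> float:
    # invert the mapping when reading the texture back
    return (p - 0.5) * SCALE

# round trip: a coordinate within [-SCALE/2, SCALE/2] survives encode/decode
v = 1.2
p = encode(v)
assert 0.0 <= p <= 1.0
assert abs(decode(p) - v) < 1e-6
```

The key point is simply that whatever scale and offset are applied before writing the pixel must be inverted exactly when sampling, on the CPU and in the shader alike.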
This is what I have tried so far:

```
MAIN_BEGIN(VERTEX_OUT,VERTEX_IN)
	// Get transform with scale and rotation (without translation)
	float4x4 transform = s_instance_transforms[IN_INSTANCE];
	float4 row_0 = getRow(transform, 0);
	float4 row_1 = getRow(transform, 1);
	float4 row_2 = getRow(transform, 2);

	float numberOfFrames = 25.0; // TODO: transfer to either a property or a texture
	float clampedTime = var_uniform_time - (floor(var_uniform_time / numberOfFrames) * numberOfFrames); // clamped time between 0 and numberOfFrames
	float currentFrame = floor(clampedTime);

	int2 texture_dimension = textureResolution(tex_running);
	float4 in_vertex = float4(IN_ATTRIBUTE(0).xyz,1.0f);

	// get the texture values based on vertex ID and current animation frame
	float2 uvpos = float2(float(IN_VERTEX_ID)/texture_dimension.x, currentFrame / texture_dimension.y);
	float4 vertexAnimPos = TEXTURE_BIAS_ZERO(tex_running,uvpos);
	in_vertex.x = vertexAnimPos.x;
	in_vertex.y = vertexAnimPos.y;
	in_vertex.z = vertexAnimPos.z;

	// Set output UV
	float4 texcoord = IN_ATTRIBUTE(1);
	OUT_DATA(0) = texcoord;

	// Define tangent basis
	float3 tangent,binormal,normal;

	// Get normal in object-space
	getTangentBasis(IN_ATTRIBUTE(2), tangent, binormal, normal);

	// Transform object-space TBN into camera-space TBN
	normal = normalize(mul3(row_0,row_1,row_2,normal));
	tangent = normalize(mul3(row_0,row_1,row_2,tangent));
	binormal = normalize(mul3(row_0,row_1,row_2,binormal));

	// Set output TBN matrix
	OUT_DATA(1) = float3(tangent.x, binormal.x, normal.x);
	OUT_DATA(2) = float3(tangent.y, binormal.y, normal.y);
	OUT_DATA(3) = float3(tangent.z, binormal.z, normal.z);
	OUT_DATA(4) = float3(IN_ATTRIBUTE(4)); //per_instance_data[IN_INSTANCE].color.rgb;

	// Perform Modelview-space transform and set output position
	float4 position = mul4(row_0,row_1,row_2,in_vertex);
	OUT_POSITION = getPosition(position);
MAIN_END
```

As sweetluna suggested in one of the earlier comments, I debugged the shader with RenderDoc and can already see that the line
```
float4 vertexAnimPos = TEXTURE_BIAS_ZERO(tex_running,uvpos);
```

returns wrong results, no matter which texel position I try to access. The image is in RGBA32F format, but that shouldn't be a real problem, as mentioned in the posts above. So what am I missing?

Thanks in advance
sweetluna Posted June 1, 2021

Hi,

could you please share your definition of the texture in the material? Also, have you checked the SRV in RenderDoc for that specific texture: is it defined there as expected?

May RenderDoc/Nsight Graphics/Intel GPA bless you
christian.wolf2 Posted June 1, 2021

Hi,

for sure:

```
// describing the read-only custom mesh material to be used for static meshes,
// setting prefixes to be used in shaders to refer to textures and parameters
BaseMaterial instanced_animation_base <node=ObjectMeshStatic editable=false var_prefix=var texture_prefix=tex default=true>
{
	// enabling the deferred pass for our material and hiding it (will be invisible in the UnigineEditor)
	State deferred=1 <internal=true>

	// describing textures and parameters
	Group "Base"
	{
		Float uniform_time=time <expression=1 hidden=1> // time-variable that grows during game time. Hidden for internal purposes
		Texture2D albedo="core/textures/common/grain.dds" <unit=0 pass=[deferred] tooltip="Albedo texture">
	}

	Group "Animation"
	{
		Texture2D running="core/textures/common/white.dds" <unit=1 pass=[deferred] tooltip="Running animation baked into texture">
	}

	//////////////////////////////////////////////////////////////////////////
	// Passes
	//////////////////////////////////////////////////////////////////////////

	// describing the deferred pass with links to shaders to be used
	Pass deferred <defines="BASE_DEFERRED">
	{
		Vertex = "shaders/vertex/instanced_animation_base.vert"
		Fragment = "shaders/fragment/instanced_animation_base.frag"
	}

	// describing bindings for node types to which the material is to be applicable
	Bind ObjectMeshStatic=ObjectMeshDynamic
	Bind ObjectMeshStatic=ObjectMeshSkinned
	Bind ObjectMeshStatic=ObjectExtern
}
```

Nothing special so far. Same with the SRV: the correct texture is loaded and can be seen during vertex debugging in RenderDoc.

What I have noticed is that if I add float2(0.5f,0.5f) to the texel coordinates when computing the UV position, it seems to work now. What I have learned from the Direct3D documentation (https://docs.microsoft.com/en-us/windows/win32/direct3d9/directly-mapping-texels-to-pixels) is that I need to do this for an exact texel-to-pixel mapping. Can you maybe confirm that this is the case?
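The half-texel offset described here can be sketched as follows (an illustration only; the resolution values are examples, not taken from the project):

```python
def texel_center_uv(x, y, width, height):
    # offset by half a texel before normalizing, so the UV lands on the
    # center of texel (x, y) instead of on its edge
    return ((x + 0.5) / width, (y + 0.5) / height)

# without the offset, UV (10/128, 10/32) sits on a texel boundary, where
# the sampler may round into the neighbouring texel; +0.5 hits the center
u, v = texel_center_uv(10, 10, 128, 32)
assert u == 10.5 / 128
assert v == 10.5 / 32
```

This matches the linked Direct3D article: texel centers sit at half-integer positions, so exact per-texel lookups through normalized UVs need the +0.5 before dividing by the resolution.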