Good morning. I need help understanding a shader behavior. The object in the picture is a procedural planet. For aesthetic reasons I would like to "compress" the height of the mountains based on the observer's distance: flatten them completely at great distances and show them at their real height at ground level. The reason is that the planet is always far too small compared to reality…

Something didn't seem right, so I ran a test. The shader in the image is obviously useless on its own, but it serves to understand the behavior: by normalizing the vertex I expect a unit vector that, multiplied by a fixed number, should turn the planet into a perfect sphere, right? Instead, the planet looks exactly as it was built at the mesh level… Am I doing something wrong? Could someone help me understand? Thanks.
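To make the goal concrete, this is roughly what I'm picturing (just a rough sketch, not the shader from the screenshot; the uniform names, the numbers, and the Godot 4 built-ins `MODEL_MATRIX` and `CAMERA_POSITION_WORLD` are my own assumptions):

```
shader_type spatial;

// Placeholder uniforms, for illustration only.
uniform float planet_radius = 100.0;
uniform float near_dist = 150.0;  // at or below this distance: full mountain height
uniform float far_dist = 2000.0;  // at or beyond this distance: completely flat

void vertex() {
	// Planet origin in world space (assumes the mesh is centred on it).
	vec3 planet_center = MODEL_MATRIX[3].xyz;
	float cam_dist = distance(CAMERA_POSITION_WORLD, planet_center);

	// 0.0 near the surface, 1.0 far away.
	float flatten = smoothstep(near_dist, far_dist, cam_dist);

	// Blend the real (displaced) vertex towards the base sphere.
	vec3 base_sphere = normalize(VERTEX) * planet_radius;
	VERTEX = mix(VERTEX, base_sphere, flatten);
}
```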
What does the wireframe view look like? I would assume some sides of the sphere have more vertices than others. Keep in mind that "compressing" the height through a shader won't reduce the GPU workload; if anything it makes rendering slightly slower. Godot has built-in LODs that you probably want to use instead, and that might be butting heads with your shader here.
Adding to what gertkeno said, the vertex function works as expected for me. I tried it on a tessellated cube and got a perfectly round sphere when looking in wireframe mode.
To get the shading right, I also had to set the normals in the fragment function so they point straight outward from the sphere's centre instead of following the original mesh.
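A minimal sketch of that approach, assuming a Godot 4 spatial shader (so the `MODEL_MATRIX`/`VIEW_MATRIX` built-ins) and a varying to carry the spherified object-space position:

```
shader_type spatial;

varying vec3 obj_pos; // object-space position after the spherify step

void vertex() {
	// The test from the original post: collapse every vertex onto a fixed-radius sphere.
	VERTEX = normalize(VERTEX) * 10.0;
	obj_pos = VERTEX;
}

void fragment() {
	// For a sphere centred on the object's origin, the outward normal is just the
	// normalized position. NORMAL is expected in view space here, so transform the
	// object-space direction with MODEL_MATRIX and then VIEW_MATRIX (w = 0.0 keeps
	// it a direction).
	vec3 obj_normal = normalize(obj_pos);
	NORMAL = normalize((VIEW_MATRIX * MODEL_MATRIX * vec4(obj_normal, 0.0)).xyz);
}
```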
Hi, and thanks for your replies. I'm a bit embarrassed to admit it… it actually works perfectly. I just didn't expect the shadows to be rendered exactly as if the vertices were still in their original positions, so I was misled! I realized it when I looked at the planet's mesh in wireframe, as gertkeno suggested.
I’m attaching two images to show the final effect with and without the compression applied by the shader. You can see it on the moon in the background.