Shaders - Accessing Vertices in Model Space in fragment() instead of vertex()

Godot Version

4.6.beta

Question

I followed a tutorial on YouTube (DevPoodle’s “How to Procedurally Generate Terrain - Using Godot Engine”), which was immensely helpful. I had to do a lot of digging around (learning about tangent space, etc.), but I learned just as much, if not more, from “figuring out” some of the things that were glossed over, so I’m quite happy.

One element that was covered was how you need to reconstruct the model-space VERTEX in fragment() to color levels of the terrain differently, because VERTEX in that function is in view space, like so:

shader_type spatial;

uniform float height;
uniform sampler2D color_gradient;
uniform sampler2D normal_map;

void fragment() {
	vec4 world_vertex = INV_VIEW_MATRIX * vec4(VERTEX, 1.0); // view space -> world space
	vec3 model_vertex = (inverse(MODEL_MATRIX) * world_vertex).xyz; // world space -> model space
	float gradient_uv = (model_vertex.y / height) + 0.5;
	ALBEDO = texture(color_gradient, vec2(gradient_uv)).rgb;
	NORMAL_MAP = texture(normal_map, UV).xyz;
}


And it got me thinking - this is effectively reversing the calculations for VERTEX done between vertex() and fragment(). Can’t you just calculate the gradient in vertex() and use that value in fragment(), like so?

shader_type spatial;

uniform float height;
uniform sampler2D color_gradient;
uniform sampler2D normal_map;

varying float gradient_uv;

void vertex() {
    gradient_uv = (VERTEX.y / height) + 0.5;
}

void fragment() {
    ALBEDO = texture(color_gradient, vec2(gradient_uv, 0.0)).rgb;
    NORMAL_MAP = texture(normal_map, UV).xyz;
}

This seems to work fine, looks much simpler, and avoids “undoing” prior calculations to get the gradient. However, I read that while this will generally work, I may run into problems with the colors because the gradient is carried over from vertex().

I dunno - I’m fairly new to the topic. Would one expect this to be problematic? Would there be runtime problems or some inefficiency introduced by doing it the second way? It looks so much better, and while I was happy to learn about the different spaces (view, world, model, tangent) from this rabbit hole, I haven’t found solid intel on potential problems with calculating the gradient value in vertex().

A varying is linearly interpolated across the triangle. If the calculation you do in the fragment function is linear as well, there will be no difference, and your preference should in fact be to do it in the vertex function, because there’s no need to waste GPU cycles calculating it per pixel.
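
To spell out why, with the gradient from the first post: linear interpolation and an affine function commute, so along a triangle edge

	mix(y0, y1, t) / height + 0.5 == mix(y0 / height + 0.5, y1 / height + 0.5, t)

which means the interpolated per-vertex result is exactly what the per-pixel calculation would have produced anyway.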

Got it - so if I was doing something non-linear, like the following:

gradient_uv = sin(VERTEX.y * 10.0);

You might expect problems, but for linear solutions like the one in the first post it’s just fine (and, in fact, better)?

Correct!
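
And if you did want the non-linear version, one option (a minimal sketch, not from the tutorial) is to pass the raw model-space height through the varying, since that part is still linear, and apply the sin() per pixel:

shader_type spatial;

uniform sampler2D color_gradient;

varying float model_y; // raw model-space height; linear, so safe to interpolate

void vertex() {
	model_y = VERTEX.y;
}

void fragment() {
	// The non-linear part stays per pixel, so the result matches the all-in-fragment() version.
	float gradient_uv = sin(model_y * 10.0);
	ALBEDO = texture(color_gradient, vec2(gradient_uv, 0.0)).rgb;
}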

Perfect, thank you!

Btw, transforming VERTEX to some other space in the fragment function is always redundant. You can always do it in the vertex function and send it over via a varying. Especially if you need it in object space - then you can just assign VERTEX to a varying in the vertex function and you’re done. Doing two matrix multiplications per pixel for something you can get essentially for free is extremely wasteful.
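
For the terrain shader above, that boils down to something like this (a sketch reusing the uniforms from the first post):

shader_type spatial;

uniform float height;
uniform sampler2D color_gradient;
uniform sampler2D normal_map;

varying vec3 model_vertex; // VERTEX in vertex() is already in model (object) space

void vertex() {
	model_vertex = VERTEX; // no matrix math needed
}

void fragment() {
	float gradient_uv = (model_vertex.y / height) + 0.5;
	ALBEDO = texture(color_gradient, vec2(gradient_uv)).rgb;
	NORMAL_MAP = texture(normal_map, UV).xyz;
}

The two matrix multiplications per pixel disappear entirely; since positions are interpolated linearly, the rasterizer hands you the correct model-space position for free.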
