Linear depth not linear

Godot Version

4.4.1

Question

I want to raycast to objects from the camera using the depth texture.
I'm using modified code from the advanced post-processing doc to display the linear depth from a secondary camera.

shader_type spatial;
// Prevent the quad from being affected by lighting and fog. This also improves performance.
render_mode unshaded, fog_disabled, blend_mix;

uniform sampler2D depth_texture : hint_depth_texture;

void vertex() {
	// Draw the quad directly in clip space so it always covers the screen.
	POSITION = vec4(VERTEX.xy, 1.0, 1.0);
}

void fragment() {
	// Only run for the secondary camera (visible layer 2).
	if ((int(CAMERA_VISIBLE_LAYERS) & (1 << 1)) == 0) {
		discard;
	}

	// Sample the raw (non-linear) depth buffer.
	float depth = texture(depth_texture, SCREEN_UV).x;
	vec3 ndc = vec3(SCREEN_UV * 2.0 - 1.0, depth);

	// Unproject to view space; -view.z is the linear depth in meters.
	vec4 view = INV_PROJECTION_MATRIX * vec4(ndc, 1.0);
	view.xyz /= view.w;
	float linear_depth = -view.z;

	// Unproject all the way back to world space.
	vec4 world = INV_VIEW_MATRIX * INV_PROJECTION_MATRIX * vec4(ndc, 1.0);
	vec3 world_position = world.xyz / world.w;

	// Visualize linear depth, scaled into 0..1 for the albedo channel.
	ALBEDO.rgb = vec3(linear_depth / 100.0);

	// Visualize world coordinates
	//ALBEDO.rgb = fract(world_position).xyz;
}
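
For context, this shader lives on the full-screen quad from the advanced post-processing doc. A sketch of that setup in code, assuming camera_3d is the secondary camera from the script below:

func _add_fullscreen_quad() -> void:
	# Equivalent to the doc's editor setup: a 2x2 quad with flipped faces,
	# drawn directly in clip space by the vertex() function above.
	var quad := MeshInstance3D.new()
	var mesh := QuadMesh.new()
	mesh.size = Vector2(2, 2)
	mesh.flip_faces = true
	quad.mesh = mesh
	quad.extra_cull_margin = 16384.0  # keep the clip-space quad from being frustum-culled
	quad.material_override = ShaderMaterial.new()  # assign the shader above here
	camera_3d.add_child(quad)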

Then, in GDScript, I have this get_depth() function to read the depth back from the texture.

extends SubViewport

@onready var camera_3d: Camera3D = $Camera3D

func get_depth(screen_pos: Vector2) -> float:
	size = get_parent().get_viewport().size
	render_target_update_mode = SubViewport.UPDATE_ONCE
	# Read the red channel back and undo the /100.0 scaling from the shader.
	var depth := get_texture().get_image().get_pixel(int(screen_pos.x), int(screen_pos.y)).r * 100.0
	return depth

# Called every frame. 'delta' is the elapsed time since the previous frame.
func _process(delta: float) -> void:
	# Mirror the main camera so the SubViewport renders the same view.
	camera_3d.global_transform = get_tree().root.get_viewport().get_camera_3d().global_transform
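
For reference, a minimal caller could look like this (the DepthViewport node path and the click handling are assumptions, not part of the original scene):

func _unhandled_input(event: InputEvent) -> void:
	# Hypothetical: query the depth viewport wherever the mouse is clicked.
	if event is InputEventMouseButton and event.pressed:
		print($DepthViewport.get_depth(event.position))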

But it didn’t get the right result. (Blue means it’s behind other objects.)

Since it’s linear depth, it’s supposed to work. Any idea why it doesn’t work?

Edit: I tried checking the depth at 1-meter intervals, and for some reason the depth isn't increasing by one per step; it isn't even linear:
0.0
9.80392172932625
15.2941182255745
18.823529779911
21.9607844948769
24.705882370472
27.0588248968124
29.4117659330368
31.3725501298904

In this:

[quote=“picode, post:1, topic:114794”]

func get_depth(screen_pos:Vector2):
	size = get_parent().get_viewport().size
	render_target_update_mode = SubViewport.UPDATE_ONCE
	var depth = get_texture().get_image().get_pixel(screen_pos.x,screen_pos.y).r*100.0
	return depth

[/quote]

Why the * 100.0? Is that to compensate for the shader doing linear_depth / 100.0?

It seems like this code is trying to take the depth range and jam duplicates of it into all of the albedo channels, but if linear_depth winds up being outside the range of 0.0 .. 1.0 this could truncate or quantize in any number of fun ways, depending on how the channel data for albedo is stored.

Yes.

but if linear_depth winds up being outside the range of 0.0 .. 1.0

You mean linear_depth/100.0?

this could truncate or quantize in any number of fun ways

But in my example, linear_depth/100.0 isn't even over 1.0 yet.

I did mean linear_depth / 100.0; the potential problem I’m seeing is, what happens if linear_depth is (say) 3000.0? Unless you have the far plane set at 100.0, you have the potential to get higher depth values, and they’ll be truncated/clamped/quantized within the shader at the point where you assign them to ALBEDO.
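
To make the quantization half of that concrete: if the channel is stored as 8 bits (the default for a viewport without HDR), everything written to ALBEDO snaps to 1/255 steps, so after the * 100.0 readback the depth can only move in increments of roughly 0.39 m. A toy illustration in plain GDScript, no viewport involved:

func quantize_8bit(value: float) -> float:
	# Simulate storing a 0..1 value in an 8-bit channel and reading it back.
	return round(clamp(value, 0.0, 1.0) * 255.0) / 255.0

func _ready() -> void:
	for d in [12.3, 12.5, 12.7]:
		# depth -> /100.0 in the shader -> 8-bit channel -> *100.0 in the script
		print(d, " -> ", quantize_8bit(d / 100.0) * 100.0)  # 12.5 and 12.7 come back identical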

Then I could probably just divide it by 3000. The thing is, even when the distance is under 100, it’s already not giving correct values.

You’re calculating world_position in the shader. If you assign the .xyz values of that to the .rgb values of ALBEDO (ideally normalizing for your world size), do you get the positions you’d expect?

Your depth texture only contains float values between 0.0 and 1.0, which correspond to your near and far planes; dividing those by anything before multiplying them by the right numbers is just wrong.
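
If you want to sanity-check that on the CPU, you can push a raw sample through the inverse projection the same way the shader does. A sketch, assuming camera_3d matches the camera that rendered the depth:

func linearize(raw_depth: float) -> float:
	# Same math as the shader: NDC -> view space, then -z is the distance.
	var inv_proj := camera_3d.get_camera_projection().inverse()
	var view := inv_proj * Vector4(0.0, 0.0, raw_depth, 1.0)  # NDC at the screen centre
	return -view.z / view.w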

I did this in the shader:
ALBEDO.rgb = vec3(world_position/100.0);

And this in the GDScript:

	var depth = get_texture().get_image().get_pixel(int(screen_pos.x), int(screen_pos.y)) * 100.0
	return Vector3(depth.r, depth.g, depth.b).distance_to(camera_3d.global_position)

The output is different, but still not right:
0.0
7.51224851608276
7.59329128265381
7.80270957946777
8.13058853149414
8.56333351135254
9.08597087860107
9.68395900726318
10.34423828125
11.0556535720825
11.8089656829834
12.596661567688
13.4126834869385
14.2521667480469
15.1112031936646
15.9866399765015
16.7055625915527

That’s not what I did, I divide the linear_depth, which is initially supposed to be not between 0 and 1. It was divided by 100 so that it can be stored in the Albedo channel. Then In GDScript, I multiplied it back by 100 to get the real depth back.

I’d suggest looking at the raw value of depth (or maybe depth * 100.0) and see if the world coords you’re getting out of the shader appear to make any sense.

What… here's just the raw depth; it's also not decreasing linearly:
0.0
0.92549020051956
0.92549020051956
0.70588237047195
0.59607845544815
0.52549022436142
0.4745098054409
0.43921568989754
0.40784314274788
0.38431373238564
0.3647058904171
0.34509804844856
0.32941177487373
0.31764706969261
0.30588236451149
0.29411765933037
0.28235295414925
0.27450981736183
0.26666668057442
0.258823543787
0.25098040699959
0.24705882370472
0.2392156869173
0.2352941185236
0.22745098173618
0.22352941334248
0.21960784494877
0.21568627655506

What I mean is, you’ve got (or had) world_position writing into ALBEDO. I’m suggesting you print out the world positions (as opposed to the depth) and see if those world positions make any sense.

The math is clearly going wrong somewhere, so the trick is to isolate where. Just looking at the final result of all the equations is debugging math in hard mode.


Ok, here’s the result, it’s the same a couple of times, then it changed for some reason.
(0.0, 0.0, 0.0)
(15) (0.0, 8.235294, 1.176471)
(24) (0.0, 7.843138, 1.176471)

But, like I said before, the raw depth is also not linear; that looks like the problem.

It might be possible that the world environment is making the image look different.

I’m pretty sure the raw depth isn’t linear if you’re doing stuff with the .w component, but in theory running everything back through the inverse transforms ought to spit out the world positions that were fed in originally (with some amount of float error).


Actually, dividing world_position by 1000 instead of 100 before storing it in the albedo channel gave only one result.

Edit: I think the world position is not right, because the output position is (0.0, 9.803922, 1.960784), but there isn't even anything at that position.

Well, that’s a thread to pull on, at least.

Setting the albedo to the world position without dividing it kind of makes the position better.
ALBEDO.rgb = world_position;

But even when playing around in the position range of 0 to 1, it still didn't position properly; there's a bit of an offset from where it needs to be, and I think the position scale is also not correct, so when moving around it gets left behind a bit.

The yellow is where it needs to be, the white is where it is.

I suppose you could scale it by a fudge factor to “fix” it, but it would be good to understand what’s messing it up.

Maybe feed in a grid of points and see if there’s a pattern to the output?
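
Building on the get_depth() readback, a sketch of that grid test (the step size is arbitrary):

func sample_grid(step: int = 64) -> void:
	# Print a coarse grid of readbacks so any spatial pattern stands out.
	for y in range(0, size.y, step):
		for x in range(0, size.x, step):
			print(Vector2i(x, y), " -> ", get_depth(Vector2(x, y)))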

You know what, I'll just continue tomorrow. And I might just ask the Godot devs.

Thanks for the help, though.

I figured it out. After multiple attempts at discussing it with AI, it turns out that all I needed to do was turn on use_hdr_2d in the viewport properties. That's it. :neutral_face:
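
In script form, the fix amounts to one property on the SubViewport. use_hdr_2d makes the viewport texture 16 bits per channel instead of 8, so the depth written to ALBEDO survives the readback instead of being quantized to 256 steps:

func _ready() -> void:
	# Without HDR the viewport texture is 8-bit per channel, which quantizes
	# depth / 100.0 before get_pixel() ever sees it.
	use_hdr_2d = true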