Need advice on Compositor workflow

    public override void _Process(double delta)
    {
        compositorMain.CompositorEffects[0].Set("zone_viewport_texture", RenderingServer.TextureGetRdTexture(this.GetTexture().GetRid()));
    }

Didn’t work – still a black texture. Here’s the .glsl shader (zoneImage is the uniform that holds the viewport texture):

#[compute]
#version 460

layout(local_size_x = 8, local_size_y = 8, local_size_z = 1) in;
    
layout(rgba8, set = 0, binding = 0) uniform image2D colorImage;

layout(rgba8, set = 1, binding = 1) uniform image2D blurBuffer;

layout(rgba16f, set = 2, binding = 2) uniform image2D zoneImage;
	
layout(push_constant, std430) uniform Params
{
	vec2 raster_size;
	vec2 effect_size;
	vec2 direction;
	vec2 buf;		// Padding: the push constant block size must be a multiple of 16 bytes.
} params;

// Source: https://blog.frost.kiwi/dual-kawase/#separable-gaussian-blur
float gaussianWeight(float x, float sigma)
{
	/* e ^ ( - x² / 2 σ² ) */
	return exp(-(x * x) / (2.0 * sigma * sigma));
}

void main()
{
    ivec2 uv = ivec2(gl_GlobalInvocationID.xy);
    vec2 uv_norm = vec2(uv) / params.raster_size;
	ivec2 size = ivec2(params.raster_size);

	// Prevent reading/writing out of bounds.
	if (uv.x >= size.x || uv.y >= size.y) {
		return;
	}
    
    vec4 color = imageLoad(colorImage, uv);
    
    /* Variable to hold our final color for the current pixel */
	vec4 sum = vec4(0.0);
	/* Sum of all weights */
	float weightSum = 0.0;

	int kernel_size = 9;
	vec2 dir = params.direction;
	vec2 frameSizeRCP = vec2(1.0 / size.x, 1.0 / size.y);

	/* Sample along the direction vector (horizontal or vertical) */
	for (int i = -kernel_size; i <= kernel_size; ++i) {
		/* Calculate the required weight for this 1D sample */
		float w = gaussianWeight(float(i), 3.0);
		
		/* Offset from the current pixel along the specified direction */
		ivec2 offset = ivec2(vec2(i) * dir);

		/* Read and sum up the contribution of that pixel, weighted */
		sum += imageLoad(colorImage, uv + offset) * w;
		weightSum += w;
	}

	vec4 blurred = (sum / weightSum);

	// vec4 zoneImage = texture(zoneImage, uv_norm);
	vec4 zone = imageLoad(zoneImage, uv);

	// Write back to our color buffer.
	imageStore(colorImage, uv, zone);
}

Not sure why you’re working with so many moving parts at this stage. Bring it down to a minimal case.

First, set up a compositor shader that just outputs a constant color to verify that the shader is properly invoked per pixel. If that works, as a next iteration, set up a barebones passthrough shader that just displays the viewport texture.

So do you have the constant color shader operational?

Like I said previously, I got the multi-pass blur working. The problem is regarding the use of viewport textures in compositor effects.

Changing the shader to output the blurred result produces the following blurred image.

I feel like this is what I’ve been doing. If you know what a barebones setup (that works) looks like, maybe you could share one?

Post your passthrough shader code, your _render_callback() code, your viewport render target properties in the inspector (including the render preview), and the results you’re seeing in the main viewport.

You know what – I’m tired of this. The only thing that can help me here is someone explaining how to utilize a viewport’s texture in the compositor. If you can’t do that (or can’t provide some sample code for how to do so), you can’t really help me.

Fair enough. I used this on multiple occasions and even posted a complete solution for a user here some weeks ago. You might want to search the forum. It does work as expected, you probably made an oversight somewhere.

You can post your minimal example if you want to get it looked at.
I won’t be posting any additional ready-made code.

Here


Good news! I got a compositor effect that successfully renders the texture of a viewport created via RenderingServer.viewport_create()!

Godot_Compositor_ViewportEffect

As you can see, the texture provides the correct color space. However, there is an issue with the pixel alignment which causes a minor artefact especially visible on edges. I’m not sure if this is a result of incorrect usage of UVs, or if it has to do with how the texture is sampled. It does look like the viewport texture is offset by half a pixel in the y-direction – not sure.


While this is good news, I could not get the compositor effect to run in the editor. Whenever I would set the viewport to be active (i.e. invoking RenderingServer.viewport_set_active()), Godot would just crash. I’m not sure why. If you have an idea, please let me know.

The effect runs solely using the scripts seen below; no specific node tree setup is required beyond a Camera3D with the below compositor effect applied to its compositor. Objects are rendered to either viewport based on their visibility layers (layer 1 is the default, and layer 2 is used by the new viewport’s camera).

Compositor Effect Script
extends CompositorEffect
class_name ViewportEffect

var shader_file = preload("res://shaders/viewport_shader.glsl")
var rd: RenderingDevice
var shader: RID
var pipeline: RID

var vp_tex: RID
var linear_sampler: RID
var nearest_sampler: RID

var viewport: RID
var vp_cam: RID

@export_range(0.0, 1.0, 0.0) var cutoff_point: float = 0.5

var root: Viewport:
	get:
		if (Engine.is_editor_hint()):
			return EditorInterface.get_editor_viewport_3d()
		else:
			var tree: SceneTree = Engine.get_main_loop() as SceneTree
			
			print("Main Loop is tree: %s" % (Engine.get_main_loop() is SceneTree))
			print("Root is valid: %s" % (tree.root != null))
			print("Root viewport is valid: %s" % (tree.root.get_viewport() != null))
			
			return tree.root.get_viewport()

func _init():
	effect_callback_type = CompositorEffect.EFFECT_CALLBACK_TYPE_POST_SKY
	rd = RenderingServer.get_rendering_device()
	shader = rd.shader_create_from_spirv(shader_file.get_spirv())
	pipeline = rd.compute_pipeline_create(shader)
	
	# Create samplers
	var sampler_state = RDSamplerState.new()
	sampler_state.min_filter = RenderingDevice.SAMPLER_FILTER_LINEAR
	sampler_state.mag_filter = RenderingDevice.SAMPLER_FILTER_LINEAR
	linear_sampler = rd.sampler_create(sampler_state)
	sampler_state.min_filter = RenderingDevice.SAMPLER_FILTER_NEAREST
	sampler_state.mag_filter = RenderingDevice.SAMPLER_FILTER_NEAREST
	nearest_sampler = rd.sampler_create(sampler_state)
	
	# Defer viewport setup to make sure the scene tree has been created and initialized.
	call_deferred("setup_viewport")

func setup_viewport():
	
	vp_cam = RenderingServer.camera_create()
	viewport = RenderingServer.viewport_create()
	
	# Configure camera
	var cam_main = root.get_camera_3d()
	RenderingServer.camera_set_transform(vp_cam, cam_main.global_transform)
	RenderingServer.camera_set_perspective(vp_cam, cam_main.fov, cam_main.near, cam_main.far)
	RenderingServer.camera_set_cull_mask(vp_cam, 1 << 1)
	
	# Configure viewport
	RenderingServer.viewport_attach_camera(viewport, vp_cam)
	RenderingServer.viewport_set_update_mode(viewport, RenderingServer.VIEWPORT_UPDATE_ALWAYS)
	RenderingServer.viewport_set_clear_mode(viewport, RenderingServer.VIEWPORT_CLEAR_ALWAYS)
	# Failing to set this crashes the game at runtime.
	RenderingServer.viewport_set_scenario(viewport, root.world_3d.scenario)
	# Setting this crashes the editor (i.e. when this script is a @tool script).
	RenderingServer.viewport_set_active(viewport, true)
	#RenderingServer.viewport_set_use_hdr_2d(viewport, true)
	#RenderingServer.viewport_set_parent_viewport(viewport, root)
	
	update_viewport_size(root.size)
	
func update_viewport_size(size: Vector2i):
	print("Updating viewport size (new: %s)" % size)
	#mutex.lock()
	RenderingServer.viewport_set_size(viewport, size.x, size.y)
	#mutex.unlock()
	
	if (vp_tex.is_valid()):
		RenderingServer.free_rid(vp_tex)
	
	vp_tex = RenderingServer.texture_get_rd_texture(RenderingServer.viewport_get_texture(viewport), true)

func _render_callback(callback_type, render_data):
	# get frame buffers
	var render_scene_buffers: RenderSceneBuffersRD = render_data.get_render_scene_buffers()

	var tf = rd.texture_get_format(vp_tex)
	#print("Viewport texture format: %s" % (tf.format))
	#print("Viewport texture size: %s" % Vector2i(tf.width, tf.height))
	var intern_size = render_scene_buffers.get_internal_size()
	if (tf.width != intern_size.x || tf.height != intern_size.y):
		update_viewport_size(intern_size)

	# main render image uniform
	var u_main: RDUniform = RDUniform.new()
	u_main.uniform_type = RenderingDevice.UNIFORM_TYPE_IMAGE
	u_main.binding = 0
	u_main.add_id(render_scene_buffers.get_color_layer(0))
	# vp render image uniform
	var u_vp: RDUniform = RDUniform.new()
	u_vp.uniform_type = RenderingDevice.UNIFORM_TYPE_SAMPLER_WITH_TEXTURE
	u_vp.binding = 1
	u_vp.add_id(nearest_sampler)
	u_vp.add_id(vp_tex)
	# uniform set
	var uniform_set = UniformSetCacheRD.get_cache(shader, 0, [u_main, u_vp])

	# calculate number of workgroups
	var image_size = render_scene_buffers.get_internal_size()
	var wgroups_count_x = (image_size.x - 1) / 8 + 1
	var wgroups_count_y = (image_size.y - 1) / 8 + 1

	# size uniform (for split screen mixing and invocation "crop")
	var push_constant: PackedFloat32Array = PackedFloat32Array()
	push_constant.push_back(image_size.x)
	push_constant.push_back(image_size.y)
	push_constant.push_back(cutoff_point)
	push_constant.push_back(0.0)

	# execute shader
	var compute_list:= rd.compute_list_begin()
	rd.compute_list_bind_compute_pipeline(compute_list, pipeline)
	rd.compute_list_bind_uniform_set(compute_list, uniform_set, 0)
	rd.compute_list_set_push_constant(compute_list, push_constant.to_byte_array(), push_constant.size() * 4)
	rd.compute_list_dispatch(compute_list, wgroups_count_x, wgroups_count_y, 1)
	rd.compute_list_end()
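A note on the workgroup math in _render_callback() above: `(n - 1) / 8 + 1` with integer division is ceiling division, so the dispatch always covers the whole image even when its size is not a multiple of the 8×8 local size. The identity, checked in Python (floor division stands in for GDScript's integer division):

```python
import math

def workgroups(pixels: int, local_size: int = 8) -> int:
    # Same expression as the GDScript: (image_size - 1) / 8 + 1 on integers
    return (pixels - 1) // local_size + 1

# Matches ceil(n / 8) for any positive size
for n in (1, 7, 8, 9, 1080, 1081, 1920):
    assert workgroups(n) == math.ceil(n / 8)
```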

GLSL Shader
#[compute]
#version 460

layout(local_size_x = 8, local_size_y = 8, local_size_z = 1) in;
layout(rgba16f, set = 0, binding = 0) uniform image2D screen;
layout(set = 0, binding = 1) uniform sampler2D viewport;

layout(push_constant) uniform Params {
	vec2 image_size;
	vec2 offset;    // ...and padding (only x-value is used)
} params;

void main()
{
    ivec2 uv = ivec2(gl_GlobalInvocationID.xy);
    vec2 uv_norm = uv / params.image_size;
    
    // SCREEN and VIEWPORT texture
    vec4 color = imageLoad(screen, uv);
    vec4 vp = texture(viewport, uv_norm);
    
    // SCREEN/VIEWPORT blending
    float t = step(params.offset.x * params.image_size.x, uv.x);
    vec4 newColor = vp * t + color * (1.0 - t);
    
    // Threshold line
    const float lineWidth = 10.0;
    float lineMask = clamp(abs((params.offset.x * params.image_size.x) - uv.x) / lineWidth, 0.0, 1.0);
    
    imageStore(screen, uv, newColor * lineMask);
}
Control Script (for controlling the effect)
extends Camera3D

func _process(delta: float) -> void:
	
	if (Input.is_mouse_button_pressed(MOUSE_BUTTON_LEFT)):
		var norm_mouse_pos = get_viewport().get_mouse_position() / Vector2(get_viewport().size)
		if (compositor.compositor_effects[0] != null):
			compositor.compositor_effects[0].cutoff_point = norm_mouse_pos.x

Use image2D instead of sampler2D. Sampler is likely doing some interpolation whereas image will work directly on pixels.

The setup in my example works for editor and runtime camera. You may want to copy the setup verbatim and modify from there.

I don’t think your node-based example applies here. While it was very helpful in getting this effect working, I tried to insert the viewport texture (from RenderingServer.viewport_get_texture()) into the shader as an image uniform, but it refused to work – the game simply crashes on startup.

It doesn’t matter if it’s node based or not. Nodes use the rendering server the same way your code uses it, just that your code might have forgotten to make a call or two.

As said multiple times above, you can’t just send RenderingServer.viewport_get_texture() to the shader; you need to go through texture_get_rd_texture(). That’s precisely what my example does, and it wouldn’t make any difference if the viewport was created by a node instead of by script code. The problem is likely in your viewport creation/setup code. As a test, set up a viewport via code and just display its texture on a texture rect, without touching the compositor. If it displays properly, it’ll likely work properly with the compute shader too.

Do I need to make that example as well? :smiley:

Also note that your shader is not “cropping” the invocations properly. Invocations are dispatched in multiples of 8 for performance reasons, but texture sizes might not be. If you don’t check for this, the shader will try to write across buffer bounds, potentially resulting in all sorts of undefined behavior. Look at the example shader code: the early exit at the top of main() prevents out-of-range invocations from causing buffer overflows. You should implement that to avoid problems down the line.
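To make the over-coverage concrete: with an 8×8 local size, the number of launched invocations per axis is the workgroup count times 8, which can exceed the image size by up to 7. A Python sketch with a hypothetical 1081×721 render target:

```python
def launched_invocations(width: int, height: int, local: int = 8):
    # Workgroup counts, as computed in the compositor effect script
    gx = (width - 1) // local + 1
    gy = (height - 1) // local + 1
    # Total threads actually launched along each axis
    return gx * local, gy * local

cx, cy = launched_invocations(1081, 721)
# 7 extra columns and 7 extra rows of threads run with no pixel behind them;
# without the early-exit bounds check they would write out of bounds.
assert (cx, cy) == (1088, 728)
```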

You don’t need to help if you don’t want to. You’re continuously operating on your own interpretation of my problem instead of looking at the things I provide. If you looked at the code, it’s plainly visible that I utilize RenderingServer.texture_get_rd_texture().

Moreover, if your only real advice is to simplify my setup, there’s no point. Prior to implementing my current result, I had already copied the node-based setup you described in the other post (which worked, btw). The only thing is that that’s not what I’m going for.

You clearly don’t know yet how to make it work without nodes, and that’s fine – neither do I. But then don’t start being a smart-ass to someone looking for help by asking if you really need to make another example as well. I’m on shaky ground here and I don’t need your arrogant attitude and your inconsequential tips that, if you tested them yourself, you know don’t make a difference.

And you’re making helluva lot of stupid personal assumptions about me. Let me return the favor: Your debug kung fu is weak

Change the shader uniform back to image and replace the compositor effect with the following.
Since you’re too smart for my explanations, diff it yourself:

extends CompositorEffect
class_name ViewportEffect

var shader_file = preload("res://shaders/viewport_shader.glsl")
var rd: RenderingDevice
var shader: RID
var pipeline: RID

var vp_tex: RID
var linear_sampler: RID
var nearest_sampler: RID

var viewport: RID
var vp_cam: RID

@export_range(0.0, 1.0, 0.0) var cutoff_point: float = 0.5

var root: Viewport:
	get:
		if (Engine.is_editor_hint()):
			return EditorInterface.get_editor_viewport_3d()
		else:
			var tree: SceneTree = Engine.get_main_loop() as SceneTree
			
			print("Main Loop is tree: %s" % (Engine.get_main_loop() is SceneTree))
			print("Root is valid: %s" % (tree.root != null))
			print("Root viewport is valid: %s" % (tree.root.get_viewport() != null))
			
			return tree.root.get_viewport()

func _init():
	effect_callback_type = CompositorEffect.EFFECT_CALLBACK_TYPE_POST_TRANSPARENT
	rd = RenderingServer.get_rendering_device()
	shader = rd.shader_create_from_spirv(shader_file.get_spirv())
	pipeline = rd.compute_pipeline_create(shader)
	
	# Create samplers
	var sampler_state = RDSamplerState.new()
	sampler_state.min_filter = RenderingDevice.SAMPLER_FILTER_LINEAR
	sampler_state.mag_filter = RenderingDevice.SAMPLER_FILTER_LINEAR
	linear_sampler = rd.sampler_create(sampler_state)
	sampler_state.min_filter = RenderingDevice.SAMPLER_FILTER_NEAREST
	sampler_state.mag_filter = RenderingDevice.SAMPLER_FILTER_NEAREST
	nearest_sampler = rd.sampler_create(sampler_state)
	
	# Defer viewport setup to make sure the scene tree has been created and initialized.
	call_deferred("setup_viewport")

func setup_viewport():
	
	vp_cam = RenderingServer.camera_create()
	viewport = RenderingServer.viewport_create()
	
	# Configure camera
	var cam_main = root.get_camera_3d()
	RenderingServer.camera_set_transform(vp_cam, cam_main.global_transform)
	RenderingServer.camera_set_perspective(vp_cam, cam_main.fov, cam_main.near, cam_main.far)
	#RenderingServer.camera_set_cull_mask(vp_cam, 1 << 1)
	
	# Configure viewport
	RenderingServer.viewport_attach_camera(viewport, vp_cam)
	RenderingServer.viewport_set_update_mode(viewport, RenderingServer.VIEWPORT_UPDATE_ALWAYS)
	RenderingServer.viewport_set_clear_mode(viewport, RenderingServer.VIEWPORT_CLEAR_ALWAYS)
	# Failing to set this crashes the game at runtime.
	RenderingServer.viewport_set_scenario(viewport, root.world_3d.scenario)
	# Setting this crashes the editor (i.e. when this script is a @tool script).
	RenderingServer.viewport_set_active(viewport, true)
	#RenderingServer.viewport_set_use_hdr_2d(viewport, true)
	#RenderingServer.viewport_set_parent_viewport(viewport, root)
	
	update_viewport_size(root.size)
	
func update_viewport_size(size: Vector2i):
	print("Updating viewport size (new: %s)" % size)
	#mutex.lock()
	RenderingServer.viewport_set_size(viewport, size.x, size.y)
	#mutex.unlock()
	
	if (vp_tex.is_valid()):
		RenderingServer.free_rid(vp_tex)
	
	vp_tex = RenderingServer.texture_get_rd_texture(RenderingServer.viewport_get_texture(viewport))

func _render_callback(callback_type, render_data):
	# get frame buffers
	var render_scene_buffers: RenderSceneBuffersRD = render_data.get_render_scene_buffers()

	var tf = rd.texture_get_format(vp_tex)
	#print("Viewport texture format: %s" % (tf.format))
	#print("Viewport texture size: %s" % Vector2i(tf.width, tf.height))
	var intern_size = render_scene_buffers.get_internal_size()
	if (tf.width != intern_size.x || tf.height != intern_size.y):
		update_viewport_size(intern_size)

	# main render image uniform
	var u_main: RDUniform = RDUniform.new()
	u_main.uniform_type = RenderingDevice.UNIFORM_TYPE_IMAGE
	u_main.binding = 0
	u_main.add_id(render_scene_buffers.get_color_layer(0))
	# vp render image uniform
	var u_vp: RDUniform = RDUniform.new()
	u_vp.uniform_type = RenderingDevice.UNIFORM_TYPE_IMAGE
	u_vp.binding = 1
	#u_vp.add_id(nearest_sampler)
	u_vp.add_id(vp_tex)
	# uniform set
	var uniform_set = UniformSetCacheRD.get_cache(shader, 0, [u_main, u_vp])

	# calculate number of workgroups
	var image_size = render_scene_buffers.get_internal_size()
	var wgroups_count_x = (image_size.x - 1) / 8 + 1
	var wgroups_count_y = (image_size.y - 1) / 8 + 1

	# size uniform (for split screen mixing and invocation "crop")
	var push_constant: PackedFloat32Array = PackedFloat32Array()
	push_constant.push_back(image_size.x)
	push_constant.push_back(image_size.y)
	push_constant.push_back(cutoff_point)
	push_constant.push_back(0.0)

	# execute shader
	var compute_list:= rd.compute_list_begin()
	rd.compute_list_bind_compute_pipeline(compute_list, pipeline)
	rd.compute_list_bind_uniform_set(compute_list, uniform_set, 0)
	rd.compute_list_set_push_constant(compute_list, push_constant.to_byte_array(), push_constant.size() * 4)
	rd.compute_list_dispatch(compute_list, wgroups_count_x, wgroups_count_y, 1)
	rd.compute_list_end()


Looks like it’s the srgb parameter of RenderingServer.texture_get_rd_texture() that crashes the game – but only if srgb=true and the texture is used as an image uniform. Perhaps there’s a GitHub issue to be filed here. I would expect a warning or an error message of some kind, not a straight-up crash when running the game.

Related code examples

This is fine:

RenderingServer.texture_get_rd_texture(RenderingServer.viewport_get_texture(viewport), true)
# [...]
var u_vp: RDUniform = RDUniform.new()
u_vp.uniform_type = RenderingDevice.UNIFORM_TYPE_SAMPLER_WITH_TEXTURE
u_vp.binding = 1
u_vp.add_id(nearest_sampler)
u_vp.add_id(vp_tex)

This is not fine:

RenderingServer.texture_get_rd_texture(RenderingServer.viewport_get_texture(viewport), true)
# [...]
var u_vp: RDUniform = RDUniform.new()
u_vp.uniform_type = RenderingDevice.UNIFORM_TYPE_IMAGE
u_vp.binding = 1
u_vp.add_id(vp_tex)

Before changing my compositor effect to use an image (image2D) instead of a texture sampler (sampler2D), I wanted to see if I could fix the sampling artefact seen in my previous example (the weird aliasing). I tried setting the sampler’s RDSamplerState.unnormalized_uvw to true which, supposedly, should change the expected UV range from [0, 1] to [0, texture_size] – essentially mimicking the range expected when sampling an image2D via imageLoad(). My thinking was that if the sampler used this range, it couldn’t offset the sampling, since the UV would always be an integer-based vector (ivec2). From my testing though, it doesn’t seem to affect the sampler’s UV range at all. Nothing changed.
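A plausible explanation for the half-pixel offset – an assumption on my part, not something I verified against the engine: uv / image_size is the normalized coordinate of the texel’s lower edge, not its center, so a linear sampler blends neighbouring texels; sampling at (uv + 0.5) / image_size would target texel centers instead. The arithmetic in Python:

```python
def edge_uv(i: int, size: int) -> float:
    # What `uv / image_size` produces: the lower edge of texel i
    return i / size

def center_uv(i: int, size: int) -> float:
    # Normalized coordinate of the center of texel i
    return (i + 0.5) / size

size = 8
# The edge coordinate lags the texel center by exactly half a texel, everywhere:
offsets = [(center_uv(i, size) - edge_uv(i, size)) * size for i in range(size)]
assert all(abs(o - 0.5) < 1e-12 for o in offsets)
```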

As such, I decided to change the uniform to an image2D. The image is in sRGB color space though which, from my testing, is not the expected color space in the middle of the rendering pipeline. The image is, therefore, converted to linear color space to produce the expected results.

Original color space (left) and the image after srgb-to-linear conversion (right)

Godot_Compositor_ViewportEffect_OriginalColorSpace
Godot_Compositor_ViewportEffect_ConvertedColorSpace

Shader conversion code
// ==================== COLOR CONVERSION HELPER FUNCTIONS ====================
// Source: https://physicallybased.info/tools/
float lin_to_gamma(float value)
{
    if (value < 0.0031308)
        return value * 12.92;
    else
        return 1.055 * pow(value, 0.41666) - 0.055;
}

float gamma_to_lin(float value)
{
    if (value < 0.04045)
        return value * 0.0773993808;
    else 
        return pow(value * 0.9478672986 + 0.0521327014, 2.4);
}

vec3 l2g(vec3 value)
{
    return vec3(
        lin_to_gamma(value.r),
        lin_to_gamma(value.g),
        lin_to_gamma(value.b)
    );
}

vec3 g2l(vec3 value)
{
    return vec3(
        gamma_to_lin(value.r),
        gamma_to_lin(value.g),
        gamma_to_lin(value.b)
    );
}
// ===========================================================================
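The same transfer functions ported to Python as a sanity check; since 0.41666 ≈ 1/2.4 and 0.9478672986·v + 0.0521327014 = (v + 0.055)/1.055, the two functions are inverses of each other:

```python
def lin_to_gamma(v: float) -> float:
    # Linear -> sRGB (exact exponent 1/2.4 instead of the shader's 0.41666)
    if v < 0.0031308:
        return v * 12.92
    return 1.055 * v ** (1.0 / 2.4) - 0.055

def gamma_to_lin(v: float) -> float:
    # sRGB -> linear; (v + 0.055)/1.055 equals the shader's v*0.9478672986 + 0.0521327014
    if v < 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

# Round-tripping reproduces the input across the whole [0, 1] range
for v in (0.0, 0.001, 0.0031308, 0.25, 0.5, 1.0):
    assert abs(gamma_to_lin(lin_to_gamma(v)) - v) < 1e-6
```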

I appreciate the help @normalized. It would just be nice if you didn’t have to make me jump through all these hoops to get it. I just wanted to know how to work with viewport textures in the compositor, and perhaps an explanation as to how it all works – not a constant insistence to go back to basics.

This has been long – but I feel like I am finally confident enough with the way viewports and their textures work in the context of the compositor. I still can’t get it to work in the editor (it still crashes), but that’s okay. The complete code used can be seen below.

CompositorEffectViewport.gd
extends CompositorEffect
class_name ViewportEffect

var shader_file = preload("res://shaders/viewport_shader.glsl")
var rd: RenderingDevice
var shader: RID
var pipeline: RID

var vp_tex: RID
var linear_sampler: RID
var nearest_sampler: RID

var viewport: RID
var vp_cam: RID

@export_range(0.0, 1.0, 0.01) var cutoff_point: float = 0.5

var root: Viewport:
	get:
		if (Engine.is_editor_hint()):
			return EditorInterface.get_editor_viewport_3d()
		else:
			var tree: SceneTree = Engine.get_main_loop() as SceneTree
			
			print("Main Loop is tree: %s" % (Engine.get_main_loop() is SceneTree))
			print("Root is valid: %s" % (tree.root != null))
			print("Root viewport is valid: %s" % (tree.root.get_viewport() != null))
			
			return tree.root.get_viewport()

func _init():
	effect_callback_type = CompositorEffect.EFFECT_CALLBACK_TYPE_POST_SKY
	rd = RenderingServer.get_rendering_device()
	shader = rd.shader_create_from_spirv(shader_file.get_spirv())
	pipeline = rd.compute_pipeline_create(shader)
	
	# Create samplers
	var sampler_state = RDSamplerState.new()
	sampler_state.min_filter = RenderingDevice.SAMPLER_FILTER_LINEAR
	sampler_state.mag_filter = RenderingDevice.SAMPLER_FILTER_LINEAR
	linear_sampler = rd.sampler_create(sampler_state)
	sampler_state.min_filter = RenderingDevice.SAMPLER_FILTER_NEAREST
	sampler_state.mag_filter = RenderingDevice.SAMPLER_FILTER_NEAREST
	nearest_sampler = rd.sampler_create(sampler_state)
	
	# Defer viewport setup to make sure the scene tree has been created and initialized.
	call_deferred("setup_viewport")

func setup_viewport():
	
	vp_cam = RenderingServer.camera_create()
	viewport = RenderingServer.viewport_create()
	
	# Configure camera
	var cam_main = root.get_camera_3d()
	RenderingServer.camera_set_transform(vp_cam, cam_main.global_transform)
	RenderingServer.camera_set_perspective(vp_cam, cam_main.fov, cam_main.near, cam_main.far)
	RenderingServer.camera_set_cull_mask(vp_cam, 1 << 1)
	
	# Configure viewport
	RenderingServer.viewport_attach_camera(viewport, vp_cam)
	RenderingServer.viewport_set_update_mode(viewport, RenderingServer.VIEWPORT_UPDATE_ALWAYS)
	RenderingServer.viewport_set_clear_mode(viewport, RenderingServer.VIEWPORT_CLEAR_ALWAYS)
	# Failing to set this crashes the game at runtime.
	RenderingServer.viewport_set_scenario(viewport, root.world_3d.scenario)
	# Setting this crashes the editor (i.e. when this script is a @tool script).
	RenderingServer.viewport_set_active(viewport, true)
	#RenderingServer.viewport_set_use_hdr_2d(viewport, true)
	#RenderingServer.viewport_set_parent_viewport(viewport, root)
	
	update_viewport_size(root.size)
	
func update_viewport_size(size: Vector2i):
	print("Updating viewport size (new: %s)" % size)
	#mutex.lock()
	RenderingServer.viewport_set_size(viewport, size.x, size.y)
	#mutex.unlock()
	
	if (vp_tex.is_valid()):
		RenderingServer.free_rid(vp_tex)
	
	# NOTE: 'srgb' parameter can not be true when using the texture as an image uniform.
	# 		Make the linear conversion in the shader (if this is used as an image2D).
	vp_tex = RenderingServer.texture_get_rd_texture(RenderingServer.viewport_get_texture(viewport))

func _render_callback(callback_type, render_data):
	# get frame buffers
	var render_scene_buffers: RenderSceneBuffersRD = render_data.get_render_scene_buffers()

	var tf = rd.texture_get_format(vp_tex)
	#print("Viewport texture format: %s" % (tf.format))
	#print("Viewport texture size: %s" % Vector2i(tf.width, tf.height))
	var intern_size = render_scene_buffers.get_internal_size()
	if (tf.width != intern_size.x || tf.height != intern_size.y):
		update_viewport_size(intern_size)

	# main render image uniform
	var u_main: RDUniform = RDUniform.new()
	u_main.uniform_type = RenderingDevice.UNIFORM_TYPE_IMAGE
	u_main.binding = 0
	u_main.add_id(render_scene_buffers.get_color_layer(0))
	# vp render image uniform
	var u_vp: RDUniform = RDUniform.new()
	u_vp.uniform_type = RenderingDevice.UNIFORM_TYPE_IMAGE
	u_vp.binding = 1
	#u_vp.add_id(nearest_sampler)
	u_vp.add_id(vp_tex)
	# uniform set
	var uniform_set = UniformSetCacheRD.get_cache(shader, 0, [u_main, u_vp])

	# calculate number of workgroups
	var image_size = render_scene_buffers.get_internal_size()
	var wgroups_count_x = (image_size.x - 1) / 8 + 1
	var wgroups_count_y = (image_size.y - 1) / 8 + 1

	# size uniform (for split screen mixing and invocation "crop")
	var push_constant: PackedFloat32Array = PackedFloat32Array()
	push_constant.push_back(image_size.x)
	push_constant.push_back(image_size.y)
	push_constant.push_back(cutoff_point)
	push_constant.push_back(0.0)

	# execute shader
	var compute_list:= rd.compute_list_begin()
	rd.compute_list_bind_compute_pipeline(compute_list, pipeline)
	rd.compute_list_bind_uniform_set(compute_list, uniform_set, 0)
	rd.compute_list_set_push_constant(compute_list, push_constant.to_byte_array(), push_constant.size() * 4)
	rd.compute_list_dispatch(compute_list, wgroups_count_x, wgroups_count_y, 1)
	rd.compute_list_end()

viewport_shader.glsl
#[compute]
#version 460

layout(local_size_x = 8, local_size_y = 8, local_size_z = 1) in;
layout(rgba16f, set = 0, binding = 0) uniform image2D screen;
layout(rgba8, set = 0, binding = 1) uniform image2D viewport;

layout(push_constant) uniform Params {
	vec2 image_size;
	vec2 offset;    // ...and padding (only x-value is used)
} params;

// ==================== COLOR CONVERSION HELPER FUNCTIONS ====================
// Source: https://physicallybased.info/tools/
float lin_to_gamma(float value)
{
    if (value < 0.0031308)
        return value * 12.92;
    else
        return 1.055 * pow(value, 0.41666) - 0.055;
}

float gamma_to_lin(float value)
{
    if (value < 0.04045)
        return value * 0.0773993808;
    else 
        return pow(value * 0.9478672986 + 0.0521327014, 2.4);
}

vec3 l2g(vec3 value)
{
    return vec3(
        lin_to_gamma(value.r),
        lin_to_gamma(value.g),
        lin_to_gamma(value.b)
    );
}

vec3 g2l(vec3 value)
{
    return vec3(
        gamma_to_lin(value.r),
        gamma_to_lin(value.g),
        gamma_to_lin(value.b)
    );
}
// ===========================================================================

void main()
{
    ivec2 uv = ivec2(gl_GlobalInvocationID.xy);
    vec2 uv_norm = uv / params.image_size;
    
    if(uv.x >= params.image_size.x || uv.y >= params.image_size.y){
		return;
	}
    
    vec4 color = imageLoad(screen, uv);
    
    // IN-BETWEEN CODE
    vec4 vp = imageLoad(viewport, uv);
    vp.rgb = g2l(vp.rgb);
    // vec4 vp = texture(viewport, uv_norm);
    
    // IMAGE/VIEWPORT Blending
    float t = step(params.offset.x * params.image_size.x, uv.x);
    vec4 newColor = vp * t + color * (1.0 - t);
    
    // Threshold line
    const float lineWidth = 10.0;
    float lineMask = clamp(abs((params.offset.x * params.image_size.x) - uv.x) / lineWidth, 0.0, 1.0);
    
    imageStore(screen, uv, newColor * lineMask);
}
EffectControls.gd
extends Camera3D

func _process(delta: float) -> void:
	
	if (Input.is_mouse_button_pressed(MOUSE_BUTTON_LEFT)):
		var norm_mouse_pos = get_viewport().get_mouse_position() / Vector2(get_viewport().size)
		if (compositor.compositor_effects[0] != null):
			compositor.compositor_effects[0].cutoff_point = norm_mouse_pos.x

It’s in sRGB because you commented out viewport_set_use_hdr_2d(viewport, true), so the viewport texture doesn’t end up in linear space. If this is enabled, everything is kept linear and you won’t need any manual conversions in the shader. That second argument to texture_get_rd_texture() won’t cause any trouble then either; in fact it appears to be ignored and the texture is always linear.

Using a sampler uniform only makes sense if composited textures don’t match in pixel size, which is not the case here.

P.S. Nice to see you marked your post as a solution after I provided the whole working example and debugged your faulty adaptation of it into an actually working version that effectively solves your problem, while at the same time complimenting me on not knowing what I’m doing. Well played.

