Overlaying output from one Camera3D on top of another

Godot Version

4.5.1

Question

Hi all! I'm looking into merging the output of two cameras before going on to do some image effect shaders further down my pipeline with the result.

I'm trying to overlay the output of one camera, call it “Camera B”, onto another camera, call it “Camera A”. Both cameras are Camera3D nodes.

Here's what I want each camera to render, and what I want to achieve:

Camera A - will render physical objects, e.g. the player character, enemies, and any fog effects I've added.

Camera B - will render objects I want to be able to see through walls (or fog!). Think x-ray vision to see inside a treasure chest.

Use case: I want to be able to render any objects (just basic sphere meshes) that might spawn inside a hollow cube of fog.

Current scene tree situation:

Camera A is a child of the player character (and therefore part of the player character's scene), because I want the camera to orbit around the player. Handling the camera input as part of the player character scene suited me at the time of implementation, but I am open to restructuring this.

Camera B is part of the main scene, parented to a SubViewport, and I have attached a basic script to the camera node itself that copies the position and rotation of Camera A every frame.

Everything I want Camera A to see, I've assigned to VisualInstance3D layer 1 (marked as my physical layer). Camera A's cull mask is set to layer 1 as well.

Everything I want Camera B to see, I've assigned to VisualInstance3D layer 2 (marked as my x-ray layer). Camera B's cull mask is set to layer 2. The SubViewport that Camera B is parented to has “Transparent Background” set to true, so ideally it will render ONLY the nodes on visual layer 2.
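
For reference, the scene tree I've described looks roughly like this (node names are just placeholders):

Main
├── Player
│   └── CameraA (Camera3D, cull mask = layer 1)
└── SubViewport (Transparent Background = true)
    └── CameraB (Camera3D, cull mask = layer 2, copies Camera A's transform)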

What I have tried so far:

Writing a custom shader with two sampler2D uniforms: one for Camera A (marked with hint_screen_texture), and each frame, feeding the texture from Camera B into the second sampler2D. This shader is assigned to a material on a TextureRect. Unfortunately, this still results in only the output of Camera A being rendered to the TextureRect.

Does anyone know the correct way to go about this? Happy to post any code etc for context.

Thanks in advance!

Why not just display Camera B on the TextureRect and let Camera A render into the main viewport?
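
Roughly like this, assuming your TextureRect and SubViewport are siblings (node paths are illustrative):

$TextureRect.texture = $SubViewport.get_texture()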


Hmm… given the result you’re trying to achieve, there are a couple of ways you can produce a “merged output”. One way is to use materials to render certain objects on top of everything by altering the depth test (i.e. forcing the depth test to always pass). The other way is the one you’re currently pursuing: produce a separate render pass (with viewports) which is overlaid onto the screen texture.
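
For reference, the depth-test approach boils down to a material along these lines (a minimal sketch, with a placeholder colour standing in for your actual shading):

shader_type spatial;
// depth_test_disabled draws the mesh on top of already-rendered geometry,
// effectively letting it show through walls.
render_mode unshaded, depth_test_disabled;

void fragment()
{
	ALBEDO = vec3(1.0, 0.8, 0.2); // placeholder colour for the x-ray object
}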

As far as I know, there are two main ways to implement full-screen effects such as this: full-screen quads, and the Compositor. The Compositor is a relatively new feature that allows advanced manipulation of the rendering pipeline, while a full-screen quad is an older technique still used today to process the final image output. I’m personally using the full-screen quad approach because of its simplicity, and for your current problem I would argue a full-screen quad is the right choice too.

Full-Screen Quad Example

Given that you already have the necessary nodes set up (an extra camera inside a SubViewport), we can focus on how the viewport texture rendered by Camera B is utilized inside the material for the full-screen quad – the material that combines the two passes.

The example shown below is heavily based on the setup described in the documentation, so I suggest reading through that page before continuing.

The material for the full-screen quad consists of two parts: the vertex shader and the fragment shader. The vertex shader contains the transformation code needed to project the quad mesh directly into clip space so that it permanently covers the whole screen.

void vertex()
{
	// Write clip-space coordinates directly so the quad always covers the screen.
	POSITION = vec4(VERTEX.xy, 1.0, 1.0);
}

Next, the two passes are combined in the fragment shader. This is done by blending between the two passes based on the overlay pass’s alpha value. To gain access to the passes in question, a uniform must be created for each texture.

shader_type spatial;
// Prevent the quad from being affected by lighting and fog. This also improves performance.
render_mode unshaded, fog_disabled;

uniform sampler2D passB;
uniform sampler2D screen_texture : hint_screen_texture;

Then, as mentioned, the textures are blended using simple linear interpolation (the built-in mix() function).

void fragment()
{
	vec4 screen = texture(screen_texture, SCREEN_UV);
	vec4 overlay = texture(passB, SCREEN_UV);
	vec3 blended_image = mix(screen, overlay, overlay.a).rgb;

	ALBEDO = blended_image;
}

As you can see, the shader is pretty simple and easy to understand. The only things left to do are:

  • Create a quad mesh in your scene that uses our new shader material.
  • Assign the appropriate viewport texture from camera B’s viewport to the quad’s material.

To assign the viewport texture, you need to make a script and do it at runtime. You’ll have to forgive me on this part because I can’t remember why this is the case. Here’s how to do it:

extends SubViewport

@export var quad : MeshInstance3D

func _ready():
	# Feed this viewport's texture into the quad's shader material.
	quad.mesh.surface_get_material(0).set_shader_parameter("passB", get_texture())

Attach this script to Camera B’s SubViewport (or add the code to your current viewport script) and you should be all set.


I hope it works! If you have any questions, let me know.


Hey! Thank you for such a detailed response!

It’s not quite working yet, but here are some steps I’ve taken based on your response:

I had to set “Flip Faces” to true on the quad mesh itself (not the MeshInstance3D, but the mesh resource itself).
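
If you'd rather do that from code, flip_faces is a property on the mesh resource (it lives on PrimitiveMesh, so this assumes a QuadMesh; the node path is illustrative):

$FullScreenQuad.mesh.flip_faces = true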

For anyone reading in future: ALBEDO is a vec3 and therefore only represents RGB. To set the alpha, use the separate ALPHA property.
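
So the end of the fragment function would look like this:

	ALBEDO = blended_image;
	ALPHA = 1.0; // separate float output; note that writing to ALPHA moves the material into the transparent pipeline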

I’ve made a script to set up Camera B so it has the same settings and transform as Camera A. For the benefit of others, here is the GDScript code:

class_name FollowMainCamera extends Camera3D

var mainCamera: Camera3D

func _init(mainCam: Camera3D) -> void:
	mainCamera = mainCam
	# Copy the main camera's projection settings once at construction.
	size = mainCamera.size
	near = mainCamera.near
	far = mainCamera.far
	fov = mainCamera.fov
	scale = mainCamera.scale

func _process(_delta: float) -> void:
	# Mirror the main camera's transform every frame.
	global_position = mainCamera.global_position
	global_rotation = mainCamera.global_rotation
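
Because _init takes an argument, this script can’t just be attached to a node in the editor; I construct the camera from code instead, roughly like this (variable names are illustrative):

var xray_camera := FollowMainCamera.new(main_camera)
sub_viewport.add_child(xray_camera)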

What doesn’t seem to be working:

The vertex shader seems to not cover the entire screen, and is stretching the image in an undesirable way (spheres look like ovals). See the attached image: ideally the green square (which has the output from Camera B) should cover the whole screen and not be stretched; the small circle here should be, well, circular.

I’ve tried to use a Control node (TextureRect) instead of a quad, but for whatever reason this doesn’t work either; nothing gets drawn to the TextureRect’s texture, so I just get the output of Camera A in play mode. Here’s what I’ve tried for that:

Setting the sampler2D:

extends SubViewport
@export var textureRect : TextureRect
func _ready() -> void:
	textureRect.material.set("passB", self.get_texture())

and adjusting the shader:

shader_type canvas_item; // not spatial anymore

render_mode unshaded;

uniform sampler2D passB;
uniform sampler2D screen_texture : hint_screen_texture;

// no vertex() function needed anymore

void fragment()
{
	vec4 screen = texture(screen_texture, SCREEN_UV);
	vec4 overlay = texture(passB, SCREEN_UV);
	vec3 blended_image = mix(screen, overlay, overlay.a).rgb;

	COLOR.rgb = blended_image;
	COLOR.a = 1.0;
}

Any further ideas on what’s going on?

Thanks again for such a great response!

textureRect.material.set_shader_parameter("passB", self.get_texture())

You can also set it manually in the inspector.


Good shout!

Again, for the benefit of those reading in future: the TextureRect needs to already have a texture assigned to it. This can be any texture type; just add one and let the shader material do the rest. It seems obvious in hindsight that the shader needs an existing texture to draw over rather than making its own, but it’s an easy oversight for those starting out with shader stuff (like me).
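
For example, assigning any old texture from code works (doing it in the inspector is just as good):

textureRect.texture = PlaceholderTexture2D.new()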

The TextureRect seems to work a little better (it’s fullscreen now), but for whatever reason the output from passB is still stretched so that circles look like ovals… Might be an issue with Camera B. I’ll keep digging and update if I find anything relevant to the thread!

Thanks again both of you!

Ah, found the issue: the SubViewport for Camera B needs to have its size property set to the same size as the window (i.e. the main viewport).

This is the final GDScript code for setting up the TextureRect:

extends SubViewport

@export var textureRect: TextureRect

func _ready() -> void:
	# Match the window (main viewport) size so the overlay isn't stretched.
	size = get_window().size
	textureRect.material.set_shader_parameter("passB", get_texture())

	# Visibility is set to false in the inspector so the rect isn't in the way
	# while editing; turn it back on here.
	textureRect.visible = true
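
One possible extension, untested on my end: if your window can be resized at runtime, keep the SubViewport in sync by also connecting to the window's size_changed signal in _ready():

get_window().size_changed.connect(func(): size = get_window().size)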

I’ll leave this thread open in case anyone wants to contribute more, but I’m happy that my problem has been solved by the above discussion.

Thanks again all!!
