Overlaying output from one Camera3D on top of another

Godot Version

4.5.1

Question

Hi all! I'm looking into merging the output of two cameras before going on to apply some image-effect shaders to the result later in my pipeline.

I'm trying to overlay the output of one camera, call it “Camera B”, onto another camera, call it “Camera A”. Both cameras are Camera3D nodes.

Here's what I want each camera to render, and what I want to achieve:

Camera A - will render physical objects, e.g. the Player character, enemies, and any fog effects I've added.

Camera B - will render objects I want to see through walls (or fog!). Think x-ray vision that lets you see inside a treasure chest.

Use case: I want to render any objects (just basic sphere meshes) that might spawn inside a hollow cube of fog.

Current scene tree situation:

Camera A is a child of the player character (and therefore part of the player character's scene), the reason being that I want the camera to orbit around the player. Handling the camera input as part of the player character scene suited me at the time of implementation, but I am open to restructuring this.

Camera B is part of the main scene, parented to a SubViewport, and I have attached a basic script to the camera node itself that copies the position and rotation of Camera A every frame.
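
A minimal sketch of that kind of sync script, attached to Camera B (the source_camera export name is just illustrative):

extends Camera3D

# Reference to Camera A, assigned in the Inspector. The name is illustrative.
@export var source_camera: Camera3D

func _process(_delta: float) -> void:
	# Mirror Camera A's position and rotation every frame so both
	# cameras render the same view.
	global_transform = source_camera.global_transform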

Everything I want Camera A to see, I've assigned to VisualInstance3D layer 1 (marked “physical” layer). Camera A's cull mask is set to layer 1 as well.

Everything I want Camera B to see, I've assigned to VisualInstance3D layer 2 (marked “xRay” layer). Camera B's cull mask is set to layer 2. The SubViewport that Camera B is parented to has “Transparent Background” set to true, so ideally it will render ONLY the nodes on visual layer 2.
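
Expressed in code, that configuration boils down to this (node paths are placeholders):

extends Node

@onready var camera_a: Camera3D = $Player/CameraA       # placeholder path
@onready var camera_b: Camera3D = $SubViewport/CameraB  # placeholder path
@onready var sub_viewport: SubViewport = $SubViewport   # placeholder path

func _ready() -> void:
	# Cull masks are bit masks: layer 1 is bit 0, layer 2 is bit 1.
	camera_a.cull_mask = 1       # see only the "physical" layer
	camera_b.cull_mask = 1 << 1  # see only the "xRay" layer
	# Matching VisualInstance3D nodes set the same bits on their "layers" property.
	sub_viewport.transparent_bg = true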

What I have tried so far:

Writing a custom shader with two sampler2D uniforms, one for Camera A (marked with hint_screen_texture), and each frame, feeding the texture from Camera B to the second sampler2D. This shader is assigned to a material on a TextureRect. Unfortunately, this still results in only the output of Camera A being rendered to the TextureRect.
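
Simplified, the shader was structured along these lines (the uniform names and the exact blend here are placeholders):

shader_type canvas_item;

uniform sampler2D screen_tex : hint_screen_texture;  // Camera A (main viewport)
uniform sampler2D camera_b_tex;                      // fed from Camera B's SubViewport each frame

void fragment() {
	vec4 a = texture(screen_tex, SCREEN_UV);
	vec4 b = texture(camera_b_tex, SCREEN_UV);
	COLOR = mix(a, b, b.a);
}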

Does anyone know the correct way to go about this? Happy to post any code, etc., for context.

Thanks in advance!

Why not just display Camera B on the TextureRect and let Camera A render into the main viewport?

Hmm… given the result you’re trying to achieve, there are a couple of ways you can produce a “merged output”. One way is to utilize materials that render certain objects on top of everything by altering the depth test (i.e. forcing the depth test to always pass). The other way is the one you’re currently pursuing: produce a separate render pass (with viewports) which is overlaid onto the screen texture.
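
For completeness, the first approach is just a spatial material on the x-ray objects with the depth test disabled, along these lines:

shader_type spatial;
// depth_test_disabled makes the mesh draw on top of everything,
// regardless of what sits in front of it.
render_mode unshaded, depth_test_disabled;

void fragment() {
	ALBEDO = vec3(1.0, 0.8, 0.2); // placeholder highlight color
}

The same effect is available without a custom shader by enabling the No Depth Test flag on a StandardMaterial3D.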

As far as I know, there are two main ways to implement full-screen effects such as this: full-screen quads and the Compositor. The Compositor is a relatively new feature that allows advanced manipulation of the rendering pipeline, while a full-screen quad is an old technique still used today to process the final image output. I’m personally using the full-screen quad approach because of its simplicity, and for your current problem I would argue it’s the better fit as well.

Full-Screen Quad Example

Given that you already have the necessary nodes set up (an extra camera inside a SubViewport), we can focus on how the viewport texture rendered by Camera B is utilized inside the material for the full-screen quad – the material that combines the two passes.

The example shown below is heavily based on the setup described in the documentation, so I suggest reading through that page before continuing.

The material for the full-screen quad consists of two parts: the vertex shader and the fragment shader. The vertex shader contains the transformation code needed to permanently project the quad mesh onto the camera’s frustum so that it covers the whole screen.

void vertex()
{
	// Write clip-space coordinates directly so the quad always
	// covers the full screen, regardless of the camera transform.
	POSITION = vec4(VERTEX.xy, 1.0, 1.0);
}

Next, the two passes are combined in the fragment shader. This is done by blending between the two passes based on the overlay pass’s alpha value. To gain access to the passes in question, a uniform must be created for each texture.

shader_type spatial;
// Prevent the quad from being affected by lighting and fog. This also improves performance.
render_mode unshaded, fog_disabled;

uniform sampler2D passB; // assigned from Camera B's SubViewport at runtime (see below)
uniform sampler2D screen_texture : hint_screen_texture;

Then, as mentioned, the textures are blended using linear interpolation (the built-in mix() function).

void fragment()
{
	vec4 screen = texture(screen_texture, SCREEN_UV);
	vec4 overlay = texture(passB, SCREEN_UV);
	vec3 blended_image = mix(screen, overlay, overlay.a).rgb;

	// ALBEDO is a vec3 in spatial shaders; there is no alpha component to set.
	ALBEDO = blended_image;
}
As you can see, the shader is pretty simple and easy to understand. The only things left to do are:

  • Create a quad mesh in your scene that uses our new shader material (a sketch of this follows the list).
  • Assign the appropriate viewport texture from Camera B’s viewport to the quad’s material.
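
For the first step, the quad can also be built from code. A minimal sketch, assuming the shader above has been saved into a material resource (the path is a placeholder):

extends Node3D

func _ready() -> void:
	var quad := MeshInstance3D.new()
	var mesh := QuadMesh.new()
	mesh.size = Vector2(2, 2) # matches clip space, which runs from -1 to 1
	mesh.flip_faces = true # keeps the clip-space quad from being back-face culled
	mesh.material = load("res://fullscreen_blend_material.tres") # placeholder path
	quad.mesh = mesh
	# A large cull margin keeps the quad from being frustum-culled.
	quad.extra_cull_margin = 16384.0
	add_child(quad)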

To assign the viewport texture, you need to make a script and do it at runtime. You’ll have to forgive me on this part because I can’t remember exactly why, but I believe it’s because a ViewportTexture resolves its viewport path relative to its own scene, which fails when it’s assigned across scenes in the editor. Here’s how to do it:

extends SubViewport

@export var quad: MeshInstance3D

func _ready() -> void:
	# Hand this viewport's texture to the blend shader's "passB" uniform.
	quad.mesh.surface_get_material(0).set_shader_parameter("passB", get_texture())

Attach this script to Camera B’s SubViewport (or add the code to your existing viewport script) and you should be all set.


I hope it works! If you have any questions, let me know.