Compositing multiple cameras to layer objects at vastly different distances

Godot Version

4.4

Question

Hi all,

I’ve been trying to solve a problem neatly for quite some time now and, despite attempting various approaches, I’m drawing a blank.

I’m building a game where planetary travel is possible, and I want to render stars, planets, moons, etc. in the correct places in the sky while standing on the ground. I also need to render local terrain close to the player and distant terrain that could be many kilometers away. I have earth-size (and bigger) planets which do work and can be traversed, but rendering terrain that can be tens or hundreds of kilometers away with varying LODs, while also rendering objects only centimeters away, is a challenge. Using floats, I need to keep the numbers within ranges where precision doesn’t fall apart.
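
For context on the precision side: the spacing between adjacent 32-bit float values near a coordinate x is roughly x / 2^24, so at 100 km you’re already down to roughly half-centimeter steps. A quick illustration (GDScript’s own floats are 64-bit, but Vector3 components in a standard build are 32-bit, which is what matters here):

```gdscript
# Illustration only: spacing between adjacent 32-bit floats near x is ~ x / 2^24.
extends Node

func _ready() -> void:
	var distances := {
		"1 m": 1.0,
		"1 km": 1000.0,
		"100 km": 100000.0,
		"1 AU": 1.495978707e11,
	}
	for label in distances:
		var d: float = distances[label]
		print("%s -> ~%.6f m precision" % [label, d / pow(2.0, 24)])
```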

I originally built my own rendering engine to handle this but then switched to Godot at the beginning of 2024. My engine did something pretty basic: it had a number of cameras using different scales (e.g. a local camera rendered everything at 1 m = 1.0 unit, a distant camera rendered terrain at 1000 m = 1.0 unit, and then I had a 1 AU = 1.0 camera and a 1 LY = 1.0 camera for the planets and stars).

So I would render each camera in turn, starting with the 1 LY camera, then clear the depth buffer, render the 1 AU camera to bring nearby planets into the scene, clear the depth buffer again, render the 1 km camera, and so on. In reality there weren’t four cameras; I just had one camera and moved it between each stage of the rendering process.
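
To be concrete, the core of it was just dividing true positions by each layer’s scale before handing them to that layer’s camera and meshes. A rough sketch of the idea (the names and layer set are only illustrative, not any Godot API):

```gdscript
# Rough sketch of the per-layer scaling idea; names are illustrative.
# Each layer renders the same world divided by its own scale, so the values
# the renderer sees stay small no matter how far away the object really is.
const METERS_PER_UNIT := {
	"local": 1.0,                 # 1 unit = 1 m
	"terrain": 1000.0,            # 1 unit = 1 km
	"system": 1.495978707e11,     # 1 unit = 1 AU
	"stellar": 9.4607e15,         # 1 unit = 1 LY
}

# Positions are passed as separate 64-bit GDScript floats and only become a
# (32-bit) Vector3 after the division, once the value is already small.
static func to_layer_space(x_m: float, y_m: float, z_m: float, layer: String) -> Vector3:
	var s: float = METERS_PER_UNIT[layer]
	return Vector3(x_m / s, y_m / s, z_m / s)
```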

When I switched to Godot, it seemed like the way to do something similar was to use viewports. I got it working that way: I render four full-screen viewports stacked one on top of the other, with a transparent background enabled for each one.
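
For reference, the setup is essentially the following (a stripped-down sketch, built in code for brevity rather than in the editor):

```gdscript
# Stripped-down sketch of the stacked-viewport setup.
# Four full-screen SubViewports, back to front, each with its own World3D and
# camera, and transparent_bg enabled so the layers underneath show through.
extends Control

func _ready() -> void:
	for layer_name in ["stellar", "system", "terrain", "local"]:  # back to front
		var container := SubViewportContainer.new()
		container.name = layer_name
		container.stretch = true
		container.set_anchors_preset(Control.PRESET_FULL_RECT)

		var viewport := SubViewport.new()
		viewport.own_world_3d = true       # each layer keeps its own 3D world
		viewport.transparent_bg = true     # so the layer below is visible

		viewport.add_child(Camera3D.new())
		container.add_child(viewport)
		add_child(container)               # later siblings draw on top
```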

It does work, but the problem I’m finding is that it rules out things like FSR, which disables the transparent background, and it also adds quite a bit of memory overhead to my project. I use about 5 GB of RAM to load everything at start; when I add the viewports into the equation, this jumps to around 8 GB. My feeling is that I’m doing something ‘hacky’ with the viewports that maybe they weren’t intended for, and having a separate texture for each viewport seems wasteful when I used to be able to render everything to the same surface in one hit.

I’ve been looking for other solutions, but this is where I’m getting stuck. What I really want to do is clear the depth buffer part way through rendering the scene, switch cameras, and keep rendering, but that doesn’t seem possible. I keep recoding this part of my system and not getting anywhere.

I thought maybe I could at least get FSR working if I took the output of each viewport and applied it as a texture to a full-screen quad sitting underneath the objects in the next viewport in the chain (i.e. feed the output of one viewport into the background of the next). That way the topmost viewport wouldn’t need a transparent background and FSR could be enabled. This kind of worked. Of course it didn’t fix the memory concerns, but beyond that I got weird graphical issues when moving the camera to extreme positions.
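
The background-quad idea is along these lines (a simplified sketch, not my exact code; the sizing math assumes a perspective camera with the default vertical FOV):

```gdscript
# Simplified sketch of feeding one viewport's output into the next layer's
# background via a camera-parented quad near the far plane.
extends Camera3D

@export var previous_viewport: SubViewport  # the layer rendered beneath this one

func _ready() -> void:
	var quad := MeshInstance3D.new()
	var mesh := QuadMesh.new()
	var mat := StandardMaterial3D.new()
	mat.shading_mode = BaseMaterial3D.SHADING_MODE_UNSHADED
	mat.albedo_texture = previous_viewport.get_texture()
	mesh.material = mat

	quad.mesh = mesh
	quad.cast_shadow = GeometryInstance3D.SHADOW_CASTING_SETTING_OFF
	add_child(quad)

	# Park the quad just inside the far plane and size it to fill the frustum,
	# so everything in this layer renders in front of it.
	var dist := far * 0.95
	var height := 2.0 * dist * tan(deg_to_rad(fov) * 0.5)
	var aspect := get_viewport().get_visible_rect().size.aspect()
	mesh.size = Vector2(height * aspect, height)
	quad.position = Vector3(0.0, 0.0, -dist)
```

The 0.95 factor is arbitrary; it just keeps the quad inside the far plane.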

Is there something obvious I’m missing? With the rendering compositor, can I force the viewports to all render to the same texture or something like that?

Maybe I should settle for using the viewports the way I’ve currently got them working, but every time I set up my 1 m camera to use FSR, the result looks so much better that I end up spending more time trying to find a way to ditch the viewports :sweat_smile:

Thanks, any ideas or thoughts greatly appreciated,

Chris.

Having written the above, I’ve now found the following:

Which makes me think what I’m after doesn’t really exist at the moment!

Cheers,

Chris.