Hello everyone! I just got done with a game jam, and something I did has been weighing on me because I am unsure if it’s good or bad practice.
BACKGROUND: I had a Node3D node with a Camera3D as its child (this was the player’s camera). I wanted to add a dither shader to the player’s viewport, so I made a SubViewport (inside a SubViewportContainer) a child of the Node3D, and I moved the Camera3D under the SubViewport. This worked perfectly, but afterwards I felt like there had to be a better way, something that didn’t feel so odd. My idea to “fix” this was to…
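In tree form (Godot 4 node names assumed, since this is just how I’d sketch it), the setup looked roughly like:

```
Node3D
└── SubViewportContainer    # dither ShaderMaterial goes here
    └── SubViewport
        └── Camera3D        # the player's camera
```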
QUESTION: Is making a SubViewportContainer the root node of the main scene (with the SubViewport as its only child) bad practice? That way every Camera3D in the scene will have the dither shader applied. The only problem I can see is that it might add some overhead. Is this true?
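For what it’s worth, wiring the shader up at the root container might look something like this in GDScript. This is only a sketch; the shader path is a placeholder and not from any real project:

```gdscript
extends SubViewportContainer
# Sketch: the scene root is the SubViewportContainer, so the dither
# shader applies to everything its SubViewport renders, no matter
# which camera is current. "res://shaders/dither.gdshader" is a
# hypothetical path.

func _ready() -> void:
    var mat := ShaderMaterial.new()
    mat.shader = load("res://shaders/dither.gdshader")
    material = mat
```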
I think it’s usually fine for a SubViewport to be the root node; I like it when UI elements are closer to the root and scene elements are further down the tree.
Here’s an example of a tree from my recent project:
I do think it gets a little messy if you want more than one viewport in parallel, though. Then you have multiple cameras, and the SubViewports and SubViewportContainers have to sit close to their cameras so they capture the right camera and not some other one (at least I don’t think there’s a better way to do it?).
Again, an example from my recent project:
Viewport also has a parameter which I haven’t gotten to use yet, but I would guess that being able to define “these two views are of the same world” and “these two views aren’t” is a useful tool that might let you do this a different way.
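If the parameter meant here is the viewport’s World3D (just my guess at which one it is), a sketch of “same world vs. separate world” in GDScript might be (node paths are hypothetical):

```gdscript
# Sketch, assuming Godot 4 and made-up node paths:
# two side-by-side views.
@onready var view_a: SubViewport = $ViewA/SubViewport
@onready var view_b: SubViewport = $ViewB/SubViewport

func _ready() -> void:
    # "These two views are of the same world":
    view_b.world_3d = view_a.world_3d
    # Or, "these two views aren't": give one its own isolated world.
    # view_b.own_world_3d = true
```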
I love this! Thank you for the super well-made reply!
That being said, I now have another question. When I do make the root node of the scene a SubViewportContainer, will any child SubViewportContainers render on top of the parent by default? Or is that something I would have to change? Asking for future reference, since you are very knowledgeable!
P.S.: I’m going to wait to mark you as the answer so that other people can still come into the question and give their opinion! Don’t want to deter them quite yet!
I’m guessing that’s how it works, because making a Viewport the root node of a scene just makes it a child of the ACTUAL root Viewport of the engine (the one we aren’t allowed to touch), layering ours on top of it.
I decided to put that scene through Nvidia Nsight to see the order in which the engine renders things:
P.S.: I think the “white background with small objects” frames in the capture are a bug in Nsight trying to recreate what happened, just like objects being rendered on screen before they’re scanned with that orange/white sweep. (I’m also using hardware from 2014, and Nsight doesn’t like that.)
It seems the engine rendered that SubViewport as part of the UI when rendering the main viewport, so it followed the 3D → UI pattern.
So yes, when talking about “how one viewport renders another,” we need to think of the SubViewport (through its SubViewportContainer, which is a Control node) as an image in the UI. And just like with Control nodes, we can manipulate what renders on top of what. The final Viewport “takes a snapshot of it all and sends it to your monitor,” is how I would describe it.
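And since the container is a Control (a CanvasItem), the usual canvas ordering rules should apply. A sketch, with hypothetical node names:

```gdscript
# Sketch: two sibling SubViewportContainers. By default the later
# sibling in the tree draws on top; z_index can override that order.
@onready var background_view: SubViewportContainer = $BackgroundView
@onready var overlay_view: SubViewportContainer = $OverlayView

func _ready() -> void:
    overlay_view.z_index = 1  # force this one above its sibling
```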
WOW! Thank you for the footage of it all rendering in; I’ve never heard of Nvidia Nsight! I’m pretty new to game dev (mainly a computer science head), so having a debugger like that is amazing!
But anyway, the footage sold me! I see now!