Renderers and camera objects generally have nothing to do with each other. A camera is just a data structure consisting of several numerical parameters. Changing those parameters one way or another has nothing to do with rendering performance. Doing it with tweens or with some mathematical functions is perfectly fine.
You probably want to set a transition and easing value for your tween. Chances are your computer is already running this simulation at the maximum framerate, so interpolating frames isn’t going to help, nor would it smooth the camera motion; it would only increase the framerate.
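In Godot 4 that looks roughly like this (a minimal sketch; the `$Camera3D` node path, the duration, and `target_position` are placeholders for your own scene and values):

```gdscript
# Smoothly move the camera to a target position over one second.
# TRANS_SINE + EASE_IN_OUT gives a gentle start and stop.
func move_camera_to(target_position: Vector3) -> void:
	var tween := create_tween()
	tween.tween_property($Camera3D, "position", target_position, 1.0) \
		.set_trans(Tween.TRANS_SINE) \
		.set_ease(Tween.EASE_IN_OUT)
```

Any `Tween.TransitionType`/`Tween.EaseType` combination works here; pick whichever curve feels right for your camera.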
So generally, what work is done by the CPU and should be moved off the blocking main thread into other threads, what should stay on the main thread, and which tasks are done by the GPU and should be written as shaders?
Well, it depends on what type of performance bottleneck you’re experiencing. If you’re not experiencing any, there’s no need to think about threads or compute shaders. The CPU runs scripts and the GPU rasterizes polygons, and that’s pretty much it.
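If you do measure a CPU-side bottleneck, Godot 4 lets you hand a self-contained computation to its built-in thread pool. A sketch, where `generate_chunk` is a hypothetical expensive function standing in for your own work:

```gdscript
# Offload heavy, self-contained CPU work to a worker thread,
# then wait for it before using the result on the main thread.
func _ready() -> void:
	var task_id := WorkerThreadPool.add_task(generate_chunk)
	# ... other main-thread work can happen here ...
	WorkerThreadPool.wait_for_task_completion(task_id)

func generate_chunk() -> void:
	# Hypothetical expensive CPU work (noise generation, pathfinding, etc.).
	# Must not touch the scene tree from this thread.
	pass
```

The usual caveat applies: scene-tree access isn’t thread-safe, so the task should only crunch data and hand results back to the main thread.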
When running my Godot project in full-screen, my FPS drops significantly, to between 30 and 43 FPS. However, in windowed mode the performance is much better and smoother. I tried this with a different project without extra stuff like interpolation and exponentials (math), and it seems to do much better: https://youtu.be/7Qq4XRaOuOc
Machine: Mac Studio M2 Max (32GB RAM)
Monitor: Samsung 4K @ 60Hz (connected via HDMI)
Build: Custom Godot arm64 build with Metal 4 support (though the issue also persists on stable).
The Bottleneck: Render Resolution
After debugging, the core issue appears to be the render target resolution.
In Windowed Mode, the game renders at a reasonable 2668x1501.
In Full-Screen Mode, Godot defaults to rendering at a massive 5K resolution (5120x2880), even though my monitor is only 4K and the viewport is set to 1920x1080.
This 5K render target seems to be the entire bottleneck. My Mac is trying to render almost twice as many pixels as the monitor can display, causing the huge performance drop.
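One way to confirm what’s actually being rendered is to log the window and viewport sizes at runtime (a sketch; the exact numbers will depend on your display scaling):

```gdscript
# Print the OS window size versus the viewport size and 3D scale factor.
func _ready() -> void:
	print("Window size: ", DisplayServer.window_get_size())
	print("Viewport size: ", get_viewport().size)
	print("3D scale: ", get_viewport().scaling_3d_scale)
```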
Scene Complexity: The FPS does improve if I lower viewport.scaling_3d_scale to 0.8, which confirms the bottleneck is related to pixel fill-rate (resolution) and not the 3D scene itself.
The actual bottlenecks will become apparent once you run the visual profiler. You’ll be able to see what parts of the rendering pipeline are taking up the most time and optimize accordingly.
You can always render at a fixed resolution and scale up to the actual screen resolution. It won’t be as crisp, but it’ll run at good framerates.
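In Godot 4 the simplest knob for this is the viewport’s 3D scaling (a sketch; 0.5 is an arbitrary example value, and FSR is one of several available scaling modes):

```gdscript
# Render the 3D scene at half resolution and upscale to the window.
# SCALING_3D_MODE_FSR uses AMD FSR upscaling; BILINEAR is the cheaper default.
func _ready() -> void:
	get_viewport().scaling_3d_mode = Viewport.SCALING_3D_MODE_FSR
	get_viewport().scaling_3d_scale = 0.5
```

The same settings can be set project-wide under Rendering → Scaling 3D in the Project Settings.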
I’ll be honest, I’m still looking into it. I’m seeing this strange behaviour: in windowed mode, the bigger the window gets, the lower the FPS, and full screen is an absolute nightmare.
But in my simple test project I never ran into FPS this low.
Can you test how it runs on your hardware? Maybe you can spot where the issue is.
The project is from a course I took. I learned a few things from it, but sometimes it seems a bit overdone.
Are you rendering with Metal? Can you try forcing Vulkan and see if you can profile then? Also see what happens if you render with OpenGL by switching to “Compatibility” renderer.
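If I remember correctly, you can force a backend from the command line without touching project settings (a sketch; verify the flags with `godot --help` for your build, and the project path is a placeholder):

```
# Force Vulkan (Forward+ / Mobile renderers):
godot --rendering-driver vulkan --path /path/to/project

# Switch to the Compatibility renderer (OpenGL):
godot --rendering-method gl_compatibility --path /path/to/project
```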
Absolute values don’t really matter. Click on those two largest stacks on the chart to see which parts of the rendering pipeline they represent, so you get some clue where the actual bottlenecks are.
Yes. So you can optimize GI, or make it optional for the player. GI is always expensive, so special care needs to be taken with it; it’s not enough to just switch it on. If it isn’t significantly contributing to your visual mood, you can simply do away with it completely.
And a large opaque pass means you’re brute-force rendering too much stuff. For a start, check how many draw calls you typically have (in the debugger’s Monitors tab). If the number is big, see what strategies you can use to bring it down.
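Besides the Monitors tab, the same counter can be read from a script (a sketch using Godot 4’s Performance singleton):

```gdscript
# Print the number of draw calls issued in the last rendered frame.
func _process(_delta: float) -> void:
	var draw_calls := Performance.get_monitor(Performance.RENDER_TOTAL_DRAW_CALLS_IN_FRAME)
	print("Draw calls: ", draw_calls)
```

Common ways to bring the number down are MultiMesh instancing, merging static geometry, and sharing materials so batches aren’t broken.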