Is it possible to make different servers/idle processing run at different frequencies?

In recent years, some of the higher-quality FPS games started separating their engines into different parts, where they run input/character physics/rendering at different processing rates. This makes these games less dependent on having a good PC: even if a player has a low rendering FPS, the overall input/hit registration/RPC latency remains relatively low and constant (click-to-photon latency is high, click-to-action latency is low), given the game loop/input loop/physics loop run at 1000 FPS.

I was wondering how hard it would be to modify the Godot engine to behave this way? I don't really understand C++, and definitely not threading, but reading the blog posts about how Godot's multithreading works, it seems to me it should be possible, given the server-based architecture?

I've seen people mention set_use_accumulated_input(false) or the physics loop, but these provide absolutely no benefit when it comes to latency. Accumulated input only benefits drawing accuracy, not latency: the inputs are still buffered until the next frame, and a new frame iteration can only happen when the idle, rendering, and physics loops have all finished their work, so the entire system's latency will be limited by the slowest part.

The physics loop can technically run at higher rates, but it's driven by delta time, not delta time plus wall time, which means it will simulate, let's say, 3 physics steps within 1 ms, then sleep for 30 ms, depending on how slow the rendering is compared to physics and how many physics calculations are needed per second. This is not a reliable solution, and I also don't think it refreshes the input events unless agile input is implemented (which only works on Android).

Is this important? Honestly, for many games I don't think it is. But there is a reason why FPS games started doing this. It's a fairer system, and I would love it if Godot could provide something similar. This could be useful for every multiplayer FPS game, for driving-simulator games (some of them already have this system on consoles), maybe even platformers.

Just for example: FPS game on Steam Deck, 45 FPS, 22 ms between every frame. If a person gets unlucky and sends input right after the tick, they get the full 22 ms penalty. This, added to network latency plus server processing, can add up to quite a bit. Also, since the game ticks at 45 Hz, even if physics runs at a 90 Hz rate, the hit registration won't be consistent, given we can't tell when exactly the input happened. There are proposals to attach time data to input events (which could be used to calculate an accurate position for enemy players to raycast against); while this would help make hit registration more accurate, running the input polling at 1000 Hz, for example, could still give lower latency.

I don't want to open a proposal because I feel I'm pretty incoherent, and I think there are a few proposals that might address this exact issue, but I'm not sure. So the core issue, I think, is that the different servers wait on each other to finish. Maybe they could still sync with each other in a blocking way like they do now, but then let each other work, with a separate max_fps property for each? Something like idle_loop_max_fps. Why does the entire engine have to wait on the renderer to finish?

Some games that have this kind of input system:

Reflex Arena
https://www.reddit.com/r/reflex/comments/499s68/regarding_the_responsive_input_in_reflex_official/

Overwatch ( New Feature – High Precision Mouse Input )

Counter-Strike 2 (not sure though, they might only do input events with timestamps?)

Quake3e

Diabotical

Maybe I don’t understand you fully, but

games started separating their engines into different parts

Godot does the same with _process and _physics_process.

given the game loop/input loop/physics loop run at 1000 FPS

Wait, 1000? I don’t think I’ve heard of any game that runs its logic at 1000 FPS. I think that is overkill.

Accumulated input only benefits drawing accuracy, not latency: the inputs are still buffered until the next frame

This would happen regardless of fps - you only process your inputs on the next frame. Trying to process every input as it happens would complicate your gameplay code a lot, for probably not much benefit in return.

The physics loop can technically run at higher rates, but it's driven by delta time, not delta time plus wall time, which means it will simulate, let's say, 3 physics steps within 1 ms, then sleep for 30 ms, depending on how slow the rendering is compared to physics and how many physics calculations are needed per second.

What's wall time? Also, correct me, but that's not my understanding at all. Yes, the point of a physics step is to simulate fixed intervals, so that the physics simulation remains consistent. It won't run 3 steps in 1 ms and then sleep, though. It is independent of your rendering framerate, and IIRC it runs as many steps as it needs to catch up. So if you're running physics at 60 fps and the last run was 17 ms ago, it runs a single step. If something took longer and the last step was 35 ms ago, it runs two steps.

And in every step simulated, the delta time is fixed.

This is not a reliable solution, and I also don't think it refreshes the input events unless agile input is implemented (which only works on Android).

Tbh, I didn't check Godot in depth in this regard, but usually input events come from the OS as they happen. I'm not sure if Godot passes these to your game the same way, though, or if it buffers them until the next frame.

FPS game on Steam Deck, 45 FPS, 22 ms between every frame. If a person gets unlucky and sends input right after the tick, they get the full 22 ms penalty.

And this is the point where I started thinking that this is not actually a game loop issue. Especially if you consider the physics loop setup from above.


So, one of the things I want to highlight is that games often tie their actual gameplay simulation to a fixed FPS. This makes the simulation more stable and guarantees smoother gameplay, as your movements won't jitter when the FPS drops from 95 to 72 for a bit, since your simulation is updated at 60 fps, for example.

Which brings us to the issue of the penalty. Your example with 45 fps doesn't hold up if the physics loop is locked to 60 fps and the hardware can handle it consistently. Then again, if the hardware can't keep up, there's no point in raising your physics or input polling or any other kind of FPS, because the device simply can't do things that fast.


And overall, I think this is more an issue of how to handle input. For most games, that potential 16 ms penalty is fine, because the game is just not that fast. For other cases, you can get away with timestamping your inputs. I.e., you don't just say "I want to move east" but "I've been wanting to move east for the last 8 ms". This is useful for cases where, for example, you get the input at t+8 ms and the next frame runs at t+16 ms. This way, you can take into account that the input was received between two simulation steps.


Maybe I'm just on the wrong track with your post because there's some confusion/conflation between input polling frequency and game state update frequency? But again, even if you have really high-resolution input data, there are only so many updates per second that you can reasonably do on your game. And there are only so many frames you can reasonably render.

Reflex Arena runs its character logic/input handling at 1000 Hz; there is a Reddit comment linked in my post. For Overwatch and CS2, I'm not sure what they're doing, but they both advertise accurate mid-frame inputs. (It could just be timestamped inputs, though.) This is not possible with Godot out of the box.

1 kHz seems high, but not much has to be calculated. Mainly it just has to update the mouse position and check whether the user is holding down some action button; if they are, it calculates the action (a raycast plus maybe a full physics step, for example) a single time.

I can see this could cause issues in a logic-heavy game, but it could be optimized: for example, a new frame iteration could be tied to events like a mouse button press. Then it's not even necessary to have any logic running at a high rate, other than input reading from the OS and a few checks on those inputs by the engine. But for this, we need to be able to request a new frame iteration at any given time (if one isn't already running, obviously), which is currently not possible, because process, physics, and rendering, even though technically separated, still wait on each other to finish the current frame. Most of the time they will be waiting on the GPU.

By wall time I mean real-world time; there is an article about Unity's physics loop that explains it: Fix your (Unity) Timestep! — John Austin

Godot behaves like Unity does under heavy load. This has advantages, but it also means you cannot rely on the physics loop to be even semi-reliable when it comes to latency. For example: 100 fps physics, 50 fps rendering. Let's say each physics iteration takes 1 ms. This means that even though you technically have a loop that calculates with a 10 ms delta time, it will have sleeping periods of 18 ms where nothing happens and everyone just waits on the GPU. In Unity, it would be more like a 9-10 ms sleep on average. The delta is always the same in both engines. But you could imagine a ping server run from the physics loop of each engine (just a dumb example :D); Unity would give lower-latency responses. This is what I mean by the 45 fps latency penalty: with Godot you can't get past this latency penalty, and you get no latency benefit from raising the physics loop rate (unless you push it to the limit, but then you risk the physics loop slowing down the rendering, because everything else will wait on that part of the engine, so there is no point in doing that. And even that wouldn't work, because inputs are not refreshed until the next frame).

For input, Godot handles events very well (it never skips input, it's accurate, and overall low-latency), but it definitely collects them until the next frame begins. Checking Time.get_ticks_usec(), they always arrive when a new frame is starting. For example, when the renderer runs at 10 fps, input events always have a 100 ms gap between them, even with accumulated input off (that will just spam you with every single event, so about 100 events at the beginning of every frame for a 1 kHz mouse, which is not very useful unless you're using the data to draw). So some of those events are delayed by almost a full frame.

I know it's nitpicking, because most FPS games will run at 150+ fps, but I believe there is a reason why new FPS games use these techniques. A 30 fps game has almost 33 ms of jitter just based on when you press the input. That could be reduced to zero jitter: more consistent hit registration, more consistent network packet latency. It's not about the rendering, but about overall game feel. Some of these issues could be solved by timed inputs, but the latency jitter is unsolvable unless we can iterate parts of the engine at different rates and get input instantly. I know people also want this for physics, for example, but it would be very cool if other parts of the engine could be iterated at will as well, without having to wait for the renderer. I just wonder whether this is impossible by the design of the engine, or whether games could achieve this relatively easily with some small engine modification and be as good as Counter-Strike 2 or Overwatch.