Nophys is an experimental project where I play around with the engine, making a 3D game without using any of the dedicated physics engines (default, Jolt, etc.) provided through Godot. I currently don’t have any idea for an interesting game with a somewhat good story, so I just tinker with this idea of not using a physics engine in my free time. Hopefully, a good idea/story will pop up while working on this project.
The idea was inspired by a video made by lawnjelly where he was working on a feature called Navigation Physics. There’s also a proposal made by him for this particular feature.
Since not all types of 3D games will work without physics, I want to make a game similar to the Digimon Story & Persona series. Player movement in these series and some older JRPGs was mostly limited, with overhead/static cameras, mostly static or non-collidable moving NPCs, and a turn-based battle system where a physics engine is not strictly required, just convenient.
Still, that sounds and looks very ambitious for one person (who still has no game idea) to make. Instead of a JRPG, let me reword it into a low poly 3D Visual Novel game with a couple of turn-based battles in-between.
Now, that sounds achievable
Why?
No particular reason. I’m just intrigued by the idea & want to tinker/play around with Godot Engine while waiting for some game idea to materialize, and do some devlogging on the Godot forum. Oh, I’m also experimenting with another language: Swift. It’s mostly a devlog about me trying stuff and failing at it.
How?
At least for the player, it will extend the regular Node3D instead of using CharacterBody3D/RigidBody3D, and move by manipulating its position with the help of the navigation features (NavigationRegion3D, NavigationAgent3D, etc.), which provide the path and boundary of a walkable area. Workarounds are also needed where useful nodes/features for 3D games that rely on the physics engine are unavailable, such as SpringArm3D to avoid the camera clipping through walls, and RayCast3D/Area3D for body/area collision checks.
I have already put some work into this project, which I will update periodically in this topic. Not much, but enough content for another post, and the workarounds I’ve done were mostly math. I mean a lot of math. I’m not that smart, but fortunately it’s 2025 and there is a lot of work already done by great people which I can use and learn from, including Godot Engine itself, where many math-related helper functions are available such as Geometry2D & Geometry3D. So, hopefully this will work somehow.
The player will be controlled in 3rd person within a walkable area provided through NavigationRegion3D, either by baking a NavigationMesh or from a GLTF file with the -navmesh name suffix.
# Player
@export var speed := 3.0  # Assumed movement speed
@onready var navigation_agent_3d: NavigationAgent3D = $NavigationAgent3D

func _physics_process(delta: float) -> void:
	var input_dir := Input.get_vector("left", "right", "forward", "backward")
	var dir := Vector3(input_dir.x, 0.0, input_dir.y)
	dir = dir.normalized()
	dir.y = 0.5
	var curr_pos := global_position
	var target_pos := curr_pos + dir
	# Set agent's target
	navigation_agent_3d.target_position = target_pos
	if not navigation_agent_3d.is_navigation_finished():
		# Returns a position within the NavigationMesh area
		var move_dir := navigation_agent_3d.get_next_path_position() - curr_pos
		# Move player toward the next path position
		global_position += move_dir.normalized() * speed * delta
The direction vector is obtained from player input and set as the agent’s target. Then, the player can be moved to the position returned by get_next_path_position() (get_final_position() can also be used). Note that checking is_navigation_finished() every frame might be expensive; a boolean check or the navigation_finished signal can be used instead.
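A minimal sketch of the signal-based alternative (the flag name is an assumption, not from the original code):

```gdscript
var nav_done := false

func _ready() -> void:
	# Emitted when the agent reaches the end of its path,
	# so there is no need to poll is_navigation_finished() every frame
	navigation_agent_3d.navigation_finished.connect(_on_navigation_finished)

func _on_navigation_finished() -> void:
	nav_done = true
```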
Mini GIF Tutorial
Somehow I got myself distracted making GIFs: using ffmpeg to split the video into frames, which are then imported into GNU Image Manipulation Program for final editing. I ended up with a GIF whose file size is larger than the original video, but it’s nice to have a visualization that loops forever.
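For reference, the frame extraction could look something like this (the input and output file names are assumptions, not from the original post):

```shell
# Split input.mp4 into numbered PNG frames at 24 frames per second
ffmpeg -i input.mp4 -r 24/1 frames/frame_%04d.png
```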
The -b:v and -r options are the video’s bitrate and framerate, where 24/1 means 24 frames per second. The number of output frames can be reduced by changing the -r value, e.g. 24/3, which means 24 frames per 3 seconds.
The video’s information can be shown using ffprobe, which is usually installed alongside ffmpeg.
ffprobe input.mp4
Moving on, inside GIMP, from the menu bar: File > Open as Layers.... Before exporting, the playback can be previewed from Filters > Animation > Playback.... Some file-size optimization can be done through Filters > Animation > Optimize (for GIF), but it can affect the quality of each frame. Since it copies each layer into another working file before applying the filter, it can be tested without losing the original layers. Finally, it can be exported to a GIF file from File > Export As....
At the time of writing this post, I found a command line program, gifsicle, for manipulating animated GIFs.
gifsicle --scale 0.5 -O3 animation.gif -o animation_small.gif
Camera for 3rd Person
As seen in the GIFs above, the camera does clip into the wall, since the usual class/node for the job, SpringArm3D, is unavailable. So, intersections need to be checked against the geometry itself.
The camera scene contains a greenish sphere representing the target/camera position, and a reddish sphere marking the maximum distance of the camera position along the local Z-axis from the scene’s origin. By default, the camera will move toward the end position, and will move to an intersection point if it collides. A typical SpringArm3D setup.
Walls in this case are flat surfaces with a 90 degree inclination which form the map’s outer boundary. In Godot, there’s a resource called PolygonPathFinder which provides a 2D polygon with useful methods such as get_intersections(from: Vector2, to: Vector2) & is_point_inside(point: Vector2). This resource only involves two axes, but it can be used here by omitting the Y-axis value and operating only on the XZ-axes, making sure the camera stays inside the map’s outer boundary. It’s like viewing the scene from the top, where only two axes matter.
For a simple map like a square, the resource would be a square polygon with 4 edges, and every frame we check whether there is any intersection between the camera segment and the map edges. If yes, move the camera to the intersection point, which sits in front of the wall.
Set up the PolygonPathFinder with an array of points that define the vertices of the polygon, and an array of indices that determine the edges of the polygon. The length of connections must be even; setup returns an error if it is odd.
var polygon_path_finder = PolygonPathFinder.new()
var points = [Vector2(0.0, 0.0), Vector2(1.0, 0.0), Vector2(0.0, 1.0)]
var connections = [0, 1, 1, 2, 2, 0]
polygon_path_finder.setup(points, connections)
Now, where to get the vertices? I first tried getting the values from the NavigationMesh itself using get_vertices() & get_polygon(idx: int), but some undesired things happened, like duplicated and unsorted vertex positions. It might be something wrong on my end, and there might be a more elegant way of doing things, but I took this opportunity to try a different approach: fetch the data from Blender.
Blender Plugin
Blender is my software of choice for modeling. Its functionality can be extended through plugins written in Python, with some extra boilerplate for the editor integration. For mesh/geometry data access, the BMesh module can be used to extract the data and dump it into a JSON file, so that Godot can read it at runtime.
I don’t know what most people’s workflow is when making Blender plugins, but the editor’s built-in IDE has no syntax checking or inline documentation, and unreliable auto-completion. Not sure if that’s the default experience or if there’s a plugin that needs to be enabled. I also found fake-bpy-module, which can be used with an external editor for a better workflow.
The plugin lives in the sidebar panel, which pops in and out when pressing the N key. About 300 lines of code in total. The export method itself is just a couple of lines; it’s mostly the boilerplate to set up the UI which takes a lot to understand and implement.
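The export itself could be sketched like this (a reduced example, not the actual plugin; the mesh name and keys are assumptions, and the bmesh access is shown only as a comment since it needs to run inside Blender):

```python
import json

def mesh_to_dict(vertices, faces):
    """Flatten vertex coordinates and face indices into a JSON-friendly dict."""
    return {
        "vertices": [list(v) for v in vertices],
        "faces": [list(f) for f in faces],
    }

# Inside Blender, the data would come from the BMesh module, roughly:
# import bmesh
# bm = bmesh.new()
# bm.from_mesh(obj.data)
# verts = [tuple(v.co) for v in bm.verts]
# faces = [[v.index for v in f.verts] for f in bm.faces]
# with open("MeshInfo.json", "w") as f:
#     json.dump({"Ground": mesh_to_dict(verts, faces)}, f)
```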
In Swift, there’s a Codable protocol which you can adopt to serialize a type to and from any of Swift’s built-in data formats:
struct MeshData: Codable {
    var vertices: [SIMD3<Float>]
    var indices: [Int]
    var connections: [Int]
    var faces: [SIMD3<Int>]

    // If a variable name should differ from its JSON key
    private enum CodingKeys: String, CodingKey {
        case vertices
        case indices
        case connections
        case faces
        //case someExample = "some_example"
    }
}
Then, decode the JSON file:
import Foundation
import SwiftGodot  // Provides GD.printErr

struct MeshDataLoader {
    func load(_ meshName: String) -> MeshData? {
        var data: MeshData? = nil
        do {
            let filePath = "MeshInfo.json"
            let fileCont = try String(contentsOfFile: filePath, encoding: .utf8)
            guard let fileData = fileCont.data(using: .utf8) else {
                GD.printErr("Failed load: ", filePath)
                return nil
            }
            let decoder = JSONDecoder()
            let decoded = try decoder.decode([String: MeshData].self, from: fileData)
            guard let meshData = decoded[meshName] else {
                GD.printErr("Failed load: ", meshName)
                return nil
            }
            data = meshData
        } catch {
            GD.printErr(error)
        }
        return data
    }
}
With the PolygonPathFinder set up, intersections with the segments/edges of the polygon can be checked by passing the camera segment’s global positions.
# --snip--
@onready var camera := $Start/Camera
@onready var start_segment := $Start
@onready var end_segment := $Start/End

func _physics_process(delta: float) -> void:
	# --snip-- (polygon_path_finder is assumed to be set up earlier)
	var min_point := Vector2(
		start_segment.global_position.x,
		start_segment.global_position.z
	)
	var max_point := Vector2(
		end_segment.global_position.x,
		end_segment.global_position.z
	)
	var intersections: PackedVector2Array = polygon_path_finder.get_intersections(
		min_point,
		max_point
	)
	if intersections.is_empty():
		camera.position.z = end_segment.position.z
	else:
		# With more than one intersection, pick the one closest to the start
		var closest := intersections[0]
		for point in intersections:
			if point.distance_to(min_point) < closest.distance_to(min_point):
				closest = point
		# Place the camera at the intersection (camera is a child of Start,
		# offset along the local Z-axis)
		camera.position.z = min_point.distance_to(closest)
The code itself is self-explanatory, and it works fine and well.
For the second GIF, it does put the camera in front of the walls, but it still clips a little into the wall and is able to peek into the void. I will try to solve this and discuss it in the next post.
It would be nice if there were no need for a separate resource, and if it could be done through the navigation-related classes themselves. Fortunately, there’s already a proposal to introduce a somewhat raycast-like feature.
The post is all over the place, but that’s it for this update
The camera should be positioned at the collision point when it moves into the ground, the same as when it moves into a wall. At first, the idea was just to clamp the Y value higher than the position of the ground, so the camera won’t go into it at all. That means the ground would need to always be flat while the camera follows the player in third person view, but that would be boring and limited in terms of map/level design.
The solution is similar to what was done for boundary detection, except the camera’s segment will be checked for collision against the ground mesh itself, repositioning the camera to the collision point if it moves into the ground.
Geometry3D provides geometric operations for 3D, such as segment_intersects_triangle to get the intersection point with a triangle of 3 vertices. The triangle faces of the ground are provided through the Blender plugin, and can then be looped through to check for collisions using the helper function. Optimization might be needed if the map is too large, by separating it into multiple small chunks.
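A minimal sketch of that loop, assuming `faces` holds the triangles loaded from the Blender export (the function name and data layout are assumptions):

```gdscript
# faces: Array of triangles, each an Array of three Vector3 vertices
func intersect_ground(from: Vector3, to: Vector3, faces: Array) -> Variant:
	for tri in faces:
		# Returns the intersection point as a Vector3, or null if none
		var hit = Geometry3D.segment_intersects_triangle(from, to, tri[0], tri[1], tri[2])
		if hit != null:
			return hit
	return null
```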
Similar to the previous post, the camera can still peek through the ground when intersecting at a small angle. With SpringArm3D, this can be solved by supplying a Shape3D which is cast along its Z-axis instead of a ray. The usual choice would be a SphereShape3D at a given radius, which is fast to check collisions against.
For PolygonPathFinder, I assume it can be replicated by using get_closest_point on the edges of the polygon, and then checking the distance between that point and the end point of the camera segment. For example, if the distance between the two points is lower than a minimum distance, the camera retracts toward the player position, and if the distance is above the minimum distance, it does the opposite.
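A rough sketch of that idea (min_dist, retract_speed, and the node names are assumptions):

```gdscript
func _update_retract(end_point: Vector2, delta: float) -> void:
	# end_point: camera segment end projected onto the XZ plane
	var closest := polygon_path_finder.get_closest_point(end_point)
	var dist := end_point.distance_to(closest)
	if dist < min_dist:
		# Too close to the boundary: pull the camera toward the player
		camera.position.z = maxf(camera.position.z - retract_speed * delta, 0.0)
	else:
		# Far enough: extend back toward the end position
		camera.position.z = minf(camera.position.z + retract_speed * delta, end_segment.position.z)
```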
I tried to implement it; it somewhat works but has some glitches here and there. The same would also be needed for the ground check, which might be more complicated. I didn’t have enough patience to make it work, so I dropped it and used a different approach.
Instead, I duplicated the ground geometry (separate by selection), scaled it down inside Blender, and used it as a reference for the collision check. Given the offset between it and the real geometry, the camera is no longer able to peek through the wall & ground.
The downside is more manual work. Since the duplicated geometry needs to sit between the real one and the navigation mesh, I also manually create the navigation mesh and import it into Godot using the -navmesh name suffix.
This approach works and is maintainable on a small map, but might not work on a more complex map/level. As the project goes on, I’ll improve the workflow and approach to avoid the edge cases it might come with.
The pill shaped player now moves relative to the third person camera. Currently, the player has a simple state machine with idle and move states. Acceleration and deceleration were also added, controlled inside the move state. The player controller is now somewhat usable for moving around these empty test maps. One problem found: the player can get stuck at low, sharp angles within the NavigationMesh.
I was about to test PeerTube embedding, but it does not work. I guess the forum itself needs to whitelist the domain under the allowed onebox iframes. I created a PeerTube channel at MakerTube to post short clips about the project, which can then be embedded here. Not today, but surely tomorrow.
More Camera Work
Some code for the camera placement when colliding with walls needs adjustment, since it involves the Y-axis, and the intersection point does not have that information.
When viewed from the side, the Y value of the intersected point is always zero regardless of the rotation of the camera around the X-axis, represented by the blue dotted line. With some math, the Y value on the red dotted line can be found from the X or Z value of the intersected point using the line equation y = mx + c.
func find_y_at(x: float, p0: Vector2, p1: Vector2) -> float:
	var dx := p1.x - p0.x
	var dy := p1.y - p0.y
	var m := dy / dx  # Slope; dx must not be zero (vertical line)
	var c := p0.y - m * p0.x
	return m * x + c
p0 and p1 are the points that form the red dotted line, which is the camera vector. m is the slope and c is the Y-intercept. Another solution would be to shoot a vertical segment up from the intersected point and find the new intersection point on the red dotted line using the segment_intersects_segment method from the Geometry2D helper functions.
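The second approach could look like this in side-view 2D space (hit_x and the probe height are assumptions for illustration):

```gdscript
# p0, p1: the points forming the red dotted line (camera vector) in side view
# hit_x: horizontal coordinate of the intersected point
var probe_from := Vector2(hit_x, 0.0)
var probe_to := Vector2(hit_x, 1000.0)  # Arbitrary large height
# Returns the intersection point as a Vector2, or null if the segments miss
var point = Geometry2D.segment_intersects_segment(probe_from, probe_to, p0, p1)
if point != null:
	var camera_y: float = point.y
```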
That is all.
The camera implementation, while not perfect, is done for now. Basic player movement has also been implemented. With stock Godot features, these could have been done in under a week, or even a day.
Hmm.. the updates keep coming late and getting shorter. Not much work done. I can see that future updates will be less frequent too, but hopefully they will not stop.