This is not a simple thing, so let's start with the TL;DR in three points and then go into some detail.
- When you add a RigidBody as a child, you have to keep in mind that the parent node isn't governing the movement of the RigidBody. That is controlled by the physics engine, which will apply gravity to it, making the RigidBody simply fall down. Yes, the parent moving will have an effect, but mostly that effect will be the physics engine fighting against the movement.
- You get the position of the hands simply by accessing the `global_transform` on the `XRController3D` node. There are more complex ways to get at the internal data directly, but just reading out the transform on the controller node is easiest (see the snippet right after this list).
- Godot XR Tools has an implementation that you can look at: godot-xr-tools/addons/godot-xr-tools/hands/scenes/collision at master · GodotVR/godot-xr-tools · GitHub. These nodes can be added as a child of the XRController3D node and they will be positioned correctly, keeping collisions in mind. There are some demos in XR Tools that show how it works.
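As a minimal illustration of the second point: a script on the controller node itself can just read its own transform each frame. This is only a sketch; what you do with the pose is up to you.

```gdscript
extends XRController3D

func _process(_delta: float) -> void:
    # The XR tracking system updates this node's transform every frame,
    # so global_transform is the current hand pose in world space.
    var hand_pose: Transform3D = global_transform
    print(hand_pose.origin)  # world-space hand position
```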
Then for the longer answer: there are a couple of problems we're dealing with here. The main one is that as you move your hand, and the tracking system updates the position of the node, that change of position happens outside of physics. It's akin to teleporting from the old location to the new location. You are not actually moving the node along a path, checking for collisions; no, if you child a physics body to your hand, you're just teleporting it inside the wall and the physics engine panics.
So the way to approach this is that you need a physics node that follows the hand around and properly checks for collisions along that path. In the XR Tools example we do this by childing a physics body to the hand but setting its `top_level` property to `true`. This means the parent is ignored when positioning this node. Now we can move this node from its current location to the location of its parent.
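A minimal sketch of that follow pattern, assuming the body is a direct child of the `XRController3D` node (this shows the general idea, not the actual XR Tools code):

```gdscript
extends CharacterBody3D

func _ready() -> void:
    top_level = true  # ignore the parent when positioning this node

func _physics_process(_delta: float) -> void:
    # The parent holds the tracked hand pose; move toward it with a
    # collision-checked move instead of teleporting there.
    var target: Transform3D = (get_parent() as Node3D).global_transform
    move_and_collide(target.origin - global_transform.origin)
    global_transform.basis = target.basis  # rotation is copied unchecked
```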
How this is implemented then differs depending on the type of physics body you use, and so far people are still experimenting with the different options, trying to find out which works best.
My experience is as follows:
- You can use a RigidBody3D node, and at face value that seems to be the best choice as it does all the right things collision-wise. But there is no mechanism to tell it to move to a new location. The best way I've found is to implement a custom integrator that performs the follow logic (see the first sketch after this list). That works pretty decently, but I have found it can do weird things on certain collisions.
- You can use a CharacterBody3D node, and this gives you access to `move_and_collide` and `move_and_slide` as methods to move (second sketch below). This works reasonably well from the POV of stopping the hand from going through walls and tables, but I've found the interaction with RigidBodies to be immensely unstable, even being able to push objects through floors. Also, the collisions are one-directional, which is a problem.
- Finally, what we do in XR Tools is use AnimatableBody3D and then use `move_and_collide` to implement our own collision code and apply forces to any RigidBody3D we collide with (third sketch below). This has by far proven the most stable, but we lack the ability to properly handle collisions while rotating the hand.
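For the RigidBody3D route, here is a sketch of a custom integrator that chases the hand by setting the body's velocities each physics step. This is my own minimal approximation of the idea, not the exact XR Tools code; closing the full gap in a single step is deliberately naive, and in practice you would damp or clamp these velocities:

```gdscript
extends RigidBody3D

func _ready() -> void:
    top_level = true          # ignore the parent's transform
    custom_integrator = true  # replace built-in integration (incl. gravity)

func _integrate_forces(state: PhysicsDirectBodyState3D) -> void:
    var target: Transform3D = (get_parent() as Node3D).global_transform
    # Linear velocity that closes the positional gap within one step.
    state.linear_velocity = (target.origin - state.transform.origin) / state.step
    # Angular velocity from the rotation still needed to reach the target.
    var delta_rot := (target.basis * state.transform.basis.inverse()).get_rotation_quaternion()
    if delta_rot.get_angle() > 0.0001:
        state.angular_velocity = delta_rot.get_axis() * (delta_rot.get_angle() / state.step)
    else:
        state.angular_velocity = Vector3.ZERO
```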
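For the CharacterBody3D route, a `move_and_slide` variant of the same follow logic (again just a sketch; the velocity computation assumes you want to close the gap in one physics step):

```gdscript
extends CharacterBody3D

func _ready() -> void:
    top_level = true
    # A hand isn't a walking character, so skip floor and gravity logic.
    motion_mode = MOTION_MODE_FLOATING

func _physics_process(delta: float) -> void:
    var target: Transform3D = (get_parent() as Node3D).global_transform
    # Velocity that would reach the hand this step; move_and_slide stops
    # us at walls and tables instead of tunneling through them.
    velocity = (target.origin - global_transform.origin) / delta
    move_and_slide()
    global_transform.basis = target.basis
```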
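And for the AnimatableBody3D route, a sketch of the hand-rolled collision response, loosely modeled on the XR Tools approach (the `push_strength` factor here is a hypothetical tuning knob, not an XR Tools parameter):

```gdscript
extends AnimatableBody3D

@export var push_strength: float = 5.0  # hypothetical tuning factor

func _ready() -> void:
    top_level = true
    sync_to_physics = false  # required to use move_and_collide on this body

func _physics_process(_delta: float) -> void:
    var target: Transform3D = (get_parent() as Node3D).global_transform
    var motion := target.origin - global_transform.origin
    var collision := move_and_collide(motion)
    if collision:
        var body := collision.get_collider() as RigidBody3D
        if body:
            # Push the body we hit, applied at the contact point.
            body.apply_impulse(
                motion * push_strength,
                collision.get_position() - body.global_transform.origin)
    # Rotation is copied directly, which is why collisions caused purely
    # by rotating the hand go undetected.
    global_transform.basis = target.basis
```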
All in all, my conclusion at this moment is that the physics engine in Godot (and, to the best of my knowledge, many other physics engines) lacks the ability to do this perfectly, and it's always a best effort.
One last thing: I have heard that others have had success using SpringArm3D or rolling their own raycast-based solution, but I have no examples to point at here.