I’m trying to make some sort of interactive birthday card. The idea is to deliver this as an Android app, where you can tap 2D or 3D elements that pop up a dialogue or play some sort of animation, and later I would like to be able to pan the background a bit with the gyroscope.
At the same time I’m trying to learn, so I would like to know what the best ways to solve these problems are.
I know that one of the best ways to handle input is to go to Project Settings > Input Map, register your actions, and then check them with Input.is_action_pressed() and friends, but there is no way to bind touchscreen input like taps this way.
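For reference, the action-based approach described above looks roughly like this (a sketch assuming Godot 4 and an action named "interact" registered in the Input Map; the action name is an assumption):

```gdscript
extends Node

func _process(_delta: float) -> void:
	# "interact" is a hypothetical action registered in
	# Project Settings > Input Map and bound to a key/button.
	if Input.is_action_just_pressed("interact"):
		print("interact action fired")
```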
Another way I know of to handle input like the mouse is to use the mouse_entered(), mouse_exited() and input_event() signals of a physics body, so you only look for input while the mouse is inside the object you want to select, but at first glance there are no equivalent signals for touch controls.
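The signal-based picking approach would look something like this (a sketch assuming Godot 4 signal signatures on a CollisionObject3D such as a RigidBody3D; physics object picking must be enabled on the viewport, which it is by default):

```gdscript
extends RigidBody3D

func _ready() -> void:
	mouse_entered.connect(_on_mouse_entered)
	input_event.connect(_on_input_event)

func _on_mouse_entered() -> void:
	print("pointer over object")

func _on_input_event(_camera: Node, event: InputEvent,
		_position: Vector3, _normal: Vector3, _shape_idx: int) -> void:
	# Mouse clicks arrive here; touch events can also arrive
	# as InputEventScreenTouch when picking is enabled.
	if event is InputEventMouseButton and event.pressed:
		print("clicked")
```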
I could try to have a script whose _unhandled_input() reacts to InputEventScreenTouch when event.is_pressed() (plus other conditions to distinguish it from a swipe) and casts a ray to activate a method on the interactable, but I don’t know whether that respects the UI when an overlay is in between, or whether that’s the best way to handle this type of input.
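A sketch of that raycast-on-tap idea, assuming the Godot 4 physics query API (the interact() method on the hit object is a hypothetical convention, not a built-in). Note that _unhandled_input() only receives events the GUI did not consume, so Control overlays still take priority:

```gdscript
extends Node3D

const RAY_LENGTH := 100.0  # assumed reach of the pick ray

func _unhandled_input(event: InputEvent) -> void:
	if event is InputEventScreenTouch and event.pressed:
		var camera := get_viewport().get_camera_3d()
		var from := camera.project_ray_origin(event.position)
		var to := from + camera.project_ray_normal(event.position) * RAY_LENGTH
		var query := PhysicsRayQueryParameters3D.create(from, to)
		var hit := get_world_3d().direct_space_state.intersect_ray(query)
		# interact() is a hypothetical method your interactables expose.
		if hit and hit.collider.has_method("interact"):
			hit.collider.interact()
```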
This is very similar to mouse motion input, which can’t be assigned to an action. And since these finger events are relative and highly contextual, you can’t simply map any given single finger event to a particular action. You would probably have a touch manager that judges the relevant context, determines the “action”, and injects it into the event queue for another node to consume.
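A minimal sketch of such a touch manager: it classifies a press/release pair as a tap and injects a named action via Input.parse_input_event(). The action name "tap" and the time/distance thresholds are assumptions, and the "tap" action would need to exist in the Input Map for other nodes to query it:

```gdscript
extends Node

const MAX_TAP_TIME := 0.3       # seconds; assumed threshold
const MAX_TAP_DISTANCE := 24.0  # pixels; assumed threshold

var _press_time := 0.0
var _press_pos := Vector2.ZERO

func _input(event: InputEvent) -> void:
	if event is InputEventScreenTouch:
		if event.pressed:
			_press_time = Time.get_ticks_msec() / 1000.0
			_press_pos = event.position
		else:
			var held := Time.get_ticks_msec() / 1000.0 - _press_time
			var moved := event.position.distance_to(_press_pos)
			if held < MAX_TAP_TIME and moved < MAX_TAP_DISTANCE:
				# Inject a synthetic "tap" action into the event queue.
				var action := InputEventAction.new()
				action.action = "tap"
				action.pressed = true
				Input.parse_input_event(action)
```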
func _on_rigid_body_3d_input_event(camera, event, position, normal, shape_idx):
    if event is InputEventScreenTouch and event.pressed:
        print("object tapped")  # react to the tap here
For now, this seems like a nice solution to me; then I would have to turn this logic into a node or a class and have all interactive items inherit from it.
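That inheritance idea could be sketched like this, assuming an Area3D-based base class (the Interactable name and interact() hook are assumptions, not Godot built-ins):

```gdscript
# interactable.gd
class_name Interactable
extends Area3D

func _ready() -> void:
	input_event.connect(_on_input_event)

func _on_input_event(_camera: Node, event: InputEvent,
		_position: Vector3, _normal: Vector3, _shape_idx: int) -> void:
	if event is InputEventScreenTouch and event.pressed:
		interact()

func interact() -> void:
	# Override in subclasses: pop up a dialogue, play an animation, etc.
	pass
```

Each interactive card element would then extend Interactable and only override interact().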