I am making a tiny city builder where you can place buildings either by holding the left mouse button and dragging, or by dragging a finger on a touchscreen.
You can also pan the camera, either by holding the right mouse button and dragging, or by dragging two fingers on a touchscreen.
The obvious way to handle this is a bunch of elif branches that check for the mouse event or the corresponding touchscreen event. What I am wondering is whether there is a smarter way to map each mouse input to its touchscreen equivalent so the code stays readable.
func _input(event):
    if event is InputEventScreenTouch:
        if event.is_pressed():
            print(event.position)
    if event is InputEventScreenDrag:
        print(event.position)
        print(event.speed)  # 'speed' in Godot 3; renamed 'velocity' in Godot 4
        print(event.index)
        print(event.relative)
    if event is InputEventMouseMotion:
        print(event.relative)
    # etc ...
I’m not aware of any configuration options for that mapping. I don’t think it does more than map press/release and motion events one way or the other. In my own experience I wanted to keep the distinction anyway, because I ended up having to treat the two modalities differently.
On a PC you can’t have more than one mouse pointer, but a touchscreen can report multiple drags at the same time; that’s why InputEventScreenDrag carries event.index. You could try Emulate Mouse From Touch, but that can trigger a bug (I don’t remember the exact details) specific to Emulate Mouse From Touch and Emulate Touch From Mouse: a mouse never disappears and reappears on a PC, but that is exactly what happens every time a finger touches the screen.
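To make the multi-touch point concrete, here is a small sketch (Godot 4 API; the names are my own) that tracks every active finger by event.index, something a single mouse can never produce:

# Sketch: track each finger by its index.
var active_touches: Dictionary = {}  # index -> last known position

func _input(event: InputEvent) -> void:
    if event is InputEventScreenTouch:
        if event.pressed:
            active_touches[event.index] = event.position
        else:
            active_touches.erase(event.index)
    elif event is InputEventScreenDrag:
        active_touches[event.index] = event.position
        print("%d finger(s) down" % active_touches.size())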
I have an app I made that works on Android using Emulate Mouse From Touch, and I have had no problems with it. However, my app only uses left-click. It’s also worth noting that Emulate Mouse From Touch ONLY works for Android, not iPhone. (If that’s important to you.)
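For reference, those emulation toggles live in the project settings. Here is a tiny sketch, assuming Godot 4's setting paths, that merely reads them (normally you would flip them in the Project Settings dialog rather than in code):

func _ready() -> void:
    # Read-only illustration; set these in Project Settings > Input Devices > Pointing.
    print(ProjectSettings.get_setting("input_devices/pointing/emulate_mouse_from_touch"))
    print(ProjectSettings.get_setting("input_devices/pointing/emulate_touch_from_mouse"))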
If you’re using gestures, I recommend coding for the gestures. As far as simplifying your code, I would do something like this:
func _input(event: InputEvent) -> void:
    if event is InputEventMouse:
        print("Handle mouse input")
    elif event is InputEventScreenTouch or event is InputEventGesture or event is InputEventScreenDrag:
        print("Handle touch screen input")
If you need more granularity, keep in mind that event processing happens in several stages in a fixed order; the event is picked up by the first handler that consumes it, and nothing further will process it. Once in a while this ordering produces gotchas.

Furthermore, your input processing does not need to live in any single file. You can put it wherever you want: as long as the node is in the tree, it will receive events. For example, for the camera panning, you can put the two-finger drag (and the right-mouse drag) in a camera.gd script on the camera node itself, something like the sketch below. This encapsulates your code and makes it easier to maintain and debug later. It also means you don’t need a crapload of if-else statements.
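A minimal sketch of what that camera.gd could look like; the node setup, the two-finger rule, and the zoom scaling are my own assumptions (Godot 4 API):

# camera.gd: hypothetical sketch attached to a Camera2D (Godot 4).
# Pans with a right-mouse drag, or with a two-finger touch drag.
extends Camera2D

var touch_points: Dictionary = {}  # active touch index -> last position

func _unhandled_input(event: InputEvent) -> void:
    if event is InputEventMouseMotion and (event.button_mask & MOUSE_BUTTON_MASK_RIGHT) != 0:
        position -= event.relative / zoom  # pan against the drag direction
    elif event is InputEventScreenTouch:
        if event.pressed:
            touch_points[event.index] = event.position
        else:
            touch_points.erase(event.index)
    elif event is InputEventScreenDrag:
        touch_points[event.index] = event.position
        if touch_points.size() == 2:
            # Both fingers emit drag events, so halve each one's contribution.
            position -= event.relative / (2.0 * zoom)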
Here’s an example I use to move objects around on screen. I added a screen-drag branch as a placeholder, but since right-click serves as the drag in my own project, I don’t have real code for it.
var is_dragging = false
var mouse_offset  # grab point captured on press

func _input(event: InputEvent) -> void:
    if event is InputEventMouseButton and event.button_index == MOUSE_BUTTON_RIGHT:
        if event.is_pressed():
            # get_visible_rect() is a Viewport method; if this script sits on a
            # CanvasItem instead, use get_viewport_rect().
            if get_visible_rect().has_point(event.position):
                is_dragging = true
                mouse_offset = event.position
        else:
            is_dragging = false
    if event is InputEventMouseMotion and is_dragging:
        # position is treated as a Vector2i here (e.g. a Window); drop the
        # cast if you are moving a Node2D.
        position += Vector2i(event.position - mouse_offset)
    if event is InputEventScreenDrag:
        print("Do drag code here")
I also recommend downloading Android Studio. It’s free, and with it you can create emulators that let you test your app; I believe you can even test gestures with them. This would allow you to try something, export an APK, load it into the emulator, and try it. Setting up the emulator may require enabling virtualization in your BIOS, but it’s worth it. I can make a code change and be testing it in the emulator in less than a minute.