No, multiplying by delta reduces the framerate dependency, but as you say the motion is still not intuitively predictable. However, there are cases where lerping may be preferred feel-wise, especially for directly animating positions. Not saying this is one of those cases, though.
Just for fun, here’s a benchmark:
func _ready():
	var fps = 30
	while fps < 1000:
		var output = "%3d:\t" % fps
		for speed in range(1, 100, 10):
			var t = 0.0
			var dt = 1.0 / fps
			var foo = 1.0
			while not is_zero_approx(foo):
				foo = clamp(lerp(foo, 0.0, speed * dt), 0.0, 1.0)
				t += dt
			output += "\t%f" % t
		print(output)
		fps *= 2
And here’s the output, which measures the convergence time from 1.0 to 0.0. Each column is a different convergence rate, in increasing order (speed = 1, 11, 21, …, 91 from range(1, 100, 10)):
30: 11.333333 0.866667 0.333333 0.033333 0.033333 0.033333 0.033333 0.033333 0.033333 0.033333
60: 11.433333 0.950000 0.450000 0.266667 0.183333 0.116667 0.016667 0.016667 0.016667 0.016667
120: 11.466667 1.000000 0.500000 0.325000 0.233333 0.175000 0.141667 0.108333 0.091667 0.075000
240: 11.491667 1.025000 0.525000 0.350000 0.258333 0.204167 0.166667 0.137500 0.116667 0.104167
480: 11.502083 1.035417 0.537500 0.360417 0.268750 0.214583 0.177083 0.150000 0.131250 0.114583
960: 11.507292 1.040625 0.542708 0.365625 0.275000 0.219792 0.183333 0.156250 0.136458 0.120833
It’s noticeable that at slower convergence rates the difference in total time is almost negligible across a wide fps range. As the convergence rate grows, the difference becomes more significant relative to the total time.
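The reason the totals barely move at slow rates: each frame multiplies the remaining distance by (1 - speed * dt), so after time t you have (1 - speed * dt)^(t / dt), which tends to exp(-speed * t) as dt shrinks. The total time therefore approaches -ln(eps) / speed regardless of fps. Here’s a quick Python re-check of that math (Python just so it’s easy to run outside Godot; eps = 1e-5 is an assumption mirroring Godot’s CMP_EPSILON used by is_zero_approx):

```python
import math

# Each frame does foo = foo * (1 - speed * dt), so after n frames
# foo = (1 - speed * dt) ** n, which tends to exp(-speed * t) as dt -> 0.

def converge_time(speed, fps, eps=1e-5):
    # eps mirrors Godot's CMP_EPSILON used by is_zero_approx (assumption)
    dt = 1.0 / fps
    foo, t = 1.0, 0.0
    while foo >= eps:
        # same effect as clamp(lerp(foo, 0.0, speed * dt), 0.0, 1.0)
        foo = min(max(foo * (1.0 - speed * dt), 0.0), 1.0)
        t += dt
    return t

print(converge_time(1, 30))   # matches the 30 fps, speed = 1 cell: 11.333333
print(converge_time(1, 960))  # matches the 960 fps, speed = 1 cell: 11.507292
print(-math.log(1e-5) / 1)    # the dt -> 0 limit, ~11.51
```

This also explains the table: -ln(1e-5) ≈ 11.51 seconds is exactly where the speed = 1 column is converging.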
It’s also worth mentioning that lerp used like this can overshoot if you’re not careful and let the weight go beyond 1 (i.e. when speed * dt > 1). So clamping the weight or the result is a wise thing to do.
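To make the overshoot concrete, here’s the arithmetic in plain Python with lerp written out by hand: at 30 fps the benchmark’s fastest rate gives a weight of 91 / 30 ≈ 3, so a single unclamped step flies past the target and flips sign.

```python
def lerp(a, b, w):
    # same formula as Godot's lerp: a + (b - a) * w
    return a + (b - a) * w

speed, fps = 91, 30
dt = 1.0 / fps  # weight = speed * dt ~= 3.03, well past 1

unclamped = lerp(1.0, 0.0, speed * dt)
print(unclamped)  # about -2.03: overshot 0 and flipped sign

# Clamping either the weight or the result keeps it in range:
clamped = lerp(1.0, 0.0, min(speed * dt, 1.0))
print(clamped)    # 0.0: lands exactly on the target
```

With a weight between 1 and 2 it still converges while oscillating around the target; beyond 2 it diverges, which is why the clamp in the benchmark matters at the high-speed, low-fps corner.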