Is it okay to normalize a Vector multiple times?

In my project, I need to pass around lots of Vectors (both Vector2 and Vector3). Sometimes, I would like to guarantee that a vector is normalized, even though it might have already been normalized at some point before.

My question is: should I worry about the performance impact of normalizing a vector again? Is it worth checking whether the vector is normalized before normalizing it? Or does the engine already handle that for me (checking before normalizing)?

Unless you’re doing it hundreds of times per frame, I don’t think it will have any noticeable impact on performance.

If the vector needs to be normalized, the engine method will either normalize it or raise an error, so you can easily tell which case you’re in.

If you need to check whether it’s normalized or not, just use the is_normalized() method.
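For example, a minimal sketch of that check:

```gdscript
var v := Vector2(3.0, 4.0)
if not v.is_normalized():
    v = v.normalized()
# v is now (0.6, 0.8), a unit vector.
```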


Normalizing isn’t terrible, but it’s slightly expensive because of the square root. The way you typically normalize a vector is get the length, then divide by it:

# I just handrolled this, use the builtin, not this.

func vec_normal(v: Vector2) -> Vector2:
    var length: float = sqrt((v.x * v.x) + (v.y * v.y))
    if is_zero_approx(length): # BAD! A zero-length vector has no direction.
        return Vector2.UP # Return some default normalized vector.
    return v / length
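Assuming the vec_normal function above is in scope, it should agree with the engine builtin:

```gdscript
var v := Vector2(3.0, 4.0)
print(vec_normal(v))   # (0.6, 0.8)
print(v.normalized())  # (0.6, 0.8) - the builtin gives the same result.
```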

So, in order of cost, we’ve got a square root, a divide, two multiplies and an add, along with a branch, plus function setup and teardown. Not terrible, but that square root in particular can start to add up if you’re doing a lot of these, as can the divide.

I haven’t looked at the Godot version of is_normalized(), but in theory it could be cheaper than doing a full normalize. You need to check that the length is 1.0, but you can do that using the squared length of the vector since 1.0^2 == 1.0. You’d need a fudge because of float roundoff, but you could do something like:

# Again, handrolled, use the real builtins, not this...

func vec_is_normal(v: Vector2) -> bool:
    var len_squared: float = (v.x * v.x) + (v.y * v.y)
    # Might not have enough tolerance...
    return is_equal_approx(len_squared, 1.0)
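A quick sanity check of that sketch against the engine’s own method (assuming vec_is_normal above is in scope):

```gdscript
var unit := Vector2(1.0, 0.0)
var not_unit := Vector2(3.0, 4.0)
print(vec_is_normal(unit))      # true
print(unit.is_normalized())     # true
print(vec_is_normal(not_unit))  # false
```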

If Godot does that, then checking whether a vector is normalized would be significantly cheaper than normalizing it, since it lacks the square root and the divide. It’s still more expensive than not needing to check, though, so if you can set your code up so you know whether things are normalized or not at any given stage without checking, you’ll get rid of some performance drags.
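One way to set that up (a sketch; the function names here are made up for illustration) is to normalize once at the point where a direction is produced, and have everything downstream assume unit vectors by convention:

```gdscript
# Hypothetical: normalize once, where the direction is produced...
func get_aim_direction(from: Vector2, to: Vector2) -> Vector2:
    return (to - from).normalized()

# ...and downstream code trusts the convention instead of re-checking.
func apply_knockback(velocity: Vector2, dir: Vector2, strength: float) -> Vector2:
    # No normalize or is_normalized() here: dir is unit-length by convention.
    return velocity + dir * strength
```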

This is where the Haskell and Rust people will lean in and say that if only we could use the type system to differentiate between normalized and unspecified vectors, we could have the compiler enforce this for us…
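GDScript’s type system can’t enforce that, but you can approximate the idea with a thin wrapper that is normalized by construction (a sketch; the class name and design are made up):

```gdscript
# Sketch: a vector that is unit-length by construction, so holders
# of a UnitVector2 never need to check or re-normalize.
class_name UnitVector2
extends RefCounted

var v: Vector2

func _init(raw: Vector2) -> void:
    # Zero-length input falls back to a default direction.
    v = raw.normalized() if raw.length_squared() > 0.0 else Vector2.UP
```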

That said, you can spend a lot of time optimizing things that turn out not to matter. I’d suggest profiling and testing before doing anything drastic. It’s easy to spend a week optimizing something that winds up speeding the game up by a fraction of a frame per second that the players won’t even see.