So I have an AudioStreamManager that currently instantiates up to X audio streams to handle concurrent sounds. It works fine, but it has no positional audio and no way to route sounds through different buses for area-based effects when a bullet passes through or an explosion happens in a specific area (such as inside a time dilation circle, water, etc.).
My game is a top-down Vampire Survivors clone. Unlike VS, I’m thinking it could be cool to have 2D positional audio plus the area-based effects described above. Plus I see the Doppler effect is built in (could be cool for a fast laser projectile flying away), but that requires an AudioStreamPlayer to be attached to the node flying away from/towards the player (I assume?).
So my questions are:
Do I need to attach a player to each projectile and enemy for positional audio? (I’m assuming yes).
When do I use a Polyphonic stream vs. when do I not? Is it lighter than spawning multiple players? (I’m assuming I only need this for nodes that can have simultaneous sounds, such as the Enemy node, b/c it can be hit by multiple projectiles at around the same time, so the generic EnemyHit sound needs to play multiple times.)
General advice on architecture for my use case. I’m guessing a singleton to store system-wide volume and provide a pool of players for non-positional audio, plus an AudioStreamPlayer2D for each projectile/enemy, where that player can be a normal stream or polyphonic depending on the possible number of simultaneous sounds. Is that right, or is that too heavy and there’s something simpler?
Without positional audio and area bus overrides it seems very easy; a single manager with a pool of players, like I have now, works fine. But I’d rather have better audio if it’s not that hard to implement.
Thank you! Tagging @mrcdk coz I think you may have good advice here, ty ser =)
Plus I see doppler effect is built-in (could be cool for a fast laser projectile flying away), but that requires AudioStreamPlayer to be attached to the node flying away/towards player (I assume?).
AFAIK the 2D audio system has no Doppler support. Doppler is only implemented in 3D.
Do I need to attach a player to each projectile and enemy for positional audio? (I’m assuming yes).
It depends. If it’s a fire-and-forget sound (an explosion, for example), you could keep using the AudioStreamManager: pass it the position and have it spawn an AudioStreamPlayer2D there.
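A minimal sketch of that fire-and-forget helper, assuming the AudioStreamManager is an autoload Node; the method and bus names here are illustrative, not an existing API:

```gdscript
# Hypothetical helper on the AudioStreamManager autoload (Godot 4 syntax).
func play_at_position(stream: AudioStream, pos: Vector2, bus := "SFX") -> void:
	var player := AudioStreamPlayer2D.new()
	player.stream = stream
	player.bus = bus  # assumes an "SFX" bus exists in the bus layout
	add_child(player)
	player.global_position = pos  # set after add_child so the global transform is valid
	player.play()
	# One-shot: free the player as soon as the sound ends.
	player.finished.connect(player.queue_free)
```

Since the player lives under the manager rather than the exploding object, it survives the object being freed and the sound plays out fully.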
If you need it to follow the object then, yeah, attaching it to the object itself is the simplest way.
If you still want to keep using the AudioStreamManager to centralize everything you could use a RemoteTransform2D node and attach it to the AudioStreamPlayer2D node you create in the AudioStreamManager scene.
When do I use a Polyphonic stream vs. when do I not? Is it lighter than spawning multiple players? (I’m assuming I only need this for nodes that can have simultaneous sounds, such as the Enemy node, b/c it can be hit by multiple projectiles at around the same time, so the generic EnemyHit sound needs to play multiple times.)
I generally use an AudioStreamPolyphonic when I have a library of sounds that can play simultaneously and I don’t feel like managing multiple AudioStreamPlayers and their lifetimes (or the sound lifetimes), i.e. just for fire-and-forget type sounds. I don’t know if it’s lighter or not, though.
The AudioStreamPlayer itself already has an AudioStreamPlayer.max_polyphony property, so for your example I’d probably just use an AudioStreamRandomizer with the multiple hit sounds and set max_polyphony to some number.
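For example, a sketch of one AudioStreamPlayer2D on the Enemy scene handling overlapping hit sounds this way (the node name and sound paths are placeholders):

```gdscript
# On the Enemy scene (Godot 4). $HitPlayer is an AudioStreamPlayer2D child.
@onready var hit_player: AudioStreamPlayer2D = $HitPlayer

func _ready() -> void:
	var randomizer := AudioStreamRandomizer.new()
	randomizer.add_stream(0, preload("res://sfx/enemy_hit_1.ogg"))
	randomizer.add_stream(1, preload("res://sfx/enemy_hit_2.ogg"))
	hit_player.stream = randomizer
	hit_player.max_polyphony = 4  # up to 4 overlapping hit sounds

func take_hit() -> void:
	# Each play() starts a new voice (a random hit sound) until
	# max_polyphony is reached, instead of restarting the current one.
	hit_player.play()
```

This avoids both spawning extra players and managing an AudioStreamPolyphonic by hand.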
General advise on architecture for my use case. I’m guessing a singleton to store system-wide volume + provide pool of players for non-positional audio, plus an audioplayer 2D for each projectile/enemy, and that player can be normal stream or polyphonic depending on possible # of sounds. Is that right, or is that too heavy and there’s something simpler?
Yeah, sounds about right. Just be mindful that when you free a node, all its children are freed with it, so any audio playing at that moment will stop. You should wait for the AudioStreamPlayer.finished signal before freeing the node.
(If you use an AudioStreamPolyphonic, the finished signal won’t fire, since the AudioStreamPlayer doesn’t know the inner streams have “finished”; you’ll need to poll with AudioStreamPlaybackPolyphonic.is_stream_playing() instead.)
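A sketch of both cases, assuming an `audio_player` child and a hypothetical `die()` method on the enemy:

```gdscript
# Plain stream: let the death sound finish before freeing (Godot 4).
func die() -> void:
	hide()  # illustrative: stop drawing while the sound plays out
	if audio_player.playing:
		await audio_player.finished
	queue_free()

# Polyphonic stream: finished never fires, so poll the playback instead.
func die_polyphonic(death_sound: AudioStream) -> void:
	hide()
	var playback: AudioStreamPlaybackPolyphonic = audio_player.get_stream_playback()
	var id := playback.play_stream(death_sound)
	while playback.is_stream_playing(id):
		await get_tree().process_frame
	queue_free()
```

In practice you’d also disable collisions/processing while waiting, so the “dead” enemy doesn’t keep interacting with the game.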
Cool! The docs say that RemoteTransform2D pushes its Transform2D onto another node, so wouldn’t I need to attach it to the projectile/enemy and tell it to use the AudioStreamPlayer2D as the remote node? I can test quickly, but thought maybe you know off the top of your head.
Yes, you attach the RemoteTransform2D to the enemy/bullet and set RemoteTransform2D.remote_path to the path of the AudioStreamPlayer2D relative to the RemoteTransform2D (you can get it with Node.get_path_to()).
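A sketch of that wiring from inside the AudioStreamManager; the method name is hypothetical:

```gdscript
# Hypothetical AudioStreamManager method (Godot 4): a pooled player that
# follows a moving target via RemoteTransform2D.
func attach_positional_player(target: Node2D, stream: AudioStream) -> void:
	var player := AudioStreamPlayer2D.new()
	add_child(player)  # player lives under the manager, not the target
	player.stream = stream

	var remote := RemoteTransform2D.new()
	target.add_child(remote)
	# Point the RemoteTransform2D at the player; it now pushes the
	# target's transform onto the player every frame.
	remote.remote_path = remote.get_path_to(player)

	player.play()
	# If the target dies mid-sound, the RemoteTransform2D is freed with it,
	# but the player stays in the manager and can finish playing.
	player.finished.connect(player.queue_free)
```

This keeps ownership centralized in the manager while still getting positional audio that tracks projectiles/enemies.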