What is the most performant way to draw thousands of triangles in 2D
I’m trying to draw a hexagon-tiled map. The tiles are 512×591 px, but this is negotiable. I started off by using tile map layers in hexagonal mode. I quickly found out that it’s slow as the number of tiles increases, and you need far too many hexagon tile textures to deal with all the boundaries between different biomes.
I moved on to using Polygon2D to draw each triangle of the hexagon, to allow barycentric interpolation across biome boundaries. This failed because Polygon2D doesn’t support set_instance_shader_parameter for per-instance shader uniforms.
I finally moved to using an ArrayMesh which has worked wonderfully.
The key implementation detail is that I made each triangle its own unique mesh.
However, this also performs poorly: drawing a 50x50 hex grid tanks the frame rate, and I had to increase the shader buffer size.
Is there a better way of drawing all these triangles, each with their own shader?
Is the issue the size of the hexagon at 512x591 pixels?
Is the issue the size of the hexagon plus the zoom? Is there a way of reducing rendering complexity based on zoom?
Is there a better approach towards shaders, with a similar result?
I’m looking at these issues with an eye towards making a grid of ~500x200 tiles, i.e. some 600k mesh triangles versus the ~15k of the current 50x50 grid.
I am not experienced in rendering so I won’t be able to provide meaningful help
But in case it’s helpful, some things off the top of my head:
1. Is there a better way of drawing all these triangles, each with their own shader?
Giving each triangle its own shader will probably break any possibility of draw-call batching; you might prefer using a single shader for all the triangles. You could even consider having one big-a** mesh for everything, but I’m not sure.
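As a rough sketch of the single-mesh idea (assuming Godot 4, an ArrayMesh shown through a MeshInstance2D; the hex geometry and names here are illustrative, not the poster’s actual code):

```gdscript
# Sketch: merge every triangle of every hex into ONE ArrayMesh surface,
# so the whole map can be drawn in a single draw call with one shader.
func build_map_mesh(hex_centers: Array[Vector2], radius: float) -> ArrayMesh:
	var verts := PackedVector2Array()
	for center in hex_centers:
		for i in 6:
			var a0 := TAU * i / 6.0
			var a1 := TAU * (i + 1) / 6.0
			# One triangle per hex wedge: center + two rim vertices.
			verts.append(center)
			verts.append(center + Vector2(cos(a0), sin(a0)) * radius)
			verts.append(center + Vector2(cos(a1), sin(a1)) * radius)
	var arrays := []
	arrays.resize(Mesh.ARRAY_MAX)
	arrays[Mesh.ARRAY_VERTEX] = verts
	var mesh := ArrayMesh.new()
	mesh.add_surface_from_arrays(Mesh.PRIMITIVE_TRIANGLES, arrays)
	return mesh
```

With everything in one surface you lose per-triangle uniforms, which is exactly the trade-off discussed further down the thread (per-triangle data then has to live in the vertex data instead).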
2. Is the issue the size of the hexagon at 512x591 pixels?
I don’t see why it would be an issue
3. Is the issue the size of the hexagon plus the zoom? Is there a way of reducing rendering complexity based on zoom?
Depending on your setup, you might want to cull the triangles that are not shown on-screen. Also, maybe you could have a simpler version of the map past a certain zoom-out threshold.
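One way to sketch that culling (hedged: this assumes you split the grid into chunk meshes yourself; `chunks` and the 10x10 grouping are illustrative, not a Godot built-in):

```gdscript
# Sketch: split the grid into chunks (e.g. 10x10 hexes per MeshInstance2D)
# and only show the chunks whose bounds intersect the camera's visible rect.
# `chunks` maps each chunk's world-space Rect2 to its MeshInstance2D.
func update_chunk_visibility(chunks: Dictionary, camera: Camera2D) -> void:
	var view_size := get_viewport_rect().size / camera.zoom
	var visible_rect := Rect2(
		camera.get_screen_center_position() - view_size / 2.0, view_size)
	for rect in chunks:
		chunks[rect].visible = visible_rect.intersects(rect)
```

Chunking also bounds the cost of rebuilding geometry when one tile changes: you re-upload one chunk’s surface, not the whole 600k-triangle map.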
4. Is there a better approach towards shaders, with a similar result?
I would really investigate using a single shader (or even mesh) for everything.
If Godot’s profilers are not enough to catch the bottleneck, you could try RenderDoc to diagnose the issue. If you have bazillions of draw calls, you have your answer.
A question on using a single mesh - my understanding is that the shader, and the parameters I give it via set_instance_shader_parameter, are unique per mesh instance? I.e., I wouldn’t be able to give unique info per triangle this way?
I have a feeling my understanding of this isn’t entirely right.
Are you procedurally generating these maps every time you play? Do they change? Is there a reason the hexgrid has to contain the visual data?
Based on what I’m seeing in your screenshot I would do one of two things:
1. Create a single image for the map, draw the hexes on top, and use mipmaps to handle different zoom levels.
2. Procedurally generate the map while loading the game (with a suitable loading window telling the player you’re reticulating splines), save it out as a single image file, then do number 1.
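A hedged sketch of the bake-once idea, assuming Godot 4 and a SubViewport (the size and path are illustrative):

```gdscript
# Sketch: render the generated map once into a SubViewport, then save it
# as a single image that later runs can load (with mipmaps) as one texture.
func bake_map_to_png(map_root: Node2D) -> void:
	var vp := SubViewport.new()
	vp.size = Vector2i(8192, 4096)
	vp.render_target_update_mode = SubViewport.UPDATE_ONCE
	add_child(vp)
	vp.add_child(map_root)
	# Wait one frame so the viewport actually renders before we read it back.
	await RenderingServer.frame_post_draw
	var img := vp.get_texture().get_image()
	img.save_png("user://baked_map.png")
	vp.queue_free()
```

After that, the runtime map is a single textured quad instead of hundreds of thousands of triangles.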
I think you are right: it would be per geometry instance, so per your whole mesh in this case.
You should probably not use a shader parameter here, but store additional data in the mesh itself: vertex colors or a custom data channel.
Unfortunately, I know some of the theory, but I have no idea how to do that. I have a vague idea of how to add those data to the ArrayMesh, but I don’t know at all how to retrieve those from the vertex/fragment shaders.
Some tips for using ArrayMesh (for reading that data back in the shaders, you’re on your own, sorry):
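To make the vertex-color route concrete (a minimal sketch, assuming Godot 4; the biome encoding and helper name are made up for illustration):

```gdscript
# Sketch: pack per-triangle data into vertex colors on an ArrayMesh.
# Appends one triangle's vertices and colors to the arrays being built.
func append_triangle_with_data(p0: Vector2, p1: Vector2, p2: Vector2,
		biome_id: float, verts: PackedVector2Array,
		colors: PackedColorArray) -> void:
	for p in [p0, p1, p2]:
		verts.append(p)
		# Same color on all 3 vertices = a flat per-triangle value;
		# different colors per vertex = barycentric interpolation across it.
		colors.append(Color(biome_id, 0.0, 0.0, 1.0))

# When assembling the mesh:
#   arrays[Mesh.ARRAY_VERTEX] = verts
#   arrays[Mesh.ARRAY_COLOR] = colors
#   mesh.add_surface_from_arrays(Mesh.PRIMITIVE_TRIANGLES, arrays)
#
# In a canvas_item shader the data then arrives interpolated in COLOR:
#   shader_type canvas_item;
#   void fragment() {
#       float biome_id = COLOR.r; // pick a biome texture/tint from this
#   }
```

This sidesteps set_instance_shader_parameter entirely: every triangle carries its own data inside the one shared mesh and shader.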
Otherwise, @dragonforge-dev’s idea of generating the map once could also be valid. It would mostly spare you the runtime rendering hurdle.
My initial thought is shaders. But I’m thinking probably one big shader that covers the screen and animates based on the colors of what’s beneath it, if the map is procedurally generated. I’ve personally never done a shader that complex, but it should be possible. It’s also possible to drop a few sprites on top strategically that animate, either through multiple frames or via a shader.
You can see some examples of what’s possible using shaders for animation here: Godot 2D VFX Visual Shaders by Dragonforge Development. Take the trees, for example.
You can load them all up as a single atlas image, have different instances, each with its own tree, and one shader that runs on all of them. You can even randomize the shader parameters so each one runs at a different speed, amount of sway, etc. But still only one shader. And as @midiphony said, you’d want to only run the shaders if they’re visible. Though Godot may already take care of that, I don’t know TBH.