How to map texels from baked light data to mesh vertices

Godot Version

Godot 4.2 .NET

Question

Hi all,

I’m experimenting with a niche use case for the lightmapper: using the baked light data to populate spawn positions for particles in the editor, not at runtime.

The idea is that if you can get the luminance of a texel, you can use a threshold to determine the likelihood of a particle spawning at that location.

Here’s where my problem comes in: I’m unable to match a texel to its position in world space. My current plan is to check every texel against every mesh triangle and then use the triangle’s position as a spawn point. However, I can’t figure out how to do this in C#.

I get that this would be terrible for performance, but since I’m pre-baking the data before runtime I’m not too worried about the hit.
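For what it’s worth, the thresholding half of the idea is simple once the baked lightmap is available as an Image. A minimal sketch, assuming the lightmap texture has already been read back with GetImage() (the weights are the standard Rec. 709 luminance coefficients):

    // Rough sketch: decide whether a texel should spawn a particle based on its
    // baked brightness. `lightmapImage` is assumed to be an Image read back from
    // the baked lightmap texture with GetImage().
    private static bool ShouldSpawnAt(Image lightmapImage, int x, int y, float threshold)
    {
        Color texel = lightmapImage.GetPixel(x, y);
        // Rec. 709 luminance approximation.
        float luminance = 0.2126f * texel.R + 0.7152f * texel.G + 0.0722f * texel.B;
        return luminance >= threshold;
    }

The hard part is still mapping that texel back to a world-space position.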

Any help would be greatly appreciated!

Thanks

Got a little bit further. You can get the indices, vertices and UV2s via this code:

    // Surface arrays: triangle indices, vertex positions, and UV2 (lightmap) coordinates.
    Godot.Collections.Array meshArray = mesh.SurfaceGetArrays(i);
    int[] indices = meshArray[(int)Mesh.ArrayType.Index].AsInt32Array();
    Vector3[] vertices = meshArray[(int)Mesh.ArrayType.Vertex].AsVector3Array();
    Vector2[] uvs = meshArray[(int)Mesh.ArrayType.TexUV2].AsVector2Array();
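From there, the brute-force texel-to-triangle mapping should boil down to a point-in-triangle test in UV2 space. A rough, untested sketch (the method name and the meshTransform parameter are placeholders):

    // Rough sketch: map a lightmap texel's UV2 coordinate back to a world-space
    // position by brute-forcing every triangle, as described above.
    // `meshTransform` would be the GlobalTransform of the MeshInstance3D.
    private static Vector3? TexelToWorld(Vector2 texelUv, int[] indices,
        Vector3[] vertices, Vector2[] uvs, Transform3D meshTransform)
    {
        for (int i = 0; i < indices.Length; i += 3)
        {
            Vector2 a = uvs[indices[i]];
            Vector2 b = uvs[indices[i + 1]];
            Vector2 c = uvs[indices[i + 2]];

            // Barycentric coordinates of the texel within the UV2 triangle.
            float denom = (b.Y - c.Y) * (a.X - c.X) + (c.X - b.X) * (a.Y - c.Y);
            if (Mathf.IsZeroApprox(denom))
                continue; // Degenerate UV triangle.

            float u = ((b.Y - c.Y) * (texelUv.X - c.X) + (c.X - b.X) * (texelUv.Y - c.Y)) / denom;
            float v = ((c.Y - a.Y) * (texelUv.X - c.X) + (a.X - c.X) * (texelUv.Y - c.Y)) / denom;
            float w = 1.0f - u - v;

            if (u < 0.0f || v < 0.0f || w < 0.0f)
                continue; // Texel lies outside this triangle.

            // The same weights interpolate the vertex positions, giving the
            // point on the surface that the texel covers.
            Vector3 local = u * vertices[indices[i]]
                + v * vertices[indices[i + 1]]
                + w * vertices[indices[i + 2]];
            return meshTransform * local;
        }

        return null; // Texel not covered by any triangle of this surface.
    }

The barycentric weights computed in UV2 space can be reused directly to interpolate the vertex positions, which is what gives the world-space spawn point.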


I would set up a tool that casts rays at the surfaces where you might want to place these particle effects, gets the object and the index of the face each ray hits, uses that to find the vertex indices of the face, and from those looks up the UV coordinates of the vertices, and so on.
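A rough sketch of one such ray from a Node3D tool script (`origin` is a placeholder for wherever the tool samples from; note that "face_index" in the result is only valid when the collider is a ConcavePolygonShape3D, e.g. a trimesh collision generated from the same mesh):

    var spaceState = GetWorld3D().DirectSpaceState;
    var query = PhysicsRayQueryParameters3D.Create(origin, origin + Vector3.Down * 100.0f);
    Godot.Collections.Dictionary result = spaceState.IntersectRay(query);
    if (result.Count > 0)
    {
        Vector3 hitPosition = result["position"].AsVector3();
        int faceIndex = result["face_index"].AsInt32(); // Index of the triangle that was hit.
        // faceIndex * 3 is the first of the three entries for that face in the mesh's
        // index array, from which the vertex indices and their UV2s can be looked up.
    }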

It might just be easier to do a render-to-texture approach where you capture the lighting debug view from above, in sections, of only the meshes or layers you want surface luminance from. You can then process the resulting capture against your brightness threshold and do the math to trace each pixel position back to world coordinates. Mapping the image X/Y back to world coordinates shouldn’t be too hard: if you use an orthographic camera pointing straight down, you just need to know its size, its position and the resolution of the image in pixels. Do that across your game in some tool process and it could work.
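The pixel-to-world math for that could look roughly like this (a sketch assuming a square capture and an orthographic Camera3D centred over the area; with the default Keep Height aspect, Size is the vertical extent of the view in world units):

    // Rough sketch of the pixel -> world mapping for a top-down orthographic capture.
    private static Vector3 PixelToWorld(int px, int py, int resolution,
        Vector3 cameraPosition, float cameraSize)
    {
        // Normalised 0..1 position of the pixel centre within the image.
        float nx = (px + 0.5f) / resolution;
        float ny = (py + 0.5f) / resolution;

        // The camera is centred on cameraPosition looking straight down, so the
        // image spans cameraSize world units in X and Z around it. Depending on how
        // the camera is rotated, one of these axes may need flipping.
        float worldX = cameraPosition.X + (nx - 0.5f) * cameraSize;
        float worldZ = cameraPosition.Z + (ny - 0.5f) * cameraSize;

        return new Vector3(worldX, cameraPosition.Y, worldZ);
    }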


Thanks for the reply! You’ve given me some stuff to think about.

I’d assume the viewport route would use get_editor_viewport_3d, but the limitation would be setting up the orthographic camera to fit within the right constraints. One way of handling that could be to get the AABB and dynamically set the camera position, the same way you’d figure out the raycasting area.
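Something along these lines might work for fitting the camera from the AABB (an untested sketch; `floorMeshes` and `camera` are placeholders for whatever the tool collects):

    // Fit a top-down orthographic camera to a set of floor meshes by merging
    // their world-space AABBs.
    Aabb bounds = new Aabb();
    bool first = true;
    foreach (MeshInstance3D mesh in floorMeshes)
    {
        // GetAabb() is in local space, so transform it into world space first.
        Aabb worldAabb = mesh.GlobalTransform * mesh.GetAabb();
        bounds = first ? worldAabb : bounds.Merge(worldAabb);
        first = false;
    }

    camera.Projection = Camera3D.ProjectionType.Orthogonal;
    camera.Size = Mathf.Max(bounds.Size.X, bounds.Size.Z); // Square viewport assumed.
    camera.GlobalPosition = bounds.GetCenter() + Vector3.Up * (bounds.Size.Y + 1.0f);
    camera.RotationDegrees = new Vector3(-90.0f, 0.0f, 0.0f); // Look straight down.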

I’ll give those methods a shot. Once again thanks for the help! :smiley:

Yeah, you could also make a scene for this that includes a SubViewport with its own Camera3D as a child, then a tool script that imports the scene, makes that SubViewport camera current, does the capture, does the math to place your particles, then resets the camera and removes the scene.
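The capture step itself could be as small as this (a sketch assuming it runs inside an async method of a tool script, with `subViewport` already containing the top-down orthographic camera):

    subViewport.RenderTargetUpdateMode = SubViewport.UpdateMode.Once;
    await ToSignal(RenderingServer.Singleton, RenderingServer.SignalName.FramePostDraw);
    Image capture = subViewport.GetTexture().GetImage();
    // `capture` can now be scanned pixel by pixel with the luminance threshold and
    // each bright pixel mapped back to a world position as discussed earlier.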

Unless your world/level is humongous, you can probably just do a single capture. Even at a lower resolution it would be pretty useful and quicker; I would just make sure to set the layers on the floor meshes so they are the only things included in the capture.

If it’s a huge world/level, you can process the whole thing in a grid: as you said, get the AABB of all the surfaces you want to capture, then move the scene (or camera), capture, process and store, then move on to the next grid cell, and so on.
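The grid variant would then just wrap the capture in a loop over cells of the merged AABB, reusing the placeholder names from the sketches above (`cellSize` being a hypothetical chunk size in world units):

    int cellsX = Mathf.CeilToInt(bounds.Size.X / cellSize);
    int cellsZ = Mathf.CeilToInt(bounds.Size.Z / cellSize);
    camera.Size = cellSize;

    for (int cx = 0; cx < cellsX; cx++)
    {
        for (int cz = 0; cz < cellsZ; cz++)
        {
            camera.GlobalPosition = bounds.Position + new Vector3(
                (cx + 0.5f) * cellSize, bounds.Size.Y + 1.0f, (cz + 0.5f) * cellSize);

            subViewport.RenderTargetUpdateMode = SubViewport.UpdateMode.Once;
            await ToSignal(RenderingServer.Singleton, RenderingServer.SignalName.FramePostDraw);
            Image cellCapture = subViewport.GetTexture().GetImage();
            // Process cellCapture for this cell, then move on to the next one.
        }
    }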


It worked a treat. I already have chunking set up, so taking it to the next level should be pretty easy.

This is a lot easier than trying to map texels!

