Confused about UV coordinate transforms in 3D

Godot Version



Hi! I’m new to shaders, and I’ve been trying to write a spatial shader that will draw an outline for Sprite3Ds (both normal and billboard). I’ve managed to make most of it work, but then I decided to add margins so that the outline wouldn’t get cut off if the texture doesn’t have space around it. A guide for 2D suggests that I should enlarge the texture (in “vertex”) and then shrink it again (in “fragment”). However, in 3D some matrix transforms clearly happen in between, so everything gets skewed. I’ve read the docs and found this handy cheat sheet for matrix transforms, but it didn’t quite clear my confusion. I guess my main questions are:

  1. How can I find out which values are dealt with in which space? So far it seems to me that “vertex” gets both VERTEX and UV in model space, but then “fragment” deals with both VERTEX and UV in view space (or is it screen space)? How does POSITION help if (according to the docs) the data still gets transferred to “fragment” in the same way?
  2. Which transforms does the engine perform on its own? Is it MODELVIEW between “vertex” and “fragment” and then PROJECTION after “fragment”? How can I control that better? “Skip vertex transform” seems to be a way to override that, but it doesn’t work as I’d expect on its own; should I use it and then somehow apply MODELVIEW in “fragment” after upscaling? (and then ummm inverse projection for billboard??) What about “world_vertex_coords”: does that flag affect all stages, or only “vertex”?
  3. Where do I get missing arguments for the matrix conversions? “Get view from UV” seems to require depth; is that VERTEX.z in view space?
  4. Am I even approaching this right, or is downscaling/upscaling something that only works properly in 2D? Am I missing some crucial step like normals?
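
To make the question concrete, here is a minimal sketch of the enlarge-then-remap idea in a Godot spatial shader (assuming Godot 4 syntax; the `margin` uniform and texture names are illustrative, not the actual pastebin code). The quad is enlarged in model space in `vertex()`, and the UV — which never gets matrix-transformed — is remapped in `fragment()` so the original texture lands in the centre:

```glsl
shader_type spatial;
render_mode unshaded;

uniform sampler2D sprite_texture : source_color;
uniform float margin = 0.1; // extra band added around each side, as a fraction of the quad

void vertex() {
    // Enlarge the quad in model space so the outline has room to draw.
    VERTEX.xy *= 1.0 + 2.0 * margin;
}

void fragment() {
    // UV arrives interpolated from vertex() but is still plain 0..1
    // texture space — no matrix was applied to it in between.
    // Undo the enlargement: remap so the original texture occupies
    // the centre and the margin band falls outside 0..1.
    vec2 scaled_uv = (UV - 0.5) * (1.0 + 2.0 * margin) + 0.5;
    vec4 tex = texture(sprite_texture, clamp(scaled_uv, 0.0, 1.0));
    // Outside the original 0..1 range there is no texel data.
    if (any(lessThan(scaled_uv, vec2(0.0))) || any(greaterThan(scaled_uv, vec2(1.0)))) {
        tex.a = 0.0;
    }
    ALBEDO = tex.rgb;
    ALPHA = tex.a;
}
```

The key point the sketch illustrates: only `VERTEX` goes through MODELVIEW/PROJECTION between the stages; `UV` is just interpolated as-is, so the upscale/downscale bookkeeping stays entirely in texture space.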

Sorry if these questions are both basic and numerous; I tried asking on Reddit a few times but got no reaction. I’d just like to try and really understand what I’m doing there, because this is probably not the last shader I’ll have to make for similar conditions. I’d appreciate if you looked at the pastebin though; the code currently works if you set “margin” to “false”. Thanks in advance!

I think you are confusing two similar but separate concepts.
Both the screen and each polygon in a scene have UVs, because the screen is, at the end of the day, just two triangles. You may have been reading about screen/viewport coordinates and its UVs as if the viewport were a polygon — which it is, but those are not quite the same thing as the UVs of a mesh in the scene. You are drawing TO the viewport, after all.
The way a fragment shader works is that it scans every pixel (strictly, every fragment) on the display and, for each one, fetches the appropriate color by applying the transforms of the meshes and the scene. You can think of it as shooting a ray from the camera plane into the scene: the first thing the ray hits is what gets rendered (this is a simplification).
When it hits a face, it finds WHERE on that face it hit — that's the UV. This coordinate is then used to look up the texel (texture pixel) that corresponds to that UV in the mapped texture. That texel normally provides the albedo color, which can of course then be modified by lights, reflections, and so on.
The point is that the UVs of a face in the scene map into a texture. The mesh itself gets transformed by the model and view matrices. The final fragment on the viewport can then be modified further by treating the viewport like a face itself. Usually you're not going to touch the vertex shader for normal rendering; that's only needed if you want to add or modify per-vertex data (data about a vertex, like color) while rendering.
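
On the original question about `skip_vertex_transform`: with that render mode the engine no longer applies the model-view transform for you, so you apply it yourself in `vertex()`. A minimal sketch reproducing the default behaviour (following the pattern in the Godot docs) — note that only positions and normals need the matrix, while UV is left alone because it is a texture coordinate, not a position:

```glsl
shader_type spatial;
render_mode skip_vertex_transform;

void vertex() {
    // The engine skipped the built-in transform, so do it manually.
    // This is equivalent to the default pipeline; any custom scaling
    // (e.g. for an outline margin) would go before these lines.
    VERTEX = (MODELVIEW_MATRIX * vec4(VERTEX, 1.0)).xyz;
    NORMAL = normalize((MODELVIEW_MATRIX * vec4(NORMAL, 0.0)).xyz);
    // UV is untouched: it is interpolated to fragment() as-is,
    // still in 0..1 texture space.
}
```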

I hope this helps clear up a bit of the confusion. It’s not easy to visualize everything that goes on in a shader, but it gets easier if you understand each independent step as a single thing. And remember it all happens once per fragment — a fragment can map to more or fewer pixels depending on the device, but on regular screens a fragment is usually one pixel.

Thank you for your response! I wasn’t confusing SCREEN_UV with UV, but your explanation helped me re-evaluate my assumptions once again. It turns out the one notion that confused me came from the docs, which state that “fragment” receives vertex data “in view space” and then receives the UV “from the vertex function”. This led me to believe that the UV in “fragment” somehow gets converted into view space, too. Now that I’ve discarded that assumption, I was able to find the real mistake in my code, which was far more mundane (applying the scale coefficient twice). The shader is still quirky, but I think I’ll be able to tweak it now that I don’t have to manually deal with transforms.

For future search results, the offending line was `vec2 shifted_uv = uv - line_size;` (it should have been `line_thickness`). Thanks again!
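
For anyone landing here from search, the fix in context — reconstructed from the description above, since the surrounding pastebin code isn't shown; presumably `line_size` was `line_thickness` with the scale coefficient already baked in, hence the doubled scaling:

```glsl
// Buggy: subtracts the already-scaled size, applying the
// scale coefficient a second time.
// vec2 shifted_uv = uv - line_size;

// Fixed: subtract the unscaled thickness instead.
vec2 shifted_uv = uv - line_thickness;
```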

