Asked By
Mr Henri
So, I’m creating a special shader to render some objects using raymarching!
I succeeded in rendering everything normally on a cube (MeshInstance);
however, the camera inside the shader used a different transform than the one in the actual scene.
To fully render, say, a sphere inside the cube, I need the camera position (which I have) and the fragment's position in the world: just as SCREEN_UV gives a vec2 for the fragment's position on screen, I want a vec3 for the fragment's position in world space.
Also, if possible, I'd like the fragment's position relative to the mesh itself (e.g., the way UV is relative to the node and SCREEN_UV is relative to the screen).
You can pass the (local) VERTEX from the vertex shader to the fragment shader using a varying variable. Such variables are interpolated across the face, so each fragment receives its own value. There is probably also a way to calculate the local VERTEX inside the fragment shader, but I don't see a convenient built-in for it (e.g. an INV_MODELVIEW_MATRIX).
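Here is a minimal sketch of the varying approach (assuming Godot 3.x spatial shader built-ins like WORLD_MATRIX; the variable names are mine):

shader_type spatial;

// interpolated per-fragment positions
varying vec3 local_position;  // object space
varying vec3 world_position;  // world space

void vertex() {
    local_position = VERTEX; // in vertex(), VERTEX is in object space
    world_position = (WORLD_MATRIX * vec4(VERTEX, 1.0)).xyz;
}

void fragment() {
    // local_position / world_position now hold this pixel's position,
    // ready to be used as the raymarching entry point
}

Alternatively, you can reconstruct the position in the fragment shader from the built-in matrices: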
// in fragment(), VERTEX is the pixel's position in view (camera) space
// from view space to world space
vec3 VERTEX_WORLD = (CAMERA_MATRIX * vec4(VERTEX, 1.0)).xyz;
// from world space to object space
vec3 VERTEX_LOCAL = (inverse(WORLD_MATRIX) * vec4(VERTEX_WORLD, 1.0)).xyz;
edit:
Combine them for efficiency:
// pixel's position in object space, in one step
// (VERTEX is read-only in the fragment shader, so store the result in a new variable)
vec3 vertex_local = (inverse(WORLD_MATRIX) * CAMERA_MATRIX * vec4(VERTEX, 1.0)).xyz;
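Putting it together, here is a sketch of a complete fragment function (again assuming Godot 3.x built-ins; the debug ALBEDO line is just my placeholder for an actual raymarching loop):

shader_type spatial;

void fragment() {
    // combined view-space -> object-space transform
    mat4 view_to_object = inverse(WORLD_MATRIX) * CAMERA_MATRIX;
    // pixel's position in object space
    vec3 local_position = (view_to_object * vec4(VERTEX, 1.0)).xyz;
    // camera position in object space (the camera sits at the view-space origin)
    vec3 local_camera = (view_to_object * vec4(0.0, 0.0, 0.0, 1.0)).xyz;
    // ray from the camera through this fragment, ready for raymarching
    vec3 ray_dir = normalize(local_position - local_camera);
    // debug visualization of the ray direction; replace with your raymarch
    ALBEDO = ray_dir * 0.5 + 0.5;
}

Note that inverse() is evaluated per fragment, which is expensive; if that matters, you can compute the inverse world transform on the CPU (e.g. with global_transform.affine_inverse() in GDScript) and pass it in as a uniform.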