I’m working on a shader that requires access to triangle data. The shader does a computation on every edge in a scene, and compares normals of triangles sharing the edge (if the edge has more than 1 triangle associated with it).
I was reading about Compute Shaders and the Compositor API, and that combination seems to be the way to go, because this data doesn’t seem to be readable in a vertex or fragment shader, and Godot doesn’t support geometry shaders. I want to run this shader inside the graphics pipeline, because I want to be sure it is applied every frame, and if I understand correctly, the standalone compute shader API runs independently of the graphics pipeline.
Is it possible to access the vertex buffer (or the triangle data as I mentioned above) from a Compositor shader?
If yes, which rendering pass is the best for this kind of computation (I’m referring to EffectCallbackType)?
Is it possible to access the vertex buffer (or the triangle data as I mentioned above) from a Compositor shader?
What vertex buffer? There is no global vertex buffer with all the vertices of the scene. These buffers are split per draw call. Every instance (when not batched) has a separate draw call.
You can access the individual vertex buffers of all scene nodes and merge them into a single giant buffer.
And yes, you can provide those buffers to a compute shader via a uniform.
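Roughly, in GDScript, that could look like this (a minimal sketch for Godot 4.x on Forward+; the single-surface assumption, the vec4/std430 padding and the set/binding numbers are placeholders you’d adapt to your compute shader):

```gdscript
extends Node

# Sketch: collect the vertex arrays of some MeshInstance3D nodes, merge them into
# one world-space position buffer, and expose it to a compute shader as a storage buffer.
var rd := RenderingServer.get_rendering_device()

func build_merged_vertex_buffer(mesh_instances: Array[MeshInstance3D]) -> RID:
	var merged := PackedFloat32Array()
	for mi in mesh_instances:
		var arrays := mi.mesh.surface_get_arrays(0)  # surface 0 only, for brevity
		var verts: PackedVector3Array = arrays[Mesh.ARRAY_VERTEX]
		for v in verts:
			var wv: Vector3 = mi.global_transform * v  # bake the node transform in
			merged.append_array(PackedFloat32Array([wv.x, wv.y, wv.z, 0.0]))  # pad to vec4
	var bytes := merged.to_byte_array()
	return rd.storage_buffer_create(bytes.size(), bytes)

func bind_vertex_buffer(compute_shader: RID, buffer: RID) -> RID:
	var u := RDUniform.new()
	u.uniform_type = RenderingDevice.UNIFORM_TYPE_STORAGE_BUFFER
	u.binding = 0  # must match "binding = 0" in the GLSL
	u.add_id(buffer)
	return rd.uniform_set_create([u], compute_shader, 0)  # set 0
```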
If yes, which rendering pass is the best for this kind of computation (I’m referring to EffectCallbackType)?
For this you have to describe your algorithm in more depth. “Compare every edge” is very vague. Is it only in screen space, per pixel? Or is it over the whole world? What must be computed with what?
I think for the shader to work correctly I’ll need access to the triangles of every mesh that is visible to the camera.
Description of the algorithm and my thought process behind it:
a mesh is made out of triangles
every triangle of the mesh either faces the camera and is visible, or it doesn’t face the camera and isn’t visible (back-face culling); there are no “partially facing the camera” triangles (at least from the facing-the-camera perspective; I know about overlapping triangles and the Z-buffer, but I’m sure that if I solve the rest, I’ll also be able to solve the Z-buffer issues)
therefore, to render the silhouette of an object, we can go through all triangles of the mesh, for each triangle go through every edge of that triangle, find the triangles adjacent to that edge, calculate the normal vectors of the adjacent triangles, and compare them to the camera’s view vector to see whether each one is rendered or not (I’m assuming an edge can have a maximum of 2 adjacent triangles, which should be true for most game objects I can imagine)
if
a) the edge has one adjacent triangle rendered and the other not rendered, it’s a silhouette edge
b) the edge has only one adjacent triangle and that triangle is rendered, it’s a silhouette edge
store the silhouette edges in a buffer, and use that buffer to draw the silhouettes of all rendered objects (a sketch of this edge test follows below)
I don’t have a clear idea yet of how to draw this, but once I have vertices to highlight, I can
a) project them onto some plane, and then draw lines on that plane,
b) use some kind of canvas overlay or
c) use a fragment shader to detect whether the fragment is on a highlighted edge and paint it with the highlight color
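To pin down the adjacency logic before worrying about the GPU side, here’s a CPU-only sketch of the edge test using MeshDataTool (Godot 4.x); the compute-shader version would run the same per-edge test, and names like `camera_pos` are placeholders:

```gdscript
extends Node

# Returns the world-space silhouette edges of one surface, using the two cases above:
# a) the two adjacent triangles disagree on facing the camera, or
# b) the edge has only one adjacent triangle and that triangle faces the camera.
func find_silhouette_edges(mesh: ArrayMesh, xform: Transform3D, camera_pos: Vector3) -> Array[PackedVector3Array]:
	var mdt := MeshDataTool.new()
	mdt.create_from_surface(mesh, 0)
	var edges: Array[PackedVector3Array] = []
	for e in mdt.get_edge_count():
		var faces := mdt.get_edge_faces(e)  # triangles sharing this edge (1 or 2)
		var facing := []
		for f in faces:
			var n: Vector3 = xform.basis * mdt.get_face_normal(f)
			var p: Vector3 = xform * mdt.get_vertex(mdt.get_face_vertex(f, 0))
			facing.append(n.dot(camera_pos - p) > 0.0)  # does this triangle face the camera?
		var is_silhouette := false
		if faces.size() == 2 and facing[0] != facing[1]:
			is_silhouette = true   # case a)
		elif faces.size() == 1 and facing[0]:
			is_silhouette = true   # case b)
		if is_silhouette:
			var a: Vector3 = xform * mdt.get_vertex(mdt.get_edge_vertex(e, 0))
			var b: Vector3 = xform * mdt.get_vertex(mdt.get_edge_vertex(e, 1))
			edges.append(PackedVector3Array([a, b]))
	return edges
```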
Hi.
If your only goal is to draw a silhouette, your approach seems to be massive overkill. Maybe I don’t get what your effect should look like, but there are other ways to draw outlines.
However, my goal is to do more than just silhouettes. I also want to highlight certain creases and contours of all the objects in the scene, depending on the angle between adjacent triangles. I’ve already tried multiple methods for highlighting outlines. Post-processing based methods seem to work better than others, but I didn’t manage to tune them to the point where I’d be satisfied with the result (I’ve tried various combinations of depth and normal textures as the source of discontinuity). I can imagine a post-processing method with a custom discontinuity source, where I’d create a special unique texture for each object in the scene and detect discontinuities in it, but it seems easier to me to write the shader I mentioned above, as there may be quite a lot of work in making an additional discontinuity texture for all my game objects.
I’ve tried all the methods mentioned in 5 ways to draw an outline and Unity Outline Shader Tutorial - Roystan, but I wasn’t able to tune the combination of depth and normal textures to highlight small details in objects far away from the camera and at the same time highlight the objects near the camera consistently.
Sounds a bit like a cavity map, which is static for a single object and therefore should be prerendered.
Wouldn’t it be more efficient to write a cavity map renderer to automate this process than to calculate it 60 times a second?
By discontinuity I mean what is picked up by the Sobel operator when you run it on the 2D signal: boundaries in the normal or depth textures.
I mean to create an additional texture for each game object, where I’d separate different regions of the object by color, and then use an edge-detection algorithm to detect the borders between these regions. Also, each object would have a unique set of colors, so I’d get the outline as well.
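Something like the following is what I have in mind for the detection step (a rough sketch; `id_tex` is a placeholder for however the colour-ID render ends up being bound, and the thresholds are arbitrary):

```gdscript
extends Node

# Rough sketch of the colour-ID idea: a canvas_item post-process that samples a texture
# where every region/object has its own flat colour and marks pixels whose direct
# neighbours have a different colour (i.e. region borders).
var edge_shader := Shader.new()

func _ready() -> void:
	edge_shader.code = """
shader_type canvas_item;

uniform sampler2D id_tex;  // the colour-ID render, one flat colour per region
uniform vec4 line_color : source_color = vec4(0.0, 0.0, 0.0, 1.0);

void fragment() {
	vec2 px = 1.0 / vec2(textureSize(id_tex, 0));
	vec3 c = texture(id_tex, UV).rgb;
	float edge = 0.0;
	// any colour change against the 4 direct neighbours marks a border pixel
	edge += step(0.001, distance(c, texture(id_tex, UV + vec2(px.x, 0.0)).rgb));
	edge += step(0.001, distance(c, texture(id_tex, UV - vec2(px.x, 0.0)).rgb));
	edge += step(0.001, distance(c, texture(id_tex, UV + vec2(0.0, px.y)).rgb));
	edge += step(0.001, distance(c, texture(id_tex, UV - vec2(0.0, px.y)).rgb));
	COLOR = mix(vec4(0.0), line_color, clamp(edge, 0.0, 1.0));
}
"""
```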
My current post-processing approach uses discontinuities in the depth texture modulated by the normal texture, plus discontinuities in the normal texture, to find what to highlight, but this doesn’t seem to work well for certain contours and creases, and doesn’t seem to work consistently for the same object at different distances from the camera.
Yes, a cavity map would work for static objects, but my objects are animated, so the angles between the triangles change depending on the frame of the animation (and the animation itself is continuous, so it’s calculated each frame, not pre-rendered).
Also the outlines change depending on the angle of the camera.
I’m not sure about the performance of my idea, but I’d like to try it and see if I can tune it to what I want to do at reasonable performance.
This is more of an exploration, really; once I have a version of the algorithm running on the GPU, I’m definitely going to optimize it to move as much of the work as possible into pre-calculations (maybe at asset import time).
Okay, I think I have a glimpse of what you’re trying to achieve.
I would still say that your first approach of providing all the vertex data to your shader would hurt your performance massively. Also, if you use bone deformation, the vertex positions wouldn’t be correct, since the deformation runs in a shader and not on the vertex buffer itself. Simulating the deformations in a script would totally tank your FPS.
Let’s try a different approach: if you were to write a fragment shader, which information, in detail, would you need to get your result per pixel when rasterizing?
Maybe an extra render pass can provide you with all the information you need.
OpenGL, Vulkan, and DirectX have geometry shaders, so the underlying API of Forward+ should be capable of doing this calculation at reasonable cost.
As mentioned in this GitHub issue, Godot doesn’t plan to add support for geometry shaders, and suggests using compute shaders instead. My current understanding is that to run compute shaders inside the rendering pipeline I have to use the Compositor API, which leads back to my initial question: how do I access geometry data from a compute shader inside a Compositor (or the rendering pipeline)?
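For reference, my current understanding of the skeleton is roughly this (a sketch based on the CompositorEffect docs for Godot 4.3+; the shader path and workgroup size are placeholders, and resource creation would probably belong on the render thread in a real version):

```gdscript
@tool
class_name EdgeComputeEffect
extends CompositorEffect

# Minimal CompositorEffect skeleton: it only shows where a compute dispatch hooks
# into the frame, not the edge computation itself.
var rd: RenderingDevice
var shader: RID
var pipeline: RID

func _init() -> void:
	# Run after opaque geometry, so the frame's colour/depth already exist.
	effect_callback_type = EFFECT_CALLBACK_TYPE_POST_OPAQUE
	rd = RenderingServer.get_rendering_device()
	var glsl: RDShaderFile = load("res://edge_detect.glsl")  # placeholder path
	shader = rd.shader_create_from_spirv(glsl.get_spirv())
	pipeline = rd.compute_pipeline_create(shader)

func _render_callback(_effect_callback_type: int, render_data: RenderData) -> void:
	var scene_buffers: RenderSceneBuffersRD = render_data.get_render_scene_buffers()
	if scene_buffers == null:
		return
	var size := scene_buffers.get_internal_size()
	var compute_list := rd.compute_list_begin()
	rd.compute_list_bind_compute_pipeline(compute_list, pipeline)
	# ...bind uniform sets here: screen textures, and the merged geometry buffer...
	rd.compute_list_dispatch(compute_list, ceili(size.x / 8.0), ceili(size.y / 8.0), 1)
	rd.compute_list_end()
```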
The GPU already does the calculation of all the normals of all the visible triangles and pixels on the screen, so my estimation is that this wouldn’t take more time than rendering the same scene twice.
I found this method, and I’m trying to understand what exactly is in these buffers, and whether I can somehow get the geometry data from there?
As I mentioned above, I don’t know the performance implications yet, but I’d like to actually try it and measure it to decide whether such a shader is viable for my use case.
That’s a very good point. I didn’t know how animation affected the vertex buffers. And I do agree that simulating the deformations seems like overkill. Would it be possible to access the geometry data after the deformations are performed?
One idea I had is to use a custom discontinuity texture - essentially paint the regions I want to separate with highlight lines in different colors, and then detect the boundaries between these regions with an edge-detection algorithm.
I guess that since there are around 16M colors, it would be sufficient for all the game objects I’d potentially have in a scene (or even in a game). It’s a good suggestion. I’m also going to explore the idea of computing the custom discontinuity texture at asset import time and using it as an input to the fragment shader.
Related question: if I run a fragment shader as a 2nd pass on a ShaderMaterial, is it somehow possible to know whether it’s an “edge fragment”? Meaning that there is no adjacent fragment belonging to the same geometry next to this fragment? (This, however, won’t solve the cavity issue, but I guess I could precompute an additional cavity texture and process it inside the fragment shader to highlight cavities as a solution for static objects; I still have no idea how I would do this for animations.)
I’ve managed to find this, which is vaguely similar to what I’m trying to achieve, and it seems like they were able to solve it to some degree. So I’m pretty sure there is a way of doing it. The main question is whether there is a performant-enough way of running this kind of shader in the context of the Godot engine.
Just to add more context and clarify my intentions: I’m working on a shader that highlights certain edges in the mesh based on 2 criteria:
the edge has only 1 visible adjacent triangle (the other triangle is either invisible because of culling, or non-existent) - this is similar to the outline/silhouette/contour of an object
the angle between adjacent triangles is in a specified range - this is possibly similar to the cavity map you mentioned above
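For the second criterion, the per-edge test itself is just a dihedral-angle check, something like this (a sketch; the degree range is a made-up tuning parameter):

```gdscript
# Sketch: treat an edge as a crease when the angle between the normals of its two
# adjacent triangles falls inside a chosen range. min_deg/max_deg are placeholders.
func is_crease_edge(normal_a: Vector3, normal_b: Vector3, min_deg := 20.0, max_deg := 120.0) -> bool:
	var angle := rad_to_deg(acos(clampf(normal_a.dot(normal_b), -1.0, 1.0)))
	return angle >= min_deg and angle <= max_deg
```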
If you modify your mesh (or a proxy mesh) so that all edges are smoothed, a discontinuous edge must point “away” by at least 90°. Therefore a fragment whose normal points at least 0.5 away on x or y (screen space) must be on an outer edge. Am I right?
The cavity at the raster point could be determined by the offset from the surface normal to the triangle’s tangent. If both are equal, the point is on a flat surface. If it leans toward the triangle’s center, it’s more in a cavity; if it points away, it’s sticking out. I’m currently not sure how to get the face center, or whether the binormal would help when determining if the normal points towards or away.
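In shader terms, that first test would look roughly like this (a sketch; it assumes fully smoothed normals, NORMAL in Godot’s fragment() stage is already in view space, and the 0.5 threshold is just a starting point):

```gdscript
extends Node

# Sketch of the smoothed-normal test: NORMAL in fragment() is the interpolated
# (smoothed) normal in view space, so a fragment whose normal leans strongly
# sideways is facing away from the camera and gets flagged as an outer edge.
var rim_shader := Shader.new()

func _ready() -> void:
	rim_shader.code = """
shader_type spatial;
render_mode unshaded;

uniform float edge_threshold = 0.5;  // tunable

void fragment() {
	float edge = step(edge_threshold, max(abs(NORMAL.x), abs(NORMAL.y)));
	ALBEDO = mix(vec3(1.0), vec3(0.0), edge);  // black where the test marks an edge
}
"""
```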