Is there a "proper" way to pass information between shaders?

Godot Version

v4.3.stable.mono.official [77dcf97d8]

Question

I recently finished a proof of concept for a custom render mode that computes the average of overlapping rgb values, weighted by alpha values. (Check out this thread and this repository for more specific information.) It was a big learning experience, but as someone who is still new to shader code and Godot in general, it felt a bit hacky…

I’m left wondering if I’m missing some more efficient or more elegant solutions for my problem. In particular, what is the best way to pass information between shaders?

The linked solution above used multiple viewports, constructed purely to have information encoded in their colors for use by other shaders. In theory, their roles could be replaced by a couple of 2D arrays of floats/vectors, holding the values at each pixel on the screen. However, as far as I can tell, there’s no easy way to have a persistent variable like this that’s passed between shaders.
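
For reference, the script-side wiring for the viewport approach is just a couple of lines; the node path and uniform name below are placeholders for illustration:

# Sketch: feed one viewport's output into another material's shader as a data texture.
# "DensityViewport" and "density_map" are placeholder names.
var density_tex: ViewportTexture = $DensityViewport.get_texture()
material.set_shader_parameter("density_map", density_tex)
# The receiving shader just declares: uniform sampler2D density_map;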

The alternative would be to build this data in GDScript and use it to update a shader uniform, but I have to imagine that isn’t very efficient, since it would mean reading per-pixel information from the textures on the CPU. It feels decidedly like a task for shaders.
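
To illustrate why that feels wrong, here’s roughly what the GDScript fallback would look like; screen_w, screen_h, sprites, and the density_map uniform are placeholder names, and clipping at the screen edges is ignored:

# Sketch of the CPU fallback: touch every pixel in GDScript, every frame.
var density := Image.create(screen_w, screen_h, false, Image.FORMAT_RF)
for sprite in sprites:
	var img: Image = sprite.texture.get_image()  # GPU -> CPU copy
	var origin := Vector2i(sprite.global_position)  # edge clipping ignored
	for y in img.get_height():
		for x in img.get_width():
			var p := origin + Vector2i(x, y)
			var d := density.get_pixelv(p).r + img.get_pixel(x, y).a
			density.set_pixelv(p, Color(d, 0.0, 0.0))
material.set_shader_parameter("density_map", ImageTexture.create_from_image(density))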

Is there something obvious I’m missing here? Is the approach of encoding information for shaders in a separate viewport common, or is it frowned upon? In either case, why? Any advice from more experienced individuals would be greatly appreciated. I’m happy to elaborate on the issue as needed.

Have you looked into compute shaders? (The Compositor is only available for the 3D pipeline.)

As well as understanding Godot’s render pipeline?

Thanks for the suggestion! I wasn’t aware that compute shaders existed, and after reading up on them for a bit, it does seem like they could be useful here. However, there are a few things I’m still uncertain about. Bear with me for a second here…

From my understanding so far, compute shaders are used to run arbitrary code (i.e., not necessarily related to graphics) in parallel on the GPU. They work best when given many small tasks that are independent of one another, especially if those tasks are not framerate-dependent, so the GPU and CPU don’t have to be manually synced. However, this seems to present a problem for my project here, because:

  1. The tasks I want to perform are dependent on one another. The output data that I want to obtain (image density and pre-multiplied alpha values) would have to be modified for every input texture.
  2. The tasks have to be completed before the render pipeline starts for every frame, requiring frequent synchronization between the CPU and GPU.

It doesn’t seem impossible by any means, but I’m not convinced it’s the best tool for the job. Of course, I could be misinterpreting the use case for these shaders.

I think it’s probably worth elaborating a bit on how I want things to work. Ideally, I would love a solution that was contained within a single shader script, with a variable tied to SCREEN_UV that could be modified by, and persist across, all materials using the script. Something like this:

shader_type canvas_item;

// Magically persistent variables that are initialized each frame,
// are tied to SCREEN_UV, and can be modified between shaders
magic_var float density = 0.0;
magic_var vec4 colorSoFar = vec4(0.0);

void fragment() {

	// Total weight so far; avoid dividing by zero before anything is drawn
	float total = max(density + COLOR.a, 0.0001);

	// Compute color so far based on density map and current color
	colorSoFar.rgb = (colorSoFar.rgb * density + COLOR.rgb * COLOR.a) / total;

	// Update density map
	density += COLOR.a;

	// Update alpha
	colorSoFar.a = 1.0 - (1.0 - COLOR.a) * (1.0 - colorSoFar.a);

	// Update color
	COLOR = colorSoFar;

}

Unfortunately, as far as I can tell, this “magic” variable type doesn’t exist for shaders. Fair enough. Surely there’s a way to emulate its behavior? Maybe compute shaders are the solution after all? I can envision a solution that looks something like this:

  • Initialize a compute shader (similar to the above) with buffers for the density map and the colorSoFar 2D array, as well as the input texture and its position in pixels.
  • (The shader and buffers also need to know how many pixels the main viewport takes up, since we can’t magically rely on SCREEN_UV anymore. This bit is still nebulous to me.)
  • For each texture:
    • pass the texture and its position to the compute shader
    • sync the compute shader
  • Use the colorSoFar buffer to write a texture which is displayed to the screen.

However, it still feels like a strange way to use a compute shader, since we’d have to sync the GPU and CPU for every single texture involved. Also, I’m not sure how to reliably obtain information about textures’ positions in pixels. (If my understanding is correct, the position property on nodes can’t be used here, as it’s somewhat independent of the viewport’s actual resolution.)
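
For concreteness, here’s the rough shape of that loop as I imagine it on the GDScript side. This is only a sketch: blend_textures.glsl, the buffer layout, the 8×8 work group size, and viewport_size / input_textures are all placeholders, and binding each input texture into the uniform set is omitted.

# Rough sketch of the per-texture dispatch loop described above.
var rd := RenderingServer.create_local_rendering_device()

var shader_file: RDShaderFile = load("res://blend_textures.glsl")  # hypothetical
var shader := rd.shader_create_from_spirv(shader_file.get_spirv())
var pipeline := rd.compute_pipeline_create(shader)

# One density float per screen pixel, zeroed at the start of the frame.
var density_data := PackedFloat32Array()
density_data.resize(int(viewport_size.x * viewport_size.y))
var density_buffer := rd.storage_buffer_create(
		density_data.size() * 4, density_data.to_byte_array())

var uniform := RDUniform.new()
uniform.uniform_type = RenderingDevice.UNIFORM_TYPE_STORAGE_BUFFER
uniform.binding = 0
uniform.add_id(density_buffer)
var uniform_set := rd.uniform_set_create([uniform], shader, 0)

for tex in input_textures:
	# ...each texture and its pixel position would be bound here too (omitted)...
	var compute_list := rd.compute_list_begin()
	rd.compute_list_bind_compute_pipeline(compute_list, pipeline)
	rd.compute_list_bind_uniform_set(compute_list, uniform_set, 0)
	rd.compute_list_dispatch(compute_list,
			ceili(viewport_size.x / 8.0), ceili(viewport_size.y / 8.0), 1)
	rd.compute_list_end()
	rd.submit()
	rd.sync()  # this per-texture sync is exactly the part that worries me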

Oof. Sorry for rambling on about this. Hopefully that all makes sense. I’ve been fixating on this issue for a while now, and even though it’s little more than a theoretical exercise at this point, I’d still like to have it figured out. Basically, here are the questions I still have:

  • Am I understanding the role of compute shaders correctly?
  • Does the “magically persistent” variable that I’m dreaming of for basic shaders actually exist? If not, is there something close that I could use?
  • If I do end up needing to use compute shaders, how do I solve the problem of getting/setting data relative to textures’ pixel position?
  • Lastly, am I just way overthinking this? Maybe my original solution (encoding data in separate viewports to be sampled and passed to other shaders) is par for the course and not as hacky as it seems. I just don’t feel like I have a great frame of reference yet.

Any and all help is appreciated! I feel like I’m learning a lot here, and this community has been wonderful so far!

Not in Godot; the closest thing to getting data back from the GPU is a compute shader.

I’d try to summarize your objective, but I don’t fully understand it.

To my knowledge, Godot’s 2D renderer works in two stages.

In the first stage, each canvas item is drawn in order. The second stage is a back buffer that can return the rendered image for post-processing, kind of like a second pass, at the cost of a frame delay.

See here about screen reading shaders.
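
For reference, a minimal screen-reading canvas_item shader in Godot 4 looks like this (it just inverts whatever was drawn behind it):

shader_type canvas_item;

// Samples the back buffer copy of everything drawn before this item.
uniform sampler2D screen_texture : hint_screen_texture, repeat_disable, filter_nearest;

void fragment() {
	vec4 behind = texture(screen_texture, SCREEN_UV);
	COLOR = vec4(1.0 - behind.rgb, 1.0); // invert what was already drawn
}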

Uniforms are the only way to get custom values from the CPU to the GPU, and it’s a one-way process, since returning data from the GPU has a real cost.

Compute shaders can give you this iterative control. Alternatively, you can have one work group do a pass and hand its results to the next work group in a single dispatch, kind of like how a varying can pass values between the individual processing functions within a single shader. I have seen people use compute shaders to do ray-traced lighting, so syncing them every frame should be doable.

Godot has recently added Compositor effects, but I haven’t really understood them, and I remember reading that the Compositor only supports 3D rendering.

Gotcha. Thanks for the confirmation.

I still wish there were a simpler way to solve my problem, but at the very least, I’m beginning to understand why there isn’t. I appreciate your input, and I’m marking the issue as solved for now. It was a good learning experience for me, and I feel like I have a better grasp on the applications and limitations of shaders in Godot.

You can pass data from a compute shader to other shaders without needing to sync back to the CPU; this is what a Texture2DRD can be used for. Put a sampler2D uniform in a material’s shader, set it to a Texture2DRD in the shader parameters section, and assign that Texture2DRD the RID of the texture your compute shader writes to. This works as long as the compute shader runs on the main rendering device (not a local one).
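
A minimal sketch of that setup; the size, format, and the density_map uniform name are placeholders for whatever your compute shader actually writes:

# Sketch: share a compute-written texture with a material, no CPU readback.
var rd := RenderingServer.get_rendering_device()  # the main device, not a local one

var fmt := RDTextureFormat.new()
fmt.width = 1024
fmt.height = 1024
fmt.format = RenderingDevice.DATA_FORMAT_R32G32B32A32_SFLOAT
fmt.usage_bits = RenderingDevice.TEXTURE_USAGE_STORAGE_BIT | RenderingDevice.TEXTURE_USAGE_SAMPLING_BIT

var texture_rid := rd.texture_create(fmt, RDTextureView.new(), [])  # compute writes here

var shared_tex := Texture2DRD.new()
shared_tex.texture_rd_rid = texture_rid

# Any material whose shader declares `uniform sampler2D density_map;`
# can now sample whatever the compute shader wrote this frame.
material.set_shader_parameter("density_map", shared_tex)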

The advice about not running compute shaders on the main render thread is more for when you’re doing longer-running, compute-heavy tasks that aren’t immediately relevant to your next frame; you can totally use them for real-time purposes, before the rest of the render pipeline.

This demo project is what I’ve been referencing when trying this stuff out. They use spatial shaders, but it works with CanvasItem shaders too.

In any case, writing to viewports might be a bit of a clunky workflow, but it’s a perfectly valid approach. Compute shaders come with their own issues, tbh: setting them up can be verbose and finicky, with quite a bit of boilerplate.