Blend multiple textures using shaders

Godot Version

4.2.2

Question

I am trying to blend multiple textures by averaging the RGB values (a lerp?) and taking the product of the inverse alpha values at every pixel where they coincide. However, the default behavior of coinciding textures is for them to overlap, like so:
[image: three overlapping squares drawn with the default overlap behavior]

Instead, I would like the overlapping regions to blend together, as described above. Basically, I have two questions regarding this:

  1. Is there a simple way in the editor to accomplish this task? I.e., can I use something like a CanvasGroup to blend all the child nodes?
  2. Regardless of the answer to question 1, how would I implement a custom shader to perform this? I'm trying to get a better grasp of shaders and am not sure how to approach this problem. To be clear, the number of textures that need to be blended can change dynamically at runtime, and I am not sure how to pass this information to the shader.

Conceptually, this seems like a simple task, but I'm having a really tough time implementing it. Any advice or additional resources would be appreciated.

Make use of a CanvasItemMaterial and change the Blend Mode.
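
For reference (take this as a sketch rather than the only way), the same additive behavior can be requested from a custom canvas_item shader instead of the material property:

shader_type canvas_item;
// equivalent of setting Blend Mode to "Add" on a CanvasItemMaterial
render_mode blend_add;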

This is definitely a step in the right direction, but it's still not the desired effect. Using the initial example, I would expect the region where all three squares overlap to be "more blue" than regions where two of the squares overlap. Instead, this is the result with additive blending:
[image: the three squares rendered with additive blending]

It's close, but it's simply adding the color values, not averaging them.

Additionally, this method has the problem that all objects behind these will contribute to the additive mixing. E.g., if there is a white background behind the squares, they all become white, and I can't find an obvious way to prevent this.

So take this with a grain of salt… I'm just getting started with shaders too:

I think I know what steps you would have to take in your shader to get it to do what you want, but I don't have the skills to actually implement/test my hypothesis.

I'd start by setting up my textures as children of a CanvasGroup node with its tweaks set to 0, which is mostly just to get a single texture to write to that's the correct size to hold your other textures.

I'd make a shader on the CanvasGroup, and use uniform sampler2Ds to feed in the textures from the children, and uniform vec2s to get their positions into the shader.

Here's where it gets a bit fuzzy: you sample all the textures in your fragment function, offsetting the sampled UV by each texture's position. The math will be tricky because everything will be normalized, but I think it's possible.

Anyway, the last bit is easy: once you have the vec3s, you just add them together and divide by the number of samples.
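
Something like this is what I have in mind; it's untested and only shows two textures (tex_a/tex_b and the offset uniforms are just placeholder names you would feed from a script):

shader_type canvas_item;

// placeholder uniforms: two child textures and their offsets in the group's UV space
uniform sampler2D tex_a;
uniform sampler2D tex_b;
uniform vec2 offset_a = vec2(0.0);
uniform vec2 offset_b = vec2(0.0);

void fragment() {
	vec4 a = texture(tex_a, UV - offset_a);
	vec4 b = texture(tex_b, UV - offset_b);
	// average the colors; combine opacity as 1 minus the product of the inverse alphas
	COLOR.rgb = (a.rgb + b.rgb) / 2.0;
	COLOR.a = 1.0 - (1.0 - a.a) * (1.0 - b.a);
}

The obvious catch is that the number of sampler uniforms is fixed in the shader, which is exactly the dynamic-count problem you mentioned.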

What would be AWESOME is if you could customize render modes; I could really use that, too.

Seems to me you're only blending the green square with the other two? To get the three squares to mix properly, they all have to have the blend mode set in their CanvasItem material. The other two are just overlapping without mixing.

Also, it's quite possible I misread your intentions.

I think the OP is looking for a blend mode outside of the supported 'add', 'subtract', and 'multiply'. If that's the case, they need more bespoke shader magic. I feel their pain; I'm working on a similar problem where I need hard light, and getting multiple textures to play nice together is not coming easy.


Is this 2D or 3D?

I'm doing this all in 2D.

Thank you for the thorough reply. I'm still having trouble implementing your suggestions, but you've at least made me think a bit more about identifying the problems with my approach.

Namely, how do you convert an existing node to a texture for use in the shader? In my example with the squares, they're simply ColorRect nodes, and I assumed there would be a way to directly sample them in the shader. However, this doesn't seem to be the case.

Eventually, my implementation will use dynamic Line2D nodes, so caching static textures isn't really an option. It looks like SubViewports can be used to get around this, but that feels very hacky and inefficient, as I could potentially have hundreds of Line2D nodes that would need to be sampled individually.

I'm beginning to think that I may be approaching this problem incorrectly. If I'm using Line2D nodes, perhaps a shader interpolating between all of them just isn't an option and I'm tethered to the blend mode on canvas materials.

Of course, I'd love to be wrong here. If anyone has any additional insight into this problem, I'm all ears.

If these are different objects that must interact, you are limited by render order.
Your objects render one at a time. With a shader you get access to the result of the frame rendered so far and can read its color, then write to it, and the next sprite will get the result of that.
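
For example, in Godot 4 a canvas_item shader can read back what has been rendered behind the item so far (just a sketch; the 50/50 mix factor is arbitrary):

shader_type canvas_item;

// hint_screen_texture gives the frame as rendered up to this item
uniform sampler2D screen_tex : hint_screen_texture, filter_nearest;

void fragment() {
	vec4 behind = texture(screen_tex, SCREEN_UV);
	// blend this item's own color with whatever was drawn before it
	COLOR.rgb = mix(behind.rgb, COLOR.rgb, 0.5);
}

But that only ever sees the combined result of everything drawn so far, not each node separately.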

Yeah, this is a doozy of a problem and I'm not sure what you can do. I know I'm the blind leading the blind here, but if you could make multiple back buffer copies (one for each node) and reference their textures in a single shader, that might work… but I don't know if that's even possible.

Okay, totally just spitballing here (I hope that's okay?):
What if you (somehow?) sampled the screen and, for each pixel, got a count of how many Line2Ds were drawing at that point? That's just a big array (or dynamically generated texture) you could pass into a shader on each line (probably on a BackBufferCopy on each line, actually, so that all the shaders and the array were working in the same normalized space). For each fragment, you multiply the alpha by 1/(number of lines at that UV). That would effectively give you full opacity at each pixel, with an equal proportion supplied by each layer. Again, no idea if it's do-able, though.

Don't worry. This dialogue is much appreciated! I just looked into back buffer copies, and it seems like a really useful tool. However, as @jesusemora mentioned, I'm still limited by render order. If I'm understanding things correctly, the back buffer copy gets the entirety of the viewport that has been rendered so far. This still doesn't isolate the graphics associated with each node.

As for manually extracting the fragment data for each line, I can't think of a way to do this either. My Line2D objects have both a gradient and a width curve, and I assume there's no easy way to reverse engineer the output.

Again, thank you for bringing all this up, even if it doesn't seem like a solution just yet. I'm still learning a lot through this process!

I've been mulling this over (more interesting than my job today) and I think I might have a plan of attack. It's hacky and unproven, but here I go!

First, each Line2D is the child of a BackBufferCopy set to the rect of the screen/viewport. I think this will give you the line as a texture on transparency that you can work with in the shader. It has the advantage of every Line2D being normalized to the same size (i.e., a given UV is in the same location on every line).

Next, you set up what I'm going to call your density map. This is a texture that stores how many lines are present at a given pixel, and it's where this idea gets a bit silly. You create a viewport with the same pixel dimensions as your screen space (or whatever you defined in your BackBufferCopy) and, for every line in your project, you create a corresponding line in the viewport. Basically you are mirroring everything from your main scene here, so it'll need to update accordingly.

You set the color of all of those lines to a very dark gray, something like Color(1.0/256, 1.0/256, 1.0/256), and give them an "add" canvas item shader. This will result in a texture that gets lighter where there are more lines and darker where there are fewer. If there are no lines the color should be (0, 0, 0); if there is one line, (1/256, 1/256, 1/256); two lines would be (2/256, 2/256, 2/256); and so on.
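
If you want the additive part as an explicit shader on those mirrored lines rather than a CanvasItemMaterial with Blend Mode set to Add, it could be as small as this (untested sketch):

shader_type canvas_item;
render_mode blend_add;

void fragment() {
	// each mirrored line adds 1/256 per channel, so N overlapping lines read back as N/256
	COLOR = vec4(vec3(1.0 / 256.0), 1.0);
}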

Alright, you send that viewport texture to a uniform sampler2D on each line in your real scene. You can do this via script using material.set_shader_parameter(). Not sure if you have to send it each frame, and not sure if tree order will affect this… but now you have the data you need.

Each BackBufferCopy will need to use the "add" render mode. Each fragment will look something like this:

shader_type canvas_item;
render_mode blend_add;
uniform sampler2D density_map;
void fragment() {
	COLOR.a = 1.0 / (texture(density_map, UV).r * 256.0);
}

Which should drop each line's alpha to 1/(number of lines found at a given pixel), reading the count back from the density map's color channels; the add blend then sums them all together. Functionally you are averaging, just with the order of operations reversed: dividing each color by n and then summing gives the same result as summing first and then dividing by n.

Ooft. That was an essay! And a fun thought experiment. Seems like a lot of work; I wish there were more render modes!!! Again, totes unproven, but it might work ¯\_(ツ)_/¯.

PS. Because you are using the add mode in the shader, too, you might want to build all your real lines in a viewport. Then you can lay the whole viewport over your background without having to worry about it being weird with the render mode.


Well, it took longer than I was hoping, but I finally finished a proof of concept that achieves the desired effect! Thanks so much @ThreadAndSandpaper for your last post. It was a huge help in finally figuring out how to do this. Encoding information in viewports helped immensely, and I actually didn't even need to perform any back-buffer reads beyond the final pass with the CanvasGroup to ensure that transparency was properly preserved. You can find the completed project here:

All in all, this was a BIG learning experience for me. I hit more roadblocks than I'd like to recount while putting this all together, so I'll just give a couple of the highlights here:

  • Information that needs to be altered by and passed between shaders can be encoded in a SubViewport and retrieved using a uniform sampler2D (a stripped-down sketch of this pattern follows this list). This still feels a bit hacky to me, and I don't have a great idea of what the performance implications are for such a method, but hey, it works! I've actually started a separate thread to discuss this further.
  • CanvasGroups are a great way to separate a texture's render mode from the rest of the canvas. This is how I was able to use additive blending for the main computations but not in the end result. (I.e., the result can be rendered on a white background without getting completely washed out.)
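
To give a feel for that first point, here is a stripped-down sketch (not the project's actual shader) of reading a SubViewport's texture inside a canvas_item shader; the density_map name and the normalization step are only illustrative:

shader_type canvas_item;

// assigned from a script, e.g. material.set_shader_parameter("density_map", $SubViewport.get_texture())
uniform sampler2D density_map;

void fragment() {
	// recover the per-pixel count encoded in the viewport and use it to normalize
	float count = max(texture(density_map, SCREEN_UV).r * 256.0, 1.0);
	COLOR.rgb /= count;
}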

I'm marking the thread as solved for now, but if I think of any ways to refine/simplify the solution I'll post about it here again.


Holy cow! I'm so glad it worked! Great job seeing it through. I'm sure it was full of unexpected complications, but I'm so glad you got there <3
