Godot Alpha Sorting Effect

Godot version: 4.4.1 (stable)

I am trying to achieve a specific effect involving how transparent textures are drawn over each other based on their alpha values. Specifically, I want sprites with higher opacity to completely overwrite lower-opacity ones in areas where they overlap, rather than blending.

The first image shows the regular behavior: five circles with different alpha values are drawn. When they overlap, the result is standard alpha blending, and the in-between areas blend smoothly.

The second image shows MY intended behavior: circles with higher opacity values overwrite the ones with lower opacity in the overlapping regions. There is no blending; only the topmost visible pixel remains.

This might seem counterintuitive to typical transparency rendering, but it is the intended effect I want to create. I have tried multiple methods. I looked into CanvasGroup, but it only helps when all alpha values are the same. I attempted a system using a SubViewport where I render each sprite in order from highest to lowest alpha. I also tried writing a shader that samples the previous texture and only draws pixels where the alpha of the previous layer is zero.

Despite trying these approaches, I could not get the full effect working. Either the masking fails or blending still occurs in some form. So, is there a correct way to implement this effect in Godot 4.x?

Ideally, I want a system where I can render multiple sprites into a SubViewport or similar, and have each sprite draw only on pixels that have not already been written to by a higher-opacity sprite. If this requires a specific shader or rendering technique, I would appreciate a clear explanation, ideally with an example.

Please do NOT just link to documentation or another post without explanation. I have already read much of the documentation, and context is what I’m missing.

Thank you, and sorry if this is a lot of details lol.

Have you tried creating an image and using blit_rect()?

Something like…

var texture1: Texture2D
var texture2: Texture2D

var image1 = texture1.get_image()
var image2 = texture2.get_image()

var width = texture1.get_width() + texture2.get_width()
var height = texture1.get_height()

var image: Image = Image.create_empty(width, height, false, Image.FORMAT_RGBA8)

# Note: blit_rect() requires the source and destination images to share the same format.
image.blit_rect(image1, Rect2i(0, 0, texture1.get_width(), texture1.get_height()), Vector2i.ZERO)

var texture2_position = Vector2i(texture1.get_width() - 20, 0)
image.blit_rect(image2, Rect2i(0, 0, texture2.get_width(), texture2.get_height()), texture2_position)

var new_texture = ImageTexture.create_from_image(image)

Hi, I tested this out, and it just cuts off the entire rectangular area of the second texture, which is unfortunately not what I am looking for. The intended result should respect the actual “non-empty” pixels of the texture. Presumably these textures would start out in something that can display them first, like a Sprite2D, so that their alpha value can be edited.

Is this 2D or 3D?

If your intent is 2D, I’d suggest you actually do the draw in 3D, use an orthographic projection for your camera (that is, things don’t get smaller with distance), and draw each sprite at a depth based on its alpha value, with more opaque things being closer to the camera.

This technique won’t work very well if there’s a lot of alpha variation within an individual sprite (like, say, an alpha gradient across the body of the sprite), but for sprites where the alpha is fairly uniform and is either some value, transparent, or maybe some fuzzing at the edges, it ought to do what you want.
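A minimal GDScript sketch of this idea follows (the texture path, alpha values, and positions are placeholders). One caveat to be aware of: transparent materials don’t write to the depth buffer by default, so for the clipping to actually happen you may need Sprite3D’s alpha_cut modes or a spatial shader with depth_draw_always.

```gdscript
# Sketch: more opaque sprites are placed closer to an orthographic camera,
# so the depth buffer clips less opaque sprites wherever they overlap.
extends Node3D

func _ready() -> void:
	var camera := Camera3D.new()
	camera.projection = Camera3D.PROJECTION_ORTHOGONAL
	camera.position = Vector3(0, 0, 10)
	add_child(camera)

	var alphas := [1.0, 0.8, 0.5, 0.3]
	for i in alphas.size():
		var sprite := Sprite3D.new()
		sprite.texture = preload("res://circle.png")  # placeholder texture
		sprite.modulate.a = alphas[i]
		# Map alpha to depth: higher alpha sits closer to the camera.
		sprite.position = Vector3(i * 0.5, 0.0, alphas[i] * 5.0)
		add_child(sprite)
```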

Failing that, I think you’re into writing shaders and possibly trying to see if you can get a stencil buffer.


That would be blend_rect(). The problem is that it’s going to apply the alpha values of both, I believe. Maybe? It might not, actually; it may only apply the blend values of the last one. It’s what I use to create large images from smaller ones. I just assumed you’d tried it and it didn’t work.

var texture1: Texture2D
var texture2: Texture2D

var image1 = texture1.get_image()
var image2 = texture2.get_image()

var width = texture1.get_width() + texture2.get_width()
var height = texture1.get_height()

var image: Image = Image.create_empty(width, height, false, Image.FORMAT_RGBA8)

image.blend_rect(image1, Rect2i(0, 0, texture1.get_width(), texture1.get_height()), Vector2i.ZERO)

var texture2_position = Vector2i(texture1.get_width() - 20, 0)
image.blend_rect(image2, Rect2i(0, 0, texture2.get_width(), texture2.get_height()), texture2_position)

var new_texture = ImageTexture.create_from_image(image)

But have you read this?

Take a look at behind the scenes portion where multiple nodes use the same screen buffer but the render order of the nodes dictates the result of the draw calls.

The drawback is you have multiple nodes to deal with in a scene.

Another approach is to use a shader with an algorithm to draw a filled circle, or use a texture of a circle to sample, and just draw it as many times as you need with offsets, starting with the least alpha first. The final color will be whoever edited the fragment last.
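The repeated-circle approach can be sketched as a canvas_item shader (the circle centers, radii, and alphas below are illustrative). Since each later circle assigns COLOR outright instead of blending, whoever touches the fragment last wins:

```glsl
shader_type canvas_item;

uniform vec4 base_color : source_color = vec4(1.0, 1.0, 1.0, 1.0);

void fragment() {
	// Offsets and alphas for three circles, ordered least alpha first.
	vec2 centers[3] = vec2[3](vec2(0.35, 0.5), vec2(0.5, 0.5), vec2(0.65, 0.5));
	float alphas[3] = float[3](0.3, 0.6, 1.0);

	vec4 col = vec4(0.0);
	for (int i = 0; i < 3; i++) {
		if (distance(UV, centers[i]) < 0.15) {
			// Overwrite, don't blend: the last circle to cover
			// this fragment fully replaces the earlier ones.
			col = vec4(base_color.rgb, alphas[i]);
		}
	}
	COLOR = col;
}
```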


yes, this is strictly 2D (I guess I should have mentioned that). I appreciate the 3D suggestion though; it’s a workaround I hadn’t considered. Mostly because I’ve never touched 3D (lol). I was hoping to keep it in a pure 2D pipeline for simplicity and performance — ideally with a shader or image-based method that lets me layer transparent textures without the blending. But I will attempt your suggestion and let you know if I get anywhere. Thx.

I did give your code a try, and the result was the image I posted before. I think the issue I’m running into is that it still applies blending in the traditional way, where both images influence the final result.

What I’m trying to achieve is more like “draw this second image, but only in pixels that weren’t touched by the first” — so instead of layering opacities, the more opaque sprite just overwrites weaker ones entirely in the output. If blend_rect() only applies the latest image’s blend values, then that might actually help, but from what I could tell it didn’t seem to work. Thx though.


With an ortho camera you can kind of treat 3D as just 2D + depth.

Yes, I have read it before, and from the example images it is unclear whether two textures of different opacities would produce the same effect (though I’m assuming it’s no different). So, the solution is probably in here. I am just unsure of how to incorporate it, as I am fairly new to this. But I will spend more time with it and see what I can do. Thx.

If anyone is curious, I think I have the solution; it was a lot simpler than I expected (though it may be inflexible for now). You can set up a simple scene that displays some sprites, order them to your liking, and simply apply this shader I made to them.

shader_type canvas_item;

uniform sampler2D screen_texture : hint_screen_texture;
uniform float alpha_255 : hint_range(0, 255, 1) = 255.0;

void fragment() {
    vec4 screen_col = texture(screen_texture, SCREEN_UV);
    vec4 sprite_col = texture(TEXTURE, UV);
    
    float alpha = alpha_255 / 255.0;

    // Skip fragment if the sprite's alpha is nearly zero (empty space);
    // alpha is never negative, so a plain comparison is enough.
    if (sprite_col.a < 0.001) {
        discard;
    }

    // Blend visible parts only
    COLOR = mix(screen_col, sprite_col, alpha);
}

You can see in this image that the transparent “Godots” (or whatever you call the little guy) do not blend with one another despite being transparent, as indicated by the red line behind them all, which shows that they are still technically see-through.

When you use hint_screen_texture, each sprite samples a snapshot of the screen taken before it is drawn. So if multiple sprites overlap and they all use this effect, they end up sampling the same unmodified background, not each other. At least, that is what I believe is going on; I’m still kind of new to this.

So, thanks, pennyloafers, for suggesting the documentation (despite my objecting to it; ironic). I guess I just needed some direction. :slight_smile:


Here is an alternative without the screen texture. The only drawback is it could be tedious to write many trails, and you need to expand the region of the sprite (you could probably build a loop for the number of repeats).

I had some issues with custom regions bleeding textures with opaque pixels on the edge of the texture, so I made a circle with transparent parts on all edges.


Hey @Mooksamill207,
If you are okay with using 3D, I think I might have a better solution for you, one which does not require the use of a screen_texture and instead takes advantage of depth sorting in 3D. It will require a viewport if you want to render the result to a Sprite2D. During my test, I used an orthogonal camera. The ordering can be controlled via sorting_offset in VisualInstance3D and depth under GeometryInstance3D > Instance Shader Parameters.


Here is the shader code:

shader_type spatial;
render_mode depth_draw_always;

uniform sampler2D _texture : source_color;
uniform float _alphaCutoff;

instance uniform float alpha = 0.5;
instance uniform float depth = 0.5;

void fragment() {
	vec4 color = texture(_texture, UV);
	ALBEDO = color.rgb;
	ALPHA = alpha * color.a;
	DEPTH = depth;
	if(color.a <= _alphaCutoff) discard;
}

Hope this helps!

Hello! I believe someone also suggested this camera method. Is there a specific reason why I shouldn’t use screen_texture like in my solution? I’m guessing it’s not great for performance, but so far I haven’t noticed anything. If you could explain why this solution is better, then I might be willing to try it out. So far, my method seems the best fit for my project, as its simplicity helps me work with it.

I suggested it above as a possible method, but if the method you’re using works, I’d say stick with what you have.

The main advantage of it is it’s relatively simple; the ortho cam means everything stays the same size, and the depth buffer handles clipping pixels for you. The only caveat is you need to make sure you draw the nearest (most opaque) things first, so the depth buffer prevents further (less opaque) things from being drawn.
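The draw-order caveat can be handled in script. A minimal sketch, assuming an orthographic camera at positive Z looking toward negative Z (function and variable names are illustrative):

```gdscript
# Sketch: place more opaque sprites nearer the camera so the depth
# buffer clips less opaque ones wherever they overlap.
func place_by_opacity(sprites: Array[Sprite3D]) -> void:
	# Sort most opaque first.
	sprites.sort_custom(func(a, b): return a.modulate.a > b.modulate.a)
	for i in sprites.size():
		# With the camera at positive Z, a larger Z value is closer
		# to it, so the most opaque sprite gets the largest Z.
		sprites[i].position.z = float(sprites.size() - i)
```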


I see, you explained it as a 2D camera with depth. Does this mean it would be rather simple to integrate with a project that has been using a Camera2D? What I mean is, can I replace the main game camera with this one (and can it produce similar results), or must it only be used for the effect?

You can do everything with an ortho camera, but there are caveats. The main one is that Godot differentiates between 2D and 3D objects, so (for instance) all your Sprite2D nodes would need to become Sprite3D nodes, and you’d need to specify an extra argument for rotation (since you’d need to tell it you want to rotate around the Z axis, whereas that’s implicit in 2D).

If you can live with that, you can do everything in 3D+ortho that you could do in 2D, plus you have the depth buffer to help with layering and compositing.
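The rotation difference mentioned above is small in practice; a sketch (the node paths are placeholders):

```gdscript
# In 2D, rotation is implicitly around the screen-facing axis:
$Sprite2D.rotation = PI / 4.0

# In 3D you must name the axis; for 2D-style spinning, rotate around Z:
$Sprite3D.rotate(Vector3(0, 0, 1), PI / 4.0)
```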

Ok, thanks for answering I’ll keep this in mind if I get around to trying it.
