Well, it took longer than I was hoping, but I finally finished a proof of concept that achieves the desired effect! Thanks so much @ThreadAndSandpaper for your last post; it was a huge help in figuring out how to do this. Encoding information in viewports helped immensely, and I didn't even need any back-buffer reads beyond the final CanvasGroup pass to make sure transparency was preserved. You can find the completed project here:
All in all, this was a BIG learning experience for me. I hit more roadblocks than I’d like to recount while putting this all together, so I’ll just give a couple of the highlights here:
- Information that needs to be altered by and passed between shaders can be encoded in a SubViewport and retrieved through a uniform sampler2D. This still feels a bit hacky to me, and I don't have a great sense of the performance implications of this method, but hey, it works! I've started a separate thread to discuss this further. (There's a rough sketch of the setup after this list.)
- CanvasGroups are a great way to separate a texture's blend mode from the rest of the canvas. This is how I was able to use additive blending for the main computations but not in the end result (i.e., the result can be rendered on a white background without getting completely washed out). See the second sketch below.
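
For anyone who finds this later, here's roughly what the first bullet looks like in practice. This is a minimal sketch for Godot 4.x, not the actual project code: the node names (DataViewport, OutputRect) and the uniform name data_tex are placeholders I made up for the example.

```gdscript
# Minimal sketch (Godot 4.x). Assumes a SubViewport child named "DataViewport"
# that renders the intermediate pass, and a ColorRect named "OutputRect" with a
# ShaderMaterial that consumes it. Names here are placeholders, not from the project.
extends Node2D

func _ready() -> void:
	var viewport: SubViewport = $DataViewport
	# Keep the viewport redrawing every frame so the encoded data stays current.
	viewport.render_target_update_mode = SubViewport.UPDATE_ALWAYS
	# Hand the viewport's texture to the consuming shader as its sampler2D uniform.
	var mat: ShaderMaterial = $OutputRect.material
	mat.set_shader_parameter("data_tex", viewport.get_texture())
```

On the shader side, the encoded data just comes back out of a regular texture read:

```glsl
// Godot shading language (canvas_item): read the data encoded by the earlier pass.
shader_type canvas_item;

uniform sampler2D data_tex;

void fragment() {
	// Sample whatever the SubViewport pass wrote at this UV.
	vec4 encoded = texture(data_tex, UV);
	COLOR = encoded;
}
```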
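
And here's a rough sketch of the CanvasGroup idea from the second bullet. Again, this is an illustration rather than the project code: it assumes the additive "accumulation" pass is drawn by ordinary CanvasItem children under a CanvasGroup, which then composites their combined result onto the canvas with normal blending.

```gdscript
# Minimal sketch: children blend additively with each other, but the CanvasGroup
# draws their combined result with normal (mix) blending, so the output doesn't
# get washed out when rendered over a white background.
extends CanvasGroup

func _ready() -> void:
	var add_mat := CanvasItemMaterial.new()
	add_mat.blend_mode = CanvasItemMaterial.BLEND_MODE_ADD
	for child in get_children():
		if child is CanvasItem:
			child.material = add_mat
```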
I’m marking the thread as solved for now, but if I think of any ways to refine/simplify the solution I’ll post about it here again.