Hi all. I’m working on recreating a piece of video art from the late 70s in VR with Godot. I have a simplified version of the project here for your perusal.
Here you can see a screenshot from this project.
![Screenshot of the project](https://godotforums.org/assets/files/2024-01-23/1706014119-688281-image.png)
Because I need to target standalone headsets, I figured all the fancy GI techniques are off-limits. I’m also testing on my phone (a Pixel 7) for good measure. I have yet to receive my dev headset, so I don’t know what it will be, and I have to aim for the greatest common feature set, which also makes deployment easier.
My approach to fake GI is to place a very wide spotlight in front of each “screen” (there are three in the actual piece), sample the video texture at five points, average them, and use that average as the light’s colour. The final effect is good enough when you have three screens on three of the four walls, so I’m happy with it.
What I’m not happy with is the performance. When testing on my laptop (16" MBP M1 Max), I get a very commendable 250 fps, but if I turn off the sampling, I shoot up to around 1200 (vsync off). On my phone, fake GI off yields about 75 fps, but fake GI on drops down to about 20 fps. The way I do the sampling is to pull the texture from the GPU’s memory into the CPU’s memory, do the sampling on the CPU, and assign the colour to the light. It’s been pointed out to me that this is Very Bad Indeed™ – I did suspect so but hey, it’s worth being told off sometimes.
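In case it’s useful, this is the gist of the sampling code (a simplified sketch, not the exact script from the project; the node names and the choice of the five sample points are just illustrative):

```gdscript
# Naive approach: read the whole video frame back to the CPU every
# frame, average five pixels, and tint the spotlight with the result.
func _process(_delta: float) -> void:
	_sample_screen()

func _sample_screen() -> void:
	var tex: Texture2D = $VideoStreamPlayer.get_video_texture()
	if tex == null:
		return
	var img: Image = tex.get_image()  # GPU -> CPU readback: the slow part
	var w := img.get_width()
	var h := img.get_height()
	# Five sample points: the centre plus the four quarter points.
	var points: Array[Vector2i] = [
		Vector2i(w / 2, h / 2),
		Vector2i(w / 4, h / 4),
		Vector2i(3 * w / 4, h / 4),
		Vector2i(w / 4, 3 * h / 4),
		Vector2i(3 * w / 4, 3 * h / 4),
	]
	var sum := Color.BLACK
	for p in points:
		sum += img.get_pixel(p.x, p.y)
	$ScreenSpotLight.light_color = sum / points.size()
```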
One thing I tried is throttling the sampling using a looping Timer, and that does make things slightly better (I get about 50 fps on the phone), but the stutter makes this approach undesirable. You can try the effect yourself by pressing `t` while running the project, or by tapping the screen on your phone.
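The throttling itself is nothing fancy; it’s along these lines (again a sketch, reusing the hypothetical `_sample_screen()` from above):

```gdscript
# Run the expensive readback on a looping Timer instead of every frame.
# Cheaper overall, but the light colour now changes in visible steps,
# which is where the stutter comes from.
func _ready() -> void:
	var timer := Timer.new()
	timer.wait_time = 0.1   # seconds between samples
	timer.one_shot = false  # keep looping
	timer.timeout.connect(_sample_screen)
	add_child(timer)
	timer.start()
```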
I’ve received a number of suggestions, including:
- Using shaders: I’m not sure how to use shaders for this. I’m only familiar with visual shaders, and I don’t know how compute shaders work. My feeling is that I’d need a shader that takes in the texture, does the sample-and-average, and spits out a single colour value. I’m barely familiar with how fragment shaders work, so I can’t see them as a sensible approach, but maybe other types of shaders could help here? Are compute shaders even supported on Android?
- Ray casting to get the colour: I’m not sure how this would work. I assume it means using a fragment shader on the viewport to do the casting, but that wouldn’t be a good idea because I’d lose access to the screens when they move out of view. Alternatively, it could be done entirely on the CPU, casting rays from somewhere in the room and sampling the surface colours of the “screens”. I can’t see how much of a performance advantage this would give me, but if it avoids grabbing the entire texture, it could potentially work. I’ve never done ray casting, though, so I’d still be dead in the water.
- Using reflection probes: unless they have an API that can give me an average colour, this would still require grabbing the probe’s texture, right?
- Using mipmaps: this would make sense if these were static pictures (at which point I could just bake the GI and be done with it), but they’re videos, so I’m guessing no mipmaps are generated. My understanding of the idea is sketched just after this list.
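If I understand the mipmap suggestion correctly, it amounts to something like this hypothetical helper, which collapses an image down its mip chain to 1×1 (the overall average). It still needs the full GPU-to-CPU readback, though, so it wouldn’t help with video:

```gdscript
# Hypothetical: average colour via the mip chain. The 1x1 level of a
# mipmapped image is (approximately) the average of the whole texture.
func average_colour(tex: Texture2D) -> Color:
	var img := tex.get_image()  # still a full GPU -> CPU readback
	# Trilinear interpolation filters through the mipmap levels.
	img.resize(1, 1, Image.INTERPOLATE_TRILINEAR)
	return img.get_pixel(0, 0)
```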
The mipmaps suggestion gave me an idea: create a severely scaled-down version of each video, load it alongside the high-res version, and do the sampling on the much smaller texture. I haven’t tried this yet but, assuming that pulling a, say, 16×9 image from the GPU is cheaper than pulling a 1920×1080 one, I’d then have to trust the two `VideoStreamPlayer`s to stay in sync for long enough.
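Sketched out, that would look something like this (node names and the low resolution are made up for illustration):

```gdscript
# Two players fed by the same clip at different resolutions: the full-res
# one is what the viewer sees, the tiny one exists only to be sampled.
@onready var screen_player: VideoStreamPlayer = $ScreenPlayer      # the 1920x1080 clip
@onready var sampling_player: VideoStreamPlayer = $SamplingPlayer  # e.g. a 32x18 encode

func _ready() -> void:
	# Start both at once and trust them to drift slowly enough.
	screen_player.play()
	sampling_player.play()
	# _sample_screen() would then read sampling_player.get_video_texture()
	# instead of the full-resolution texture.
```

If they do drift, I suppose periodically copying `stream_position` from one player to the other could nudge them back together.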
I may be approaching this entirely wrong, so I’m very open to any suggestions.
EDIT: The scaled-down video idea does make things a bit better, but I’m still at around 45 fps (up from 20) with a single video, and I can’t imagine three would fare any better. The timer throttling gets me to 60 fps on mobile with a 1/10 s interval and one video, so that could potentially work.
EDIT 2: With three videos and the pre-scaled-down duplicates I can get up to about 50 fps on my phone, and around 360 on my laptop. I just have to hope that the VR headset has better hardware than my phone, or I won’t be hitting 90 fps any time soon with this approach.