Path tracing fork's future?

Totally agree. I wrote the same before: we're not there yet, but I'd love to have it (independent from Nvidia). And this fork is only an experiment, with no direct impact on "our" Godot engine. But I think it's interesting to see. Given the DLSS5 horror and how the fork was started, it's a bit concerning.

For the love of God(ot), please keep the replies on-topic, which is primarily path tracing. Discussing AI is not forbidden on this Forum, but off-topic discussions violate the rules.

https://forum.godotengine.org/faq

I have cleaned up this topic leaving only the on-topic replies, as I believe the subject itself is interesting enough to keep the discussion going, if you’re willing to keep it civil and on-topic.


This looks very good indeed.

Real-time ray tracing, or alternatively path tracing, has always been a holy grail for real-time graphics. Therefore it's obviously an attractive option for hobbyists and high-end PC users. Maybe it needs a bit of community support before it can truly shine. I think the difference comes down to how the screen is processed: rasterizers iterate over triangles and find the pixels they cover, while ray tracers iterate over pixels and cast rays into the scene.

I believe the fork has a future (albeit potentially short-lived and very niche), especially if usage on AMD hardware gets a solution comparable to DLSS 4.5 (perhaps FSR), and it could also be used as a proof-of-concept demo.

Apparently path or ray tracing isn't fully implemented in Vulkan yet, but there are hopes that it will be soon. I don't know exactly what Blender's Eevee does, but it's very close to path tracing in quality and also very fast.

I'm happy with the results from VoxelGI as a lighting model, as I can move lights and the lighting on the player changes when walking through the voxels. It is also stable and fast to compute. The drawbacks are, of course, the lack of moving objects and the visual boundary.

There's a lot more to making games than just graphics, so when new engine features appear, the fork can either attempt to keep up by merging with newer versions, or simply freeze and wait for someone else to do it. In my experience, a lot of forks fail to merge or sync with updates, as the more successful original evolves and goes through logical and architectural convolutions that tend to expose problems in the fork.


People seem to forget that the main problem with path tracing is not light. It's scene geometry. The traditional approach to realtime rendering is all about optimization and reduction: everything that is not seen is eliminated.

True path tracing is antithetical to that. Everything must be processed, seen or not; otherwise there is nothing for light to bounce off. In a sense, it's pure brute force.

It needs a completely different way of managing scene geometry in hardware, which would require an order of magnitude more computational and storage resources.
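A toy 2D sketch of that point (the whole scene here is made up for illustration): a light and an occluder both sit behind the camera, so a "render only what's visible" frustum cull removes the occluder, and a shadow ray traced from an on-screen point then gets the wrong answer.

```python
import math

def occluded(point, light, circles, eps=1e-6):
    """Trace a shadow ray from `point` toward `light`; True if any circle blocks it."""
    ox, oy = point
    lx, ly = light
    dx, dy = lx - ox, ly - oy
    dist = math.hypot(dx, dy)
    dx, dy = dx / dist, dy / dist
    for (cx, cy, r) in circles:
        # Ray-circle intersection: solve t^2 + b*t + c = 0 for a unit-length ray.
        fx, fy = ox - cx, oy - cy
        b = 2 * (dx * fx + dy * fy)
        c = fx * fx + fy * fy - r * r
        disc = b * b - 4 * c
        if disc < 0:
            continue
        sq = math.sqrt(disc)
        for t in ((-b - sq) / 2, (-b + sq) / 2):
            if eps < t < dist - eps:
                return True
    return False

# Camera at the origin looks down +x; only x > 0 counts as "visible".
full_scene = [(-1.0, 0.0, 0.5)]                     # occluder behind the camera
culled_scene = [c for c in full_scene if c[0] > 0]  # frustum-culled: empty

p = (5.0, 0.0)        # a surface point that IS on screen
light = (-3.0, 0.0)   # light behind the camera

print(occluded(p, light, full_scene))    # True  - correctly in shadow
print(occluded(p, light, culled_scene))  # False - culling broke the lighting
```

With the occluder culled away, the on-screen point is lit when it should be shadowed, which is exactly why a path tracer has to keep the whole scene resident.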


But even in Blender's Cycles, where light gets so much attention, transparent materials such as glass make renders expensive.


As far as I'm concerned, raytracing/pathtracing can't save mediocre art direction, and a good-looking game can have it but will never need it.

The latest Doom won't start at all on my GTX 1070 because hardware raytracing is required. On the other hand, games on Fox Engine (Metal Gear Solid) and RE Engine (Resident Evil) that are way more "technically" impressive run excellently.

Good art direction can also totally save a game with a horrible rendering implementation. Most AA and AAA games (i.e. sloppily implemented Unreal Engine) have horrid shadow and hair rendering; I mean, look at this mess compared to RE Engine.

Yet games like the pictured Expedition 33 pull through because the game has an art direction, instead of just trying to do realism.


Unfortunately there are many that will disagree, and they have the backing of major players. Welcome to the future. :sob:

Nvidia is getting a lot of backlash online for DLSS5.
It's a genius move by Nvidia, though. It could be the end for AMD and Intel GPUs on PCs. If AMD and/or Intel come up with a similar genAI solution, they'll have to use different training data than Nvidia, and the result will always look totally different from the Nvidia rendering. Nvidia will never give its training data to AMD or Intel to reproduce the same results and build a standard for genAI enhancement.
I hope the tech will never get accepted by the users.

DLSS is precisely the result of this mindless “moar realism pls tnx” attitude. Producing functional path tracing hardware would require years of architecture development. At this point it’s a no-brainer for nvidia et al. Why would they burn money on that when they can profit by simply running some readily available realtime generative post processing on top of the conventional output and say “look how realistic it is, exactly what you have been asking for y’all… What? It pushes the look even deeper into the uncanny valley? Well who cares about that when reflections and bounce light are oh so photo-realistic… at least within a still frame.” :smiley:


Stupid thing is, once we get to photorealism there's nowhere left to go anyway (graphically speaking).

All this does is accelerate that endgame.


Going back to the original topic, I think path tracing's future is a dead end. If you watch the video in @conz3d's link, they show a game enhanced by DLSS5 that didn't have path tracing. And it's pretty clear that for Nvidia it makes sense to try and outperform other hardware vendors. That's how they put 3dfx out of business in 2000 (and then bought them). That time, they bet on hardware acceleration over software acceleration. Now they're betting on AI acceleration over relying on developers to implement things. It's potentially a way for Nvidia to dominate the market again.


I think the better approach is to recognize that the light flux stays the same when only the camera moves; there's no need to recompute all the shadows and bounces until a light moves. This is what a lot of the rasterizing algorithms do anyway, shadow maps and voxel GI being the obvious examples. The surfaces that are illuminated already get recomputed for each camera angle with ambient + diffuse + specular, where GI and skylight go into the ambient term, the diffuse term uses a BRDF (a function of the view and light directions), and specular is raycast anyway.

The problem is that the light flux (computed this way) isn't adaptive to different viewpoints, and the level of detail stays the same: the number of rays per unit volume is constant.

So the problem is how you calculate the difference between a sufficiently high-detail shadow map that satisfies the level of detail close up, and a raycast calculation of the same thing.

On the other hand, I wonder if there is a better view-dependent algorithm.

That's not path tracing then. BRDFs are by definition view dependent.

Yeah, I added that as part of the point that rasterizers already take advantage of the fact that light doesn't need to be recomputed into shadow maps until it moves. But BRDFs and specular lighting already compute rays as they bounce between light and camera, so yes, a bit inconsistent; still, there's not much difference between raycasting and rasterizing for very diffuse surfaces.
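The diffuse-vs-specular split being discussed can be shown with the two simplest textbook models, Lambert and Phong (chosen here purely as illustration; all vectors and values are made up): the Lambertian term ignores the view direction entirely, while the Phong specular term changes as the camera moves.

```python
import math

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def lambert(albedo, n, l):
    """Lambertian diffuse: depends only on the light direction, never the viewer."""
    return albedo / math.pi * max(0.0, dot(n, l))

def phong_spec(n, l, v, shininess=32):
    """Phong specular: reflect l about n, then compare with the view direction."""
    d = dot(n, l)
    r = (2 * d * n[0] - l[0], 2 * d * n[1] - l[1], 2 * d * n[2] - l[2])
    return max(0.0, dot(r, v)) ** shininess

n = (0.0, 0.0, 1.0)                         # surface normal
l = (0.0, 0.0, 1.0)                         # light straight above
v1 = (0.0, 0.0, 1.0)                        # camera looking straight down
v2 = (0.0, math.sqrt(0.5), math.sqrt(0.5))  # camera at 45 degrees

print(lambert(0.8, n, l))                           # identical for any view
print(phong_spec(n, l, v1), phong_spec(n, l, v2))   # highlight depends on view
```

So caching the diffuse part across camera moves is plausible, while the specular highlight has to be re-evaluated per view, which matches the point above.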

Perhaps Nvidia's labs have run tests on very high-definition shadow maps and found that so many compute cores are utilized that they can just as easily raycast with a modern algorithm.

Well, that's the thing: with proper path tracing, which results in that coveted "photo realism", all those concepts don't apply. There's no specular or shadows or shadow maps etc. There's only a bunch of BRDF surfaces and an insane number of rays. That's all there is to it. The complete history of rendering optimizations/algorithms becomes void; path tracing doesn't care for any of it. It's the ultimate brute-force approach. Rendering with it is extremely simple but also extremely expensive: you just need a lot of RAM to store the whole scene geometry and a lot of compute power to scatter the rays on it, or into it if you want to go volumetric. Easier said than done in real time, though.
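To make the "scatter rays and average" idea concrete, here's a toy Monte Carlo sketch (not engine code; the setup is arbitrary): uniformly sampling directions over the hemisphere and averaging the cosine term converges to the analytic value of that integral, which is pi, by nothing but brute force.

```python
import math
import random

def cosine_integral_mc(samples, rng):
    """Estimate the hemisphere integral of cos(theta) by uniform direction sampling."""
    total = 0.0
    for _ in range(samples):
        # Uniform direction on the upper hemisphere: z is uniform in [0, 1].
        z = rng.random()
        # cos(theta) against the normal (0, 0, 1) is just z; the pdf is 1 / (2*pi),
        # so each sample contributes z / pdf = z * 2 * pi.
        total += z * 2 * math.pi
    return total / samples

rng = random.Random(0)
est = cosine_integral_mc(200_000, rng)
print(est, math.pi)  # the estimate approaches pi as the sample count grows
```

No occlusion structures, no shadow maps, just more samples for less noise, which is the trade-off the post describes.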


Well, maybe... however, the texel density less than 1 m in front of the camera is much higher than at 20 m. The falloff ideally works with the inverse square law or something similar, but in practice the renderer just swaps to a different mip map.

So if the rays are cast per texel, then the shadowing calculation is heavier close up and lighter further back. What I am saying is that Nvidia might have measured performance with 16k shadow maps from multiple light sources and found they get more bang per buck from raycasting.
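The mip swap mentioned above can be sketched with a toy level-of-detail rule (a real GPU derives the mip from screen-space UV derivatives; using camera distance here is a simplification, and all the numbers are made up): each doubling of distance roughly halves texel density, so the mip level grows with log2 of the distance, and per-texel ray counts would fall off the same way.

```python
import math

def mip_level(distance, base_distance=1.0, max_mip=10):
    """Toy LOD rule: every doubling of distance bumps the mip level by one."""
    if distance <= base_distance:
        return 0  # full-resolution texels right in front of the camera
    return min(max_mip, int(math.log2(distance / base_distance)))

print(mip_level(1.0))   # 0 - finest mip, most texels to shade/shadow
print(mip_level(20.0))  # 4 - coarser mip, far fewer texels at 20 m out
```

So per-texel shadow rays would naturally concentrate near the camera, which is the "more bang per buck" argument in miniature.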

And perhaps not, but they do seem to offer good FPS with raytraced/raycast shadows on high-end cards.

I don't know how they manage the scene in VRAM, but maybe they have a good way of clipping objects to the light frustum to gain faster access during rendering...

Anyway, I doubt the fork is going to keep up to date with the master branch. It's a bit of a gimmick, and something for people to use for learning the tech. I saw in one of the videos that the raycasting also switches off volumetric fog, GI, SSAO etc., so it pretty much ruins the pipeline.

Also, I doubt RT will be built into the master branch anytime soon, because there are a lot of graphics card features not built into the engine anyway, like tessellation and geometry shaders.


Yeah, it's totally a gimmick. I'm not interested in playing with it at all, but some people might be.

A new demo just dropped, based on the Team Fortress 2 map.

Looks impressive, especially around the middle of the video, where they move the light source around the building. Definitely much better than what GamesFromScratch presented…

There’s a little more information and discussion with OP on Reddit, if anyone’s interested.
https://www.reddit.com/r/godot/s/YDE7kCGIzB
