I am displacing a subdivided plane mesh with a NoiseTexture2D as a heightmap. It seems that NoiseTexture2D is limited to 8 bits of precision per channel when sampled in a shader. This creates a Minecraft-like terracing effect, but I need smooth terrain.
What’s the best way to get high-precision noise? Is there no other option than calculating noise manually in a shader?
shader_type spatial;

// Filter mode does not matter.
global uniform sampler2D height_map : hint_default_black, repeat_disable;
// When amplitude is high, terracing appears.
global uniform float height_map_amplitude;
global uniform vec3 terrain_position;
// LOD stuff (ignore this)
global uniform vec2 terrain_chunk_size;
global uniform int terrain_max_lod;
global uniform int terrain_lod_zero_radius;

// I included the LOD code since sampling texels exactly is important.
float height(vec2 world_position) {
	vec2 subdivision_size = terrain_chunk_size / float(1 << terrain_max_lod);
	vec2 texel_pos = world_position / subdivision_size;
	vec2 uv = (texel_pos + 0.5) / vec2(textureSize(height_map, 0)); // sample texel centers
	return textureLod(height_map, uv, 0.0).r * height_map_amplitude;
}

void vertex() {
	vec3 global_position = (MODEL_MATRIX * vec4(VERTEX, 1.0)).xyz;
	// LOD stuff...
	VERTEX.y = height(global_position.xz);
}

void fragment() {
	ALBEDO = vec3(0.8);
}
The heightmap code itself is not that important, since all it does is sample the heightmap texture. What matters is that each vertex samples a texel center. Linear or mipmap filtering on the heightmap would do nothing, since I never sample between texels.
I just need infinitely scrolling noise with high precision. Is there a way to do this with NoiseTexture2D or otherwise?
I don’t know what you mean by “upscaling”. The issue is that 8-bit precision isn’t good enough for tall mountains. Mipmap or linear filtering does nothing in my case.
A blur filter could work (sampling the height from surrounding pixels and averaging), but it would be 4x slower (and not deterministic when scrolling), and ideally the data itself would be precise.
It uses the least video memory when each vertex samples its corresponding texel in the texture. It’s also the most stable when using terrain LODs.
Normals are easy: they can be calculated either from a copy of the NoiseTexture2D with “as_normal_map” enabled, or with central differences on the heightmap. They are not as important here.
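For reference, the central-differences variant can reuse the height() function from the shader above. This is only a sketch; the texel step in world units (subdivision_size here) is assumed to be available:

```glsl
// Sketch: normal from central differences on the heightmap.
// 'step' is one heightmap texel in world units (assumed known).
vec3 height_normal(vec2 world_position, vec2 step) {
	float dx = height(world_position + vec2(step.x, 0.0))
	         - height(world_position - vec2(step.x, 0.0));
	float dz = height(world_position + vec2(0.0, step.y))
	         - height(world_position - vec2(0.0, step.y));
	// Slopes of the height field, flipped and combined with an up vector.
	return normalize(vec3(-dx / (2.0 * step.x), 1.0, -dz / (2.0 * step.y)));
}
```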
I am mostly interested in high-precision noise to avoid the terracing.
Why not? How would that even work? Just use some sloppy scaling factor to offset the texel grid from the vertices? It would still line up sometimes, and moiré patterns would appear.
Thanks for the suggestion, but the way the data is sampled is not the issue here. The problem is the heightmap’s precision. I will look into writing my own Perlin fBm generation.
Generation is not the problem here. It’s that NoiseTexture2D doesn’t allow an HDR/float pixel format, unlike some other built-in textures. You can still manually sample Godot’s noise into an HDR image and create an ImageTexture from it.
You’re completely right, but I don’t want to rely on filtering bad data when I need deterministic results. I did not mention this but I also need a small collision mesh under the player to line up perfectly with the terrain. Blurring the bad data would propagate issues into the rest of the project.
No, but take a look at the FastNoiseLite reference. It returns a floating-point sample in the [-1, 1] range. You can write such samples into an image with a float pixel format and create an ImageTexture from it.
Yeah, the problem lies with sampling through NoiseTexture2D. I wish there were a way to get the data from FastNoiseLite directly to the GPU, though. Thanks for helping!
You can’t feed data from the noise object directly to the GPU. Even NoiseTexture2D first samples it pixel by pixel on the CPU side; it’s just done under the hood in native code. Sampling it from a script is not a big deal, it’s a couple of lines of code. You can later delegate it to a worker thread or a GDExtension if performance becomes a problem.
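A minimal GDScript sketch of that approach, assuming Godot 4.x (the size, noise settings, and function name are illustrative):

```gdscript
# Bake FastNoiseLite into a single-channel 32-bit float image,
# then upload it as an ImageTexture.
const SIZE := 512

func build_height_texture() -> ImageTexture:
	var noise := FastNoiseLite.new()
	noise.noise_type = FastNoiseLite.TYPE_PERLIN
	noise.fractal_type = FastNoiseLite.FRACTAL_FBM

	# FORMAT_RF is one 32-bit float channel, so no 8-bit quantization.
	var img := Image.create(SIZE, SIZE, false, Image.FORMAT_RF)
	for y in SIZE:
		for x in SIZE:
			# get_noise_2d() returns a float in [-1, 1]; remap to [0, 1].
			var h := noise.get_noise_2d(x, y) * 0.5 + 0.5
			img.set_pixel(x, y, Color(h, 0.0, 0.0))
	return ImageTexture.create_from_image(img)
```

In the shader, sampling this texture’s .r channel gives the full float precision, so the terracing disappears without any filtering tricks.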
Alternatively, you could implement the generator in a compute shader. That would create the texture directly on the GPU, but I don’t think it’s worth the effort compared to sampling the built-in noise.