I’m trying to simulate diffraction in 2D. To do that I’m using a RayCast2D and an Area2D containing a CollisionPolygon2D. Ray–polygon intersection works well almost everywhere, but not near vertices. Is this due to some numerical precision issue? Or a convex decomposition one? I tried a custom build with double precision enabled, but I get the same issue. For reference, the scale factor of the viewport is around 1500 in the screenshots, so quite a big zoom factor, but nowhere near double-precision territory, I would say?
I only see a collision margin setting for one-way collision, but one-way collision is not enabled in my case.
Also, I’m now noticing that small polygons do not intersect with rays at all.
I’m new to Godot BTW, so maybe there are obvious options I’m overlooking, but I can’t find any documented limitation on shape size, or anything you’re supposed to take care of near the vertices.
The image might show a graphical glitch, because that zoom level is not the expected use case for 2D graphics. You would probably scale the content instead.
Can you use the debug output and print the results of collision tests while stepping from the correct zone into the problem zone? Something like “Ray 1: Near Vertex: Result=…”
OK, so it gets weirder and weirder.
The curve I’m drawing is a sine wave with a wavelength on the order of 10^-3, so small, but not crazy small, and definitely something 32-bit floats can handle. When I zoom in on the curve, the first thing I notice is that it feels like I’ve hit numerical precision limits, which is weird considering the scale of the curve. Plus, the shape of the curve changes depending on the zoom factor (see the screenshot showing the same curve with a scale factor difference of only 1.1). I’ve checked with Python code, and even 16-bit floats should do better than that. Is this some LOD effect?
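Here’s the kind of Python check I mean, as a sketch (sample count and the exact wavelength of 1e-3 are my own assumptions; the sine is unit amplitude):

```python
import numpy as np

# Sample a few periods of a sine with wavelength 1e-3 and compare
# reduced-precision evaluations against float64 ground truth.
wavelength = 1e-3
x = np.linspace(0.0, 10 * wavelength, 10_000)
y64 = np.sin(2 * np.pi * x / wavelength)

# float32: round the phase to float32 before taking the sine.
y32 = np.sin((2 * np.pi * x / wavelength).astype(np.float32))
# float16: round the resulting samples to half precision.
y16 = y64.astype(np.float16).astype(np.float64)

err32 = np.max(np.abs(y32 - y64))  # well below 1e-5
err16 = np.max(np.abs(y16 - y64))  # a few 1e-4 at worst
print(err32, err16)
```

Both errors are tiny relative to the unit amplitude, so the jagginess I’m seeing is far worse than anything raw float precision should produce.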
Another question related to the fact that I’m using a custom build with double precision: what is actually using double precision now? I’ve seen that Vectors are now 64-bit, for example. But are the internal computations of a RayCast2D also 64-bit, independently of graphical concerns? And is there a way, in general, to decouple the fact that my simulation is 64-bit from the display side, where, sure, there is a precision trade-off to make? (I mean, without having to re-implement all the ray-to-shape intersection calculations myself.)
It’s one side of a rectangle stored in a MeshInstance2D with a PRIMITIVE_LINE_STRIP surface: the bottom side of the hollow rectangle in the screenshot. The rectangle’s dimensions are 250x50. This bottom side of course looks flat from a distance, but has this fine sine-wave structure with a wavelength of 0.004. If I scale everything by 10 (so a 2500x500 rectangle and a sine with wavelength 0.04) and zoom in on the sine again, everything looks identical to the 1:1 scale!
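That scale invariance would actually be consistent with a float32 bottleneck somewhere, since float precision is relative rather than absolute: the gap between representable values (the ulp) grows with magnitude. A quick NumPy check using the two rectangle scales above:

```python
import numpy as np

# np.spacing gives the gap to the next representable float (the ulp).
# At coordinates around x = 250 vs x = 2500 (the two rectangle scales),
# the gap grows with the magnitude, so any artefact it causes scales
# along with the geometry.
ulp_small = np.spacing(np.float32(250.0))   # ~1.5e-5
ulp_large = np.spacing(np.float32(2500.0))  # ~2.4e-4
print(ulp_large / ulp_small)  # 16.0 (ulps come in powers of two)
```

So scaling the geometry by 10 scales the representable resolution by roughly the same factor, which would make the artefact look identical at both scales.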
I’ve exported the points I generate inside Godot and plotted them with an external tool, and the sine looks fine. Also, to recreate this kind of jagginess, I have to truncate every number to the 4th decimal. That’s what you can see in the screenshot: the blue curve is made of the points exported from Godot, the green curve is the same points truncated to the 4th decimal.
I don’t think it’s a good idea to do this type of precise simulation in a game engine. Game engines are optimized for performance, not accuracy. Collision solvers mostly work numerically rather than analytically, there are safety margins, etc.
For collisions, you’d want to set all your margins to 0 and increase the solver iteration count and other solver parameters in the project settings.
As for drawing, that concave polygonal shape gets tessellated into triangles. The tessellation algorithm may introduce its own imprecision.
True, I know that. That’s why my collision geometry is a simple rectangle, and I’m then using a trick to refine the intersection against the fine structure. But I wanted to have the correct geometry so I can show the people I’m working with what is really happening in what we’re trying to do (namely, spectroscopy for astronomy).