I might have the exact contents of the steps wrong, but you get the idea.
I came across this article today on "Trendslop". I thought this section in particular stood out as pertinent to this conversation:
The "trendslop" tendencies of LLMs are a result of biases they take on when the models are being trained, researchers noted. Because LLMs are trained on heaps of information from internet texts to social media to news, they tend to cling to the positive or negative connotations attached to certain phrases or concepts, deeming "commoditization" as outdated and negative, and "augmentation" as progressive and positive.
In other words, when prompted to provide guidance on a tricky workplace scenario, AI isn't analyzing the situation in question, it's regurgitating key phrases based on how often it encountered them while being trained on data. In the case of ChatGPT, the study noted, the bot sometimes rejected providing a binary choice, instead recommending both solutions. Research published in Nature last year found AI sycophancy isn't just unproductive; it can be harmful to science, confirming the biases of those prompting it instead of presenting users with data supported by scientific literature or other reliable, more impartial sources.
LLMs provide the most popular answer to a problem, scraped from across many platforms. That's one of the reasons I believe we are seeing the same bad habits in a lot of LLM-generated Godot code: there are lots of bad examples of how to do things out there, and they in all likelihood outweigh the correct ones.
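A concrete (made-up) illustration of what I mean: LLMs still regularly produce Godot 3 syntax in Godot 4 projects, presumably because years of old tutorials outweigh the newer material. The node setup below is hypothetical, but the API contrast is real:

```gdscript
extends Node

# Assumes a child Timer node named "Timer" (hypothetical setup).
@onready var timer: Timer = $Timer  # Godot 4 uses @onready, not "onready var"

func _ready() -> void:
    # The Godot 3 style you see all over old tutorials, which errors in Godot 4:
    # timer.connect("timeout", self, "_on_timer_timeout")

    # The Godot 4 style: signals are first-class objects.
    timer.timeout.connect(_on_timer_timeout)

func _on_timer_timeout() -> void:
    # Godot 3: yield(get_tree().create_timer(1.0), "timeout")
    # Godot 4: await replaces yield entirely.
    await get_tree().create_timer(1.0).timeout
    print("tick")
```

Both versions are "popular answers" online; the outdated one just shows up more often.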