If you hate LLMs, there's hope

I might have the exact contents of the steps wrong, but you get the idea.


I came across this article today on “Trendslop”. I thought this section in particular stood out as pertinent to this conversation:

The “trendslop” tendencies of LLMs are a result of biases they take on when the models are being trained, researchers noted. Because LLMs are trained on heaps of information from internet texts to social media to news, they tend to cling to the positive or negative connotations attached to certain phrases or concepts, deeming “commoditization” as outdated and negative, and “augmentation” as progressive and positive.

In other words, when prompted to provide guidance on a tricky workplace scenario, AI isn’t analyzing the situation in question; it’s regurgitating key phrases based on how often it encountered them while being trained on data. In the case of ChatGPT, the study noted, the bot sometimes rejected providing a binary choice, instead recommending both solutions. Research published in Nature last year found AI sycophancy isn’t just unproductive, it can be harmful to science, confirming the biases of those prompting it instead of presenting users with data supported by scientific literature or other reliable, more impartial sources.

LLMs provide the most popular answer to a problem, scraped from many platforms. That is one of the reasons I believe we are seeing the same bad habits in a lot of LLM-generated Godot code: there are plenty of bad examples of how to do things out there, and they in all likelihood outweigh the correct ones.


Irrational and proud!
