Thoughts on answering/fixing AI LLM-created garbage code?

Without AI I wouldn’t make even remotely as much progress on the development of my current game. It would literally take me 10x more time to figure out how to do this or that on my own, even though I went through tutorials and carefully perused much of the documentation.

Using AI code blindly is a bad thing. Feeding AI bite-sized requests for a function, or even just for an outline/suggestion on how to implement a certain thing, is another thing entirely (and then implementing it on your own).

The value of AI isn’t that it gives you ready-made code, although for routine tasks/functions this alone is invaluable. The true value is that it acts as the other side of a conversation (think the Socratic method), making YOU formulate questions, form hypotheses and systematize the knowledge of your own game. Your brain doesn’t care whether the other party is a real human or a fake AI one - it gets stimulated the same way in both cases.

It’s amazing how many people miss this aspect of AI technology, viewing it only as a vending machine that spits out finished solutions.

1 Like

In programming, using the Socratic method with an inanimate object is called rubber ducking.

Whether you use a rubber duck, AI, or another person - you have to understand your subject.

In this thread we are not talking about the uses of AI (there are literally half a dozen of those on the forum that are quite long). We are discussing how to approach helping (or not) people who clearly did not do what you did. In that they just started learning from AI, and kept adding code with AI until they had an overcomplicated mess and then asked for help.

2 Likes

[quote=“dragonforge-dev, post:53, topic:132109”]
In programming, using the Socratic method with an inanimate object is called [rubber ducking](https://medium.com/@katiebrouwers/why-rubber-ducking-is-one-of-your-greatest-resources-as-a-developer-99ac0ee5b70a).

Whether you use a rubber duck, AI, or another person - you have to understand your subject.
[/quote]

A rubber duck is fundamentally different from AI, because a rubber duck won’t tell you: listen, you can use such-and-such classes/props/methods for this task far more effectively than you do now. That’s new knowledge, not just the knowledge you shuffle around in your own head when you ‘talk’ to a rubber duck.

Does it make any difference whether it’s AI or a human who gives you a solution? The truth is the truth, whoever voices it.

Are you familiar with the term and how it’s used in programming? You can, in fact, rubber duck with a human who knows nothing, a human who knows something, or an LLM. There’s a reason I bothered to link an article about it.

Ah, and here we come to the problem that we are here to address in this post. People believe that LLMs, glorified word-guessing machines, cannot lie to them and always tell the truth.

They come here frustrated because the method the LLM swore would work doesn’t exist. They come because the solution the LLM told them would solve their problem is overcomplicated and uses 20 lines of code when two or three will do the same thing. They come because the LLM has given them a huge hairball of spaghetti code, and they get angry and defensive when you tell them that their code doesn’t make sense. They come because the LLM Dunning-Kruger Machine made them think that they know what they are doing, but they cannot explain what’s wrong with their code or pinpoint the problem - only say “it doesn’t work like I expect it to work.”
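
To make the “20 lines when two or three will do” complaint concrete, here is a hypothetical illustration in Python (invented for this post, not taken from any real LLM transcript): the loop-and-flag style answer a model often produces, next to the idiomatic version a reviewer would suggest.

```python
# Hypothetical example task: does a list contain any even number?

# The kind of verbose answer an LLM might hand a beginner:
def has_even_verbose(numbers):
    result = False
    index = 0
    while index < len(numbers):
        value = numbers[index]
        remainder = value % 2
        if remainder == 0:
            result = True
            break
        index = index + 1
    return result

# The same task, idiomatically, in one line:
def has_even(numbers):
    return any(n % 2 == 0 for n in numbers)
```

Both functions return the same answers; only the second one is code a maintainer wants to read.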

The truth is, LLMs do not know what truth is. They just pattern match, and they are really good at it. If the solution works, then no, it does not matter where you got it. But be careful of accusing a spicy autocomplete machine of being truthful, or lying, or hallucinating. That’s anthropomorphizing a fancy database that has a high accuracy rate for returning information.

3 Likes

I’ve been reading some more on the effects of AI on the human brain. This second article about the Dunning-Kruger effect of AI also mentions the term “cognitive offloading”: expecting the AI to do the thinking for you and come to the correct conclusion on its own with whatever limited information you give it.

I believe this is where my frustration is coming from. The posts where people treat us humans as tools for cognitive offloading.

Also, apparently an EEG study of the brain on LLMs shows that the more you rely on LLMs for cognitive offloading, the more your brain activity atrophies. And there are concerns about addiction to LLMs as well.

Fun stuff.

EDIT: Also, it looks like GitHub is pondering a related question to this post.

3 Likes

Where did I say that LLMs always tell the truth?

I’m saying that whether a statement is truthful or not doesn’t depend on who (or what) says it.

Here “truth” means code that performs a specific task reasonably well. You ask an LLM to come up with a specific function, then you test it, and it just works. And you understand that you would have spent hours writing the same or better “oh-so-human” code yourself.
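
The “test it, and it just works” step can be made concrete. A minimal sketch in Python, where `clamp` is a hypothetical stand-in for whatever function you asked the LLM for:

```python
# Hypothetical sketch: treat AI-generated code as untrusted until it passes
# tests you wrote yourself. `clamp` stands in for any function an LLM returned.
def clamp(value, low, high):
    """Constrain value to the inclusive range [low, high]."""
    return max(low, min(value, high))

# Acceptance tests written by the human, not the LLM:
assert clamp(5, 0, 10) == 5     # in range: unchanged
assert clamp(-3, 0, 10) == 0    # below range: clamped to low
assert clamp(42, 0, 10) == 10   # above range: clamped to high
```

If the tests pass, the code is usable regardless of who typed it; if they fail, you found out before your players did.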

Only the result matters. No one cares how much time and sweat you spent on your code. Your customers (players) do not care how long you spent writing it. You won’t impress them with a label saying “Proudly written by a human 100% of the time”.

This is unfortunately the issue with using AI, people don’t actually learn from the experience.

It’s good in that it might spark some interest in programming for someone who doesn’t have any experience. However, it’s the wrong way to learn.

Call me old (and I am), but the way you learn things in life is to get stuck in and try it yourself from scratch. The more you make mistakes the more you remember, the better you get.

AI gives you none of that.

5 Likes

Where did I say you said that? That’s a pretty big reduction of two paragraphs of prose.

Agreed.

Agree to disagree. I think that truth always means that something is true.

You clearly have not read this whole thread, as you seem to be arguing with me by saying things that I myself have said previously in this thread.

Again from above:

I think you would be enlightened by reading this article published two days ago: AI is causing a problem for game developers that nobody expected. The TLDR on the article is that companies are worried now about distancing themselves from anything that looks like it might be AI - redoing art even. Because even the accusation can kill a small game.

Players will try to cancel games, and have put studios out of business, for using AI. Clair Obscur: Expedition 33 lost Game of the Year awards for having some AI-generated concept art. Larian Studios (Baldur’s Gate 3) has been taking a lot of heat for saying that they experimented with AI.

3 Likes

You might not “impress” them, but the opposite will very much turn a LOT of players away from your game, and some might even openly tell others that your game is crap because it used AI in any capacity.

3 Likes

How did you learn to code?

Way back (Apple II days) I learned by getting magazines, typing in the code from the magazine and having a program/game/etc. Did I understand most of it? Not yet, but after doing this over and over I learned. Last count I’ve coded in over 30 different languages in my several decades of writing programs.

This was many years before YouTube, or even a web full of tutorials.

Now we have AI/LLMs. Personally I am not a fan, as I “grew up” learning coding and not having it handed to me. I had to learn how to debug and find errors. IDEs didn’t highlight misspellings or other syntax errors; this was all done by hand.

So, (almost) 45 years after I started, it’s a whole different landscape for developers. Sure, I can stick to my “get off my lawn” attitude and ignore others or tell them they’re learning wrong, but is that really helping? If they are starting with an AI/LLM as I started with a magazine, and can build off of that, then what’s the harm? (Ignoring the environmental costs of AI.)

Perhaps by pointing out where the issues are, gently suggesting they try things on their own to get better, and otherwise reasonably assisting, these people will continue on and develop their own skills like many of us have. You have to start somewhere, and can they be faulted for using what is currently offered and available?

1 Like

I was given textbooks and taught by family using a TRS-80. But I had a similar experience, as I started learning 40+ years ago - the original Mac and Apple IIc, mainly, for me. On the Apple IIc you had to type everything in using BASIC, and if you made a typo you had to start over, because you were typing it directly into RAM. I enjoyed hacking games more.

Not trying to fault them. Just trying to figure out how to keep things welcoming without being used for cognitive offloading, and instead actually be teaching.

I appreciate your thoughts.

2 Likes

So why don’t you write games in Assembly then? That’s a surefire way to avoid any “crap” added by numerous layers of “helpers” (including game engines like Godot, by the way).

It’s easy. Open Notepad and write:

section .data
    msg db 'Hello, World!', 0Ah
    len equ $ - msg

section .text
    global _start

and so on. I’m sure you’ll like it - and your players will praise the game and even tell others how hard you toiled for 20 years to write a simple platformer :slight_smile:

1 Like

You’re very ignorant of the current state of the world.

3 Likes

“Something is true” doesn’t make any sense unless we discuss something specific. In this CONTEXT the truth means that the code written by AI AND carefully reviewed by the programmer AND performing its task well to the point that there are no immediate bugs (using the same testing procedures that the programmer would use for their own code) is GOOD ENOUGH to be used in the game.

No, I am not a proponent of AI-generated images/visuals. And no, I’m not a proponent of blindly copying the LLM-generated code under any circumstances. But I DO think that AI is a huge boon and in fact is a smart “documentation assistant” that helps clarify a lot of things very fast which previously just required a lot of googling, asking questions etc. etc.

Clear enough?

I suppose this argy-bargy comes from an erroneous conflation of the following:

  1. What AI actually can do well
  2. What people think AI can do
  3. What people actually do with AI

These are all very different things.

AI is an incredible piece of engineering and software development, truly amazing! I personally love it and use it every day. I used to use Google every day too, but now I have very little need to ever visit that once-beloved search engine, and that in itself delights me.

However, it is not going to take all our jobs, it is not going to replace the need for coders, it is not going to take over the world, and it is not going to answer all of humanity’s needs in one go. It is not without risks, it is not cheap, and it is not something I pay for directly - at least not yet.

As for what people do with it: they create slop, destroy once-social sites with sloppy automated responses, and churn out slop wherever there might be influence to push or a penny or two to be made. They use it to peddle nonsense, to abuse systems, to cheat, to take unmerited shortcuts, to produce pornography, to damage reputations, to create fake news and to deceive. CEOs want to use it to cut wage bills and reduce their head counts - and what exactly is new there? Enshittification will ensure that call centers get worse and help screens get worse; soon AI will be embedding ads and more ads not just into its screens, but into its suggestions and output as well.

As for what that all means for game dev and this forum: the cascade of trolling, abusive and hopelessly lost vibe coders will continue to grow. Game dev will suffer like every tech industry will, bearing the weight of AI slop and the attitude of illiterate fools who ask “why don’t you just get AI to do it”.

AI is an amazing tool. So was the AK-47. Use it while you can; enshittification has already started, and the true cost of all those data centers is going to be paid one way or another. And don’t expect the greedy rich to stop being greedy or rich just because the AI grifters promise you the world.

3 Likes

It’s because, unlike you, we’re not blind to the fact that people are losing their jobs to AI slop: in gamedev, in TV and movie post-production, in editing and VFX, being offered half their previous salary to supervise the mixed AI teams and dismantle their industry and its artisans.
Saying hi to an agent costs dollars, not cents.
And let’s not forget the exploitation through enshittified data-labelling jobs, without which LLMs can’t make statistical choices.
The field of AI, and whatever is marketed as such, is technically very interesting.
I’ve been coding for 40+ years like a few others here, and the nerdy geek in me is fascinated by it all. But the citizen in me (a term that implies duties and not just rights) is worried about its impact, not just on developing economies but on the global stage.

Its consequences on a world stretched thin already are mind boggling.

Cheers, have a nice day

5 Likes

I did love me some Assembly back in the day. Your code is missing a few things, like the ability to run. Claude.ai suggests this:

section .data
	msg db 'Hello, World!', 0Ah
	len equ $ - msg

section .text
	global _start

_start:
	; Write message to stdout
	mov rax, 1          ; sys_write syscall
	mov rdi, 1          ; stdout file descriptor
	mov rsi, msg        ; pointer to message
	mov rdx, len        ; message length
	syscall

	; Exit program
	mov rax, 60         ; sys_exit syscall
	xor rdi, rdi        ; exit code 0
	syscall

It’s optimized for a modern x86-64 architecture. It also includes a _start entry point so you can actually run it. Of course, that’s a Linux implementation; a Windows one is going to be completely different.

Again, agree to disagree. I’m fine with you clarifying what you meant, but that context definition is based on the assumption that we can read your mind and therefore know what you meant by the single word “truth” in your initial post.

Well, sure. But per my previous post, which you seem to have ignored, we agree on this point. And this thread was not created to talk about that usage of AI.

So I noticed that other than a couple of questions you’ve posted, this is the only thread you’ve engaged with since you joined us last year. Sure, we disagree, and that’s ok. We try to avoid ad hominem attacks here.

I don’t want to speak for @tibaverus but his comment may be in reference to the fact that there is a lack of truth in the statement: You won’t impress them with a label “Proudly written by a human 100% of the time”. In fact, I posted an article about this in my response to your statement trying to give you some additional context. I am starting to believe you are more interested in trolling than actually engaging in discussion.

2 Likes

The real problems start when you need to assemble two dozen such functions into an architecture.

5 Likes

Upper management certainly thinks it will…

And newbies/juniors entering the field think it puts them on par with experienced developers.

I have given it a reasonable try, and only once did I ever get a chunk of code that just worked. All the rest needed substantial work, and at the end of the day I just tossed it and wrote the routines/processes/functions by hand, as it was just as quick if not quicker. The worst was when it gave me some code for one part, some code for another part, and then the core routine gave me the answer “oops, something went wrong”. That was a good laugh, and also the final straw for my even attempting to use it any more.

But management doesn’t know or experience this; they just hear/read “AI is faster” and “AI lets you do more with fewer employees”.

So you’re right, it’s not going to take ALL of our jobs, but it will take some, and arguably what is left will produce an inferior end result.

2 Likes

^^^^^^^ Yes, This ^^^^^^^

4 Likes