Demotivation from using AI

Hey,

This is my first time joining any kind of forum, and the main reason is that I think I’m using generative AI too much to help me solve programming problems: finding the best-fitting pattern, the right algorithm, and so on. It’s demotivating me from building any kind of cool project.

The reason I rely so heavily on generative AI is that I want to find the “best” or “perfect” solution right away.
It often gives me something that feels right at first but doesn’t really fit in practice.
Also, getting instant feedback on a first “not-so-good” idea feels rewarding, like a dopamine rush from feeling like I’ve found the solution.

But I know that in the long run, if I want to really learn something, I’ll have to make the effort to dig deep into whatever technology I want to master.

Still, knowing that there’s a tool that can make things ten times faster makes me think, “Why not use it, if everyone else is?”
It’s tempting, like a god that gives you seemingly correct answers, but they’re only approximately right.

So I think joining a forum can be beneficial.
I want to be able to discuss the problems I run into instead of just prompting endlessly, and actually come to conclusions on my own,
learning and developing critical thinking skills through conversation.

Do you think critical thinking will become a rare skill in the future? I feel like more and more people just trust whatever AI says.

Do you ever feel like that, like you’re down because you didn’t build the thing yourself?
Do you use AI for your daily programming problems? If so, do you recommend avoiding it entirely? Or maybe setting limits, like asking it not to give code or direct answers?

Leave your thoughts on this. Feels like some kind of AA for generative AI users :’)

2 Likes

Welcome to the forum!

This is a really great place to ask questions and get feedback. It’ll certainly be slower to get to an answer than an AI tool, but I suspect you’ll learn a lot more in the process, and the answers will more often than not be more correct. I’ve seen multiple posts where people shared AI answers that don’t even compile (no shade on those people!).

Do you use AI for your daily programming problems?

I do not, and currently have no intention of starting to use it. I have over 20 years of programming experience (not games specifically), so I don’t feel I need to use it. However, I’ll be honest: I don’t know if this is my “angry old man yells at clouds” moment and I’m just not approaching it with an open mind :person_shrugging:

But right now I don’t feel like I’m missing out or that it’ll increase my productivity in the long run. I also have ethical concerns with all these AI companies.

If so, do you recommend avoiding it entirely?

I’m not convinced it’s the best way for newer programmers to learn more and get better. At best I can see it being a tool for experienced people to double-check things or reduce some boilerplate.

Do you think critical thinking will become a rare skill in the future?

I don’t think so, I suspect the hype will eventually die down. My prediction is that the companies that relied on it heavily will start to find it impossible to maintain and improve their codebases. Hopefully meaning old farts like me will be in high demand :wink:

4 Likes

Faster isn’t better. In software development you want three things: Speed, Quality, and Cheap. Pick two. You cannot have all three. You are picking Speed and Cheap when you pick AI as a tool. Even if you get something that works, it will be difficult to maintain, larger than it needs to be, and it won’t function as well as something someone spent time making well.

Arthur C. Clarke explored that temptation when he wrote The Nine Billion Names of God in 1953. It’s a very short story about a computer repair technician of the future. It’s like two pages and worth the read.

Everyone who programs feels that at some point, when they realize what they are coding is built on what other people coded before. Then you try to make everything yourself and realize that’s not the way to go either. There is nothing wrong with LLMs, but they are much more helpful if you know more than they do. If you have an LLM do a menial task you are used to doing yourself, you can tell whether it did it right. If you try to learn from the LLM doing the task, you have no way of knowing whether it is doing it right, and if you trust it, you are creating neural pathways in your brain that say the LLM’s way of doing something is the right way.

No. It’s not good enough.

I believe most people here aren’t really using LLMs, because they don’t work well with Godot. An LLM will lie to you. It doesn’t have enough data from scraping the internet to give you good answers, and it combines answers from other languages to sound like it knows what it’s talking about. LLMs are not trained to say, “I don’t know.”

I don’t recommend using LLMs to learn anything. They are machines that recognize patterns and can create things, including text, by following patterns they have learned.

Let’s say that a dog trainer trains a dog. (Yes, this is an absurd example.) The dog was taught by the trainer to eat when food appears in its bowl by sticking its face in the bowl. It was also taught to only go to the bathroom on newspaper. You have no social skills because you’ve never interacted with people before. You get the dog and decide that, since this dog is trained, it knows how to eat and poop in polite company. You learn from it and go to your first dinner party. There you stick your face in your food during dinner. Then you go to the living room and take a crap on the newspaper on the coffee table. People are appalled. You don’t understand, though; you learned from the dog what you thought was completely appropriate behavior. What you didn’t understand is that the dog is just repeating learned behavior. It doesn’t know why it’s doing what it’s doing.

An LLM (AI) is the dog. You are pooping all over the living room because the LLM thinks that’s what you’re supposed to do. It doesn’t know any better.

9 Likes

Thanks for your response!

I will try to be as active as possible on the forum and learn as much as I can, avoiding the fast-food approach to problem-solving, haha.

I think you’re right: experienced developers will be in high demand in the coming years because of LLMs, not only because codebases will be hard to maintain, but also because there will be a shortage of people who can think independently (I think).

I think it’s a bad tool for learning. Maybe it’s a good tool if you already know what’s going on, like dragonforge-dev said.

1 Like

Isn’t this more like a pair to choose from, like Fast/Cheap and Slow/Quality? But I see what you meant.

I just read it, it’s a really good short story. From my understanding, what Arthur C. Clarke is saying is that in the future, we’ll be able to build machines that do things which defy our own understanding of nature, so applying logic or thinking for yourself may become useless once things pass beyond the realm of human knowledge.
Not sure if that’s exactly the point though
Loved the end :sparkles:

Totally agree with that.

Good analogy :joy:
Do not base your entire way of living on imitating a dog, noted
“You have no social skills because you’ve never interacted with people before.”
I think your analogy makes a good point, though: people will choose AI partly because it requires no human interaction, and that’s easier for them.

I love AI. I think it is one of the most incredible tools humans have ever built. It is absolutely amazing at so many things. It is genuinely a bit like magic. I know a lot about LLMs, but I am not a victim of the Dunning-Kruger effect: I know that my understanding is that of an enthusiastic and interested amateur at best.

My advice would be to never use code you do not understand. Here is a typical conversation with AI.

“Hey Perplexity, what is the best way to approach dealing with A and B given X and Y?”

“The best way is to do this and that… long confident explanation of why”

“But doesn’t that mean I am doing this thing, which is really bad?”

“Yes you are correct, doing that is really bad, you should do this…” Goes on to completely contradict its original answer.

I recall once asking it how I could do something like “get the relational angle to an enemy that takes into account the 0 and TAU angle differences”. It said something like:

Godot has a built-in function for that called get_relational_object_angle() that takes rotational direction into account.

Godot does not; the AI hallucinated a completely fictional method. I then asked about it, and the AI did the “Yes, you are correct…” thing and then made up a completely new fictional method.
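For the record, the non-hallucinated way to do it is only a couple of lines. Here is a minimal sketch, assuming a Node2D script (`signed_angle_to` and `enemy` are placeholder names I made up, but `wrapf()` and `Vector2.angle()` are real built-ins):

```gdscript
extends Node2D

# Signed angle from this node's current rotation to an enemy, wrapped into
# [-PI, PI) so the 0/TAU seam never produces a bogus ~360-degree turn.
func signed_angle_to(enemy: Node2D) -> float:
	var target_angle := (enemy.global_position - global_position).angle()
	return wrapf(target_angle - rotation, -PI, PI)
```

The wrap guarantees the result is the shortest turn, never more than half a revolution in either direction.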

However, I still ask AI questions, pretty much every day. But you have to realise that there are some questions AI can help with, and others it can’t, and you have to be really careful when using it. Just like a hammer: if you are not careful you will crush your fingers. And a hammer does not help you cut glass. Yes, it can break glass, but it is not much use if you are trying to etch a complex design.

Using AI to get to the right part of the docs can be great. And often the page you land on lists other methods that make you think, ooooh, I could try using that!

I would never cut and paste code from AI into my code. I would never trust AI advice about the best approach to, say, a code-structure or strategy problem. I might ask it to list common approaches, then go and read about them and see if they are actually appropriate.

The docs are generally amazing. Use them, read them, rely on them. AI is a probability-based word generator; amazing as it is, it is not a teacher, a coding expert, nor a reliable source of information. We have all seen the ‘use glue to help get your cheese to stick to pizza’ examples, or ‘doctors recommend eating at least 2-3 stones a day’. Don’t believe the AI hype, but as a tool that does what it does, it is still amazing.

I often find myself asking it to name a button for me when I don’t want the button to have ‘Quit Game’ written on it, or to give me suggestions for some text. Again, be careful with this, but it is really good at that stuff too.

5 Likes

I’ve been programming professionally for 30 years now. The thing is, no one ever chooses two. They always want all three, and then make choices that prevent that from happening. Companies invariably sacrifice quality, which is why so many companies right now (even the big ones like Microsoft, who should know better) are replacing entry-level developers with AI.

They are also creating a future problem they’re not thinking about: there will be fewer experienced developers down the line, because entry-level developers are not being trained now. It’s going to cause a skills shortage and gap in the future.

The point, I believe, is that if we aren’t careful, we will lose our own knowledge for the sake of letting computers do things for us. But yeah, I’ve always loved that story.

That’s very insightful. I wouldn’t have thought of that as a benefit, but you’re right. As people get more and more isolated and used to interacting online and through texts and posts, it would make sense to be more comfortable learning from a machine than another person.

1 Like

Yeah that’s the other thing AI does. It cannot admit it’s wrong because it is not aware.

I had not, but that’s hilarious and I’m going to file that away.

That’s a good idea. I actually came up with a game jam game name because of a song title AI came up with recently.

1 Like

As a follow-up to this, there was another short story I read in a science-fiction anthology, about a man raising his son with a robotic (clockwork) nanny at the turn of the 19th century. For years the son could not interact with or learn from people because he’d only known the robot nanny, and so he was put in an insane asylum, until a psychologist found the robot nanny and had it fixed. The son was then able to re-enter society, because they used the robot to show him how to interact with humans again.

I have been using AI to ask theoretical questions about Godot so I don’t have to bother anyone here if it was already asked many times before. :grinning_face: When it comes to actual code examples, AI has basically been 100% wrong; in every case I had to manually fix the code just to make it work. :pensive_face:
That’s really why I prefer forums with actual humans. But then, other developer forums are often overrun with “AI copy/paste” responses to questions. I don’t see a lot of that here, which is nice. :+1:

1 Like

We have rules against it and report any post that looks like an AI response.

2 Likes

I think using generative AI is a great idea; it’s just not something I have a habit for. It’s worth noting, though, that ChatGPT was trained on the internet of 2019, and a lot changes.
Coders in the old days used to copy and paste components from other demos anyway. The AI supplies an abstraction layer between the coder and the original source (if any), and even generates new ideas.

On the infrastructure scale of software maintenance, a decent LLM would be awesome for maintaining an operating system and multiple libraries with dependency relationships (the system configuration), because it’s really annoying when human error, inattention, or negligence during an update causes an integral system component, or a component of an application suite / software ecosystem, to fail.

GPT-2 was the model trained on 2019 data, but ChatGPT itself launched in late 2022, originally powered by GPT-3.5. GPT-4 was trained on data up to 2023 and was what drove ChatGPT for the past few years (though there was a 4.5 version). GPT-5, which OpenAI released 3 days ago and is now what you are using if you use ChatGPT, was trained on data up to 2025.

Also, ChatGPT is only one LLM, albeit the first to market. They all have the same problems though.

This statement makes it sound like LLMs think and reason. They do not; they are just fantastic pattern matchers. Often the new ideas they generate are hallucinations. One of the most common hallucinations LLMs suffer from when asked about Godot is making up functions that do not exist and explaining in great detail, with code examples, how to use them.

Also, since LLMs scrape the web, they often cannot tell the difference between a correct and an incorrect answer, and most of their Godot data comes from these forums. (You can tell by the format of the answers.)

I personally would not describe them as an abstraction layer. I would describe them as an obfuscation layer, especially considering the rise in posts here that start with something like: “I used an LLM to make this code. I don’t understand how it works or how to fix it. The AI can’t fix it either. Someone please help me.”

It would certainly be an interesting experiment, though not one I would take on without precautions. Most service contracts for web services have a 99.99% uptime clause; if your AI takes your server out completely for a few days over the course of a year, you’re going to owe lots of money. Services like finance and medical usually demand five nines, i.e. 99.999% uptime, which means if your service is out for more than about five minutes in a year you’re in breach of contract.
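The arithmetic behind those budgets is easy to sanity-check yourself. A throwaway sketch (the function name is just something I picked):

```gdscript
# Back-of-the-envelope downtime budget for a given uptime guarantee.
const MINUTES_PER_YEAR := 365.25 * 24.0 * 60.0  # ~525,960

func allowed_downtime_minutes(uptime_fraction: float) -> float:
	return MINUTES_PER_YEAR * (1.0 - uptime_fraction)

# allowed_downtime_minutes(0.999)   -> ~526 min (~8.8 h)  "three nines"
# allowed_downtime_minutes(0.9999)  -> ~53 min            "four nines"
# allowed_downtime_minutes(0.99999) -> ~5.3 min           "five nines"
```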

Having said that, I do think it’s going to become more common in the future, but it’s probably going to need specialized models doing that kind of maintenance, and some people will be bitten by downtimes. Still, I’d love it if AI managed my infrastructure, because I hate dealing with AWS.