Demotivation from using AI

Hey,

This is my first time joining any kind of forum, and the main reason is that I think I’m using generative AI too much to help me solve programming problems,
finding the best fitting pattern, the right algorithm, etc. And it’s demotivating me from making any kind of cool project.

The reason I rely so heavily on generative AI is that I want to find the “best” or “perfect” solution right away.
It often gives me something that feels right at first, but isn’t truly fitting in practice.
Also, getting instant feedback on a first “not-so-good” idea feels rewarding, like a dopamine rush that comes from feeling like I’ve found the solution.

But I know that in the long run, if I want to really learn something, I’ll have to make the effort to dig deep into whatever technology I want to master.

Still, knowing that there’s a tool that can make things ten times faster makes me say, “Why not use it, if everyone else is using it?”
It’s tempting, like a god that gives you seemingly correct answers, but they’re only approximately right.

So I think joining a forum can be beneficial.
I want to be able to discuss the problems I run into instead of just prompting endlessly, and actually come to conclusions on my own,
learning and developing critical thinking skills through conversation.

Do you think critical thinking will become a rare skill in the future? I feel like more and more people just trust whatever AI says.

Do you ever feel like that, like you’re down because you didn’t build the thing yourself?
Do you use AI for your daily programming problems? If so, do you recommend avoiding it entirely? Or maybe setting limits, like asking it not to give code or direct answers?

Leave your thoughts on this. Feels like some kind of AA for generative AI users :’)

3 Likes

Welcome to the forum!

This is a really great place to ask questions and get feedback. It’ll certainly be slower to get an answer than with an AI tool, but I suspect you’ll learn a lot more in the process and, more often than not, be more correct. I’ve seen multiple posts where people shared AI answers that don’t even compile - no shade on those people!

Do you use AI for your daily programming problems?

I do not, and currently have no intention of starting to use it. I have over 20 years of programming experience (not games specifically), so I don’t feel I need to use it. However, I’ll be honest, I don’t know if this is my “angry old man yells at clouds” moment and I’m just not approaching it with an open mind :person_shrugging:

But right now I don’t feel like I’m missing out or that it’ll increase my productivity in the long run. I also have ethical concerns with all these AI companies.

If so, do you recommend avoiding it entirely?

I’m not convinced it’s the best way for newer programmers to learn more and get better. At best I can see it being a tool for experienced people to double-check things or reduce some boilerplate.

Do you think critical thinking will become a rare skill in the future?

I don’t think so, I suspect the hype will eventually die down. My prediction is that the companies that relied on it heavily will start to find it impossible to maintain and improve their codebases. Hopefully meaning old farts like me will be in high demand :wink:

4 Likes

Faster isn’t better. In software development you want three things: Speed, Quality, and Cheap. Pick two. You cannot have all three. You are picking Speed and Cheap when you pick AI as a tool. Even if you get something that works it will be difficult to maintain, it will be larger than it needs to be and won’t function as well as something someone spent time making well.

Arthur C. Clarke wrote about exactly that kind of “god” in The Nine Billion Names of God, back in 1953. It’s a very short story about computer technicians of the future. It’s like two pages and worth the read.

Everyone who programs feels that at some point when they realize what they are coding is built on what other people coded before. Then you try to make everything yourself and realize that’s not the way to go either. There is nothing wrong with LLMs. But they are much more helpful if you know more than them. If you have an LLM do a menial task for you that you are used to doing yourself, you can tell if it did it right. If you try to learn from the LLM doing the task, then you have no way of knowing if it is doing it right, and if you trust it, you are creating neural pathways in your brain thinking that the LLM’s way to do something is the right way.

No. It’s not good enough.

I believe most people here aren’t really using LLMs because they don’t work well with Godot. It will lie to you. It doesn’t have enough data from scraping the internet to give you good answers, and it combines answers from other languages to make it look like it knows what it’s talking about. LLMs are not trained to say, “I don’t know.”

I don’t recommend using LLMs to learn anything. They are machines that recognize patterns and can create things - including text - following patterns that they have learned.

Let’s say that a dog trainer trains a dog. (Yes, this is an absurd example.) The dog was taught by the dog trainer to eat when food appears in its bowl by sticking its face in the bowl. It is also taught to only go to the bathroom on newspaper. You have no social skills because you’ve never interacted with people before. You get the dog and decide that, since this dog is trained, it knows how to eat and poop in polite company. You learn from it and go to your first dinner party. There you stick your face in your food during dinner. Then you go to the living room. You take a crap on the newspaper on the coffee table. People are appalled. You don’t understand, though; you learned from the dog what was completely appropriate behavior. What you didn’t understand is that the dog is just repeating learned behavior. It doesn’t know why it’s doing what it’s doing.

An LLM (AI) is the dog. You are pooping all over the living room because the LLM thinks that’s what you’re supposed to do. It doesn’t know any better.

13 Likes

Thanks for your response!

I will try to be as active as possible on the forum and learn as much as I can, avoiding the fast-food approach to problem-solving, ahah.

I think you’re right: experienced developers will be in high demand in the coming years due to LLMs, not only because the codebases will be hard to maintain, but also because there will be a shortage of people with the ability to think independently (I think).

I think it’s a bad tool for learning. Maybe it’s a good tool if you already know what’s going on, like dragonforge-dev said.

1 Like

Isn’t this more like a pair to choose from, like Fast/Cheap and Slow/Quality? But I see what you meant.

I just read it; it’s a really good short story. From my understanding, what Arthur C. Clarke is saying is that in the future, we’ll be able to build machines that do things which defy our own understanding of nature, so applying logic or thinking for yourself may become useless, beyond the realm of human knowledge.
Not sure if that’s exactly the point, though.
Loved the end :sparkles:

Totally agree with that.

Good analogy :joy:
Do not base your entire way of living on imitating a dog, noted
“You have no social skills because you’ve never interacted with people before.”
I think your analogy makes a good point, though: people will choose AI partly because it requires no human interaction, and that’s easier for them.

I love AI. I think it is one of the most incredible tools humans have ever built. It is absolutely amazing at so many things. It is genuinely a bit like magic. I know a lot about LLMs, but I am not a victim of the Dunning-Kruger effect; I know that my understanding is that of an enthusiastic and interested amateur at best.

My advice would be to never use code you do not understand. Here is a typical conversation with AI.

“Hey Perplexity, what is the best way to approach dealing with A and B given X and Y?”

“The best way is to do this and that… long confident explanation of why”

“But doesn’t that mean I am doing this thing which is really bad?”

“Yes you are correct, doing that is really bad, you should do this…” Goes on to completely contradict its original answer.

I recall once asking it how I could do something like “get the relational angle to an enemy that takes into account the 0 and TAU angle differences”. It said something like:

Godot has a built in function for that called get_relational_object_angle() that takes into account rotational direction.

Godot does not; the AI hallucinated a completely fictional method. I then asked about it, and the AI did the “Yes you are correct…” thing and then made up a completely new fictional method.
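
For what it’s worth, no special built-in is needed for that. Something like the following works - just a rough sketch, assuming the script sits on a Node2D and the enemy is another Node2D; wrapf(), Vector2.angle() and Node2D.rotation are real, the function name is mine:

```gdscript
# Rough sketch only - the function name is made up, the built-ins are real.
func signed_angle_to_enemy(enemy: Node2D) -> float:
	# Absolute angle of the vector pointing from us to the enemy.
	var angle_to_enemy := (enemy.global_position - global_position).angle()
	# Subtract our current facing and wrap into [-PI, PI] so the 0 / TAU
	# seam never produces a "turn the long way around" result.
	return wrapf(angle_to_enemy - rotation, -PI, PI)
```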

However, I still ask AI questions, pretty much every day. But you have to realise that some questions AI can help with, and others it can’t. And you have to be really careful when using it. Just like using a hammer: if you are not careful you will crush your fingers. And a hammer does not help you to cut glass. Yes, it can break glass, but it is not much use if you are trying to etch a complex design.

Using AI to get to the right part of the docs can be great. And often the page you land on lists other methods that make you think, ooooh, I could try using that!

I would never cut and paste code from AI into my code. I would never trust AI advice about the best approach to, say, a code structure or strategy problem. I might ask it to list common approaches, then I would go and read about them and see if they are actually appropriate.

The docs are generally amazing. Use them, read them, rely on them. AI is a probability-based word generator that, amazing as it is, is not a teacher, a coding expert, nor a reliable source of information. We have all seen the ‘use glue to help get your cheese to stick on pizza’ examples. Or the ‘doctors recommend eating at least 2-3 stones a day’ one. Don’t believe the AI hype, but as a tool that does what it does, it is still amazing.

I often find myself asking it to name a button for me when I don’t want the button to have ‘Quit Game’ written on it. Or to give me suggestions for some text. Again be careful with this, but it is really good at that stuff too.

6 Likes

I’ve been programming professionally for 30 years now. The thing is, no one ever chooses two. They always want all three, and then make choices that get in the way of that happening. Companies invariably sacrifice quality, which is why so many companies right now - even the big ones like Microsoft, who should know better - are replacing entry-level developers with AI.

They are also creating a future problem they’re not thinking about: there will be fewer experienced developers down the line because no one is training them as entry-level developers now. It’s going to cause a labor shortage and a skills gap in the future.

The point, I believe, is that if we aren’t careful, we will lose our own knowledge for the sake of letting computers do things for us. But yeah, I’ve always loved that story.

That’s very insightful. I wouldn’t have thought of that as a benefit, but you’re right. As people get more and more isolated and used to interacting online and through texts and posts, it would make sense to be more comfortable learning from a machine than another person.

2 Likes

Yeah that’s the other thing AI does. It cannot admit it’s wrong because it is not aware.

I had not, but that’s hilarious and I’m going to file that away.

That’s a good idea. I actually came up with a game jam game name because of a song title AI came up with recently.

1 Like

As a follow-up to this, there was another short story I read in a science fiction anthology. The story was about a man raising his son with a robotic (clockwork) nanny at the turn of the 19th century. For years the son could not interact with or learn from people because he’d only known the robot nanny, and so he was put in an insane asylum. Until a psychologist found the robot nanny and had it fixed. He was then able to re-enter society because they used the robot to show him how to interact with humans again.

1 Like

I have been using AI to ask theoretical questions about Godot so I don’t have to bother anyone here if it was already asked many times before. :grinning_face: When it comes to actual code examples, AI has basically been 100% wrong every time, and I had to manually fix every example just to make it work. :pensive_face:
That’s really why I prefer forums with actual humans, but then, other developer forums are often overrun with “AI copy/paste” responses to questions. I don’t see a lot of that here, which is nice. :+1:

1 Like

We have rules against it and report any post that looks like an AI response.

2 Likes

I think using generative AI is a great idea; it’s just not something I have a habit for. Just worth noting that ChatGPT was trained on the internet of 2019, and a lot changes.
Coders in the old days used to copy and paste components from other demos anyway. The AI supplies an abstraction layer between the coder and the original source (if any), and even generates new ideas.

On the infrastructure side of software maintenance, a decent LLM would be awesome for the job of maintaining an operating system and multiple libraries with dependency relationships - the system configuration - because it’s really annoying when human error (inattention or negligence) during an update causes an integral system component, or a component of an application suite / structure / software ecosystem, to fail.

That 2019 cutoff hasn’t been true for a while. Chat GPT launched in late 2022, originally powered by GPT-3.5. GPT-4 was trained on data up to 2023 and has been driving Chat GPT for the past few years (though there was a 4.5 version). GPT-5, which OpenAI released 3 days ago and is now what you are using if you use Chat GPT, was trained on more recent data still.

Also, Chat GPT is only one LLM and first to market. They all have the same problems though.

This statement makes it sound like LLMs think and reason. They do not. They are just fantastic pattern matchers. Often the new ideas they generate are hallucinations. One of the most common hallucinations that LLMs suffer from when asked about Godot is making up functions that do not exist and explaining in great detail - with code examples - how to use them.

Also, since LLMs scrape the web, they often cannot tell the difference between a correct and an incorrect answer - and most of the data comes from these forums. (You can tell by the format of the answers.)

I personally would not describe them as an abstraction layer. I would describe them as an obfuscation layer. Especially considering the rise in posts here that start with something like: “I used an LLM to make this code. I don’t understand how it works or how to fix it. The AI can’t fix it either. Someone please help me.”

It would certainly be an interesting experiment, though not one I would take on without precautions. Most service contracts for web services have a 99.99% uptime clause, which allows less than an hour of downtime in a year. If your AI takes your server out completely for a few days over the course of a year, you’re going to owe lots of money. Services like finance and medical usually demand five nines, i.e. 99.999% uptime, which allows only about five minutes of downtime a year before you’re in breach of contract.

Having said that, I do think it’s going to become more common in the future - but it’s probably going to need to be specialized models doing that kind of maintenance, and some people will be bitten by downtime. Still, I’d love it if AI managed my infrastructure, because I hate dealing with AWS.

3 Likes

Hi,

Thanks for clarifying about GPT 5 and the data it was trained on. There are of course open source LLMs that could be retrained for software specifically.

I would disagree with the statement that my comment makes it sound like the LLM thinks and reasons - I only said ‘the AI’ supplies an abstraction layer from the original source. To dissect this: I refer to ‘the AI’ as the collection of software systems including the LLM. Software can ‘supply’ a service, so I am not humanizing it when I say that it ‘supplies’ an abstraction layer. The term ‘abstraction layer’ is simply used here to describe some type of ‘author-hiding’ code reuse. But the term ‘abstraction layer’ is also used to describe software components that hide details - typically the implementation of an algorithm in a library that the user doesn’t need to read.
When I said it even generates new ideas, I mean that in the sense that ‘generative AI’ generates new ideas.

Now I don’t want to get into a discussion about the ethics of code copying or anything - I just happen to think AI usage is going to be very normal, perhaps even to the level that compiler use is normal. I don’t ever see purists arguing that people should not use scripting engines or compilers because it’s just not their own work.

Are you sure that Microsoft cannot manage their system configuration with an AI leveraging an LLM? Some institutions have invested in supercomputers; I can only guess that the Chat GPT the general public can interact with uses a tiny fraction of that kind of power … etc.

I use ChatGPT (free version) to get a quick start, especially when I’m stuck. It gives me a starting point. For Godot, the answer quality is ‘meh’ at best - usually very wrong :slight_smile: but again, it can be good for giving you a starting point. And then, once my project gets bigger and I have more experience and a better understanding of what I’m actually building, I usually change that suggestion.

1 Like

There are. I’ve used them. I started using GitHub Copilot the week it was released. I’ve also used LLMs specifically designed for coding and run them locally using Ollama, as well as a couple that people tried spinning up specifically for Godot.

Ok. That was my perception.

Abstraction is one of the four OOP (Object-Oriented Programming) pillars. Specifically it is, as you say, a hiding of implementation details from someone who uses an object. An abstraction layer does the same thing at a larger scale: it hides a great number of implementation details and provides something like an API (Application Programming Interface) to utilize them.
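
A trivial illustration of what I mean - a made-up GDScript class, nothing more: callers only ever see the public method, and the implementation behind it could change completely without them noticing.

```gdscript
# Made-up example class (not from any real project) to illustrate an
# abstraction layer: users call shortest_path() and never need to know
# or care how the path is actually computed.
class_name PathService

func shortest_path(from: Vector2i, to: Vector2i) -> Array[Vector2i]:
	return _a_star(from, to)  # implementation detail, hidden from callers

func _a_star(from: Vector2i, to: Vector2i) -> Array[Vector2i]:
	# The real algorithm would live here; swapping it for Dijkstra or a
	# grid flood fill would not change the interface above.
	var path: Array[Vector2i] = [from, to]
	return path
```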

If you are saying that the LLM hides how it comes up with what it spews out as an abstraction layer to what’s going on in its decision-making process, I can see what you’re getting at.

I interpreted it as you saying that the LLM is an abstraction layer between the user and the code, so that the user asking questions doesn’t have to know how the code works to use it. In which case I do not agree with you, for two reasons. First, it does not give clear - or even correct - answers. Second, hiding implementation details when the goal is to learn those same implementation details seems counterproductive to me.

Especially as people keep coming here asking how the AI came up with code when the answer is that no one knows. LLMs are pattern matchers. They are fed data and told: these match. Using its own internal sorting system, an LLM then looks at new data and, using a statistical model, decides how to categorize it. When you get to something the size of GPT or any other model out there, they are making their own decisions based on statistical math formulas, and humans are not checking every assumption they make. This is why I said it’s more of an obfuscation layer than an abstraction layer.

I didn’t bring that up - you did. So I’m just going to leave it there.

I didn’t say anything about Microsoft. I had no idea you were talking about Microsoft before. In fact what I said was that I believe it will happen in the industry in the future and I look forward to that.


I have worked at very large companies, and I can tell you even mid-level companies have been figuring out how to leverage LLMs since Chat GPT launched. Supercomputers are not really a thing anymore in the way you are likely thinking of them. When the PS3 came out, it had the ability to form clusters for processing. The US Air Force famously used 1,760 of them to create an actual working supercomputer. This was around 2010.

Since then cloud computing has become more and more popular. Most processing power is handled by cloud computing services where you rent space and servers are spun up and down to accommodate your purposes. It’s much cheaper to rent time for the processing power you need than own a supercomputer that is going to become obsolete very quickly.

However AI requires a different type of processing power. Specifically the processing power provided by GPUs (Graphics Processing Unit), not CPUs (Central Processing Unit). Typical cloud servers are made up of CPUs and RAM. They do not have any GPUs. The reason NVIDIA shot up in the stock market is because they create GPUs and large companies that do AI are buying everything they can produce as fast as they can produce it.

So we are currently in a technology race, where AI companies want as much processing power as possible that they own so they can beat their competitors. Every time someone makes a request of Chat GPT, OpenAI is running it on a massive array of GPUs they have - which yes could be considered a modern supercomputer analog.

Your last statement is uninformed, I assume, as you said you can only guess. Using LLMs is actually very expensive. A big story in April was that people saying “Please” and “Thank you” to Chat GPT last year cost OpenAI tens of millions of dollars. In June, OpenAI’s CEO posted that every Chat GPT query uses about 1/15th of a teaspoon of water for cooling alone. That ramps up to roughly 85,000 gallons of water per billion queries - about what 1,000 homes use in a day - and there are billions of queries a day. Regardless of the ecological consequences, a company running an AI GPU farm must have access to and pay for that water.

That’s not to say that companies like Google, Microsoft and Meta cannot afford it. They can. But OpenAI estimated that 10% of the world’s population used Chat GPT last year. It is not an insignificant amount of usage.

Finally, while an LLM can eventually do the job, I think you will find that some tasks will be cheaper with human labor than AI labor. Especially as LLMs continue to put white-collar workers out of jobs, unemployment rises, and labor costs go down. Who knows, though? We will see what happens.

3 Likes

Yes, that’s what I meant: in this particular context, the source that was used to train it, or the references that contributed to the eventual answer.

I have never heard the term “spinning up”, but I haven’t been actively trying to build, retrain, or even use LLMs. I had a Jetson Nano in 2019, before (I heard of) the big LLM boom, and even did a couple of deep learning courses using Python. The concept of spinning one up for Godot might work a bit like “transfer learning” on a pretrained model, or training from scratch with no prior training sets.

Not the intended meaning, however, perhaps they don’t. That would normally be a possible state for people using libraries, modules and plugins, or even copying and pasting optimized math.

I see, yes you are right, they are very mysterious. I like the 3Blue1Brown YouTube videos about the internals of LLMs for reference.

Ok, I mentioned Microsoft because I thought someone in a post above mentioned that Microsoft has fired a large number of employees, with AI or LLM usage given as the reason. A tool of that type could also be applied to Linux distros or Apple computers. With the constant riddles of configuring Linux to run different types of software, I can imagine an AI grinding through dependency checks would be very helpful. I wouldn’t be surprised if Microsoft did have a decent cloud supercomputing cluster.

I have to admit I sent one of those “thank you” messages, and it was even after ChatGPT made a mistake. I would think they could train ChatGPT on user interactions, and the lack of a negative response to ChatGPT making a mistake would also cost them some training time. Perhaps people would log on and deliberately train ChatGPT to do the opposite of “positive”.

The big chip companies are making the processors much more energy efficient. They probably use AI to design their chips. But there would probably be an evolutionary process … design, build, test, compare … until chips become optimal enough for the production line.

One point we haven’t covered that I wanted to mention is that using LLMs for coding isn’t the end of the story … they could do well inside a looping process that even runs and tests the code before it’s ready for the user. With more compute resources (and time) the AI could be writing perfect software. I’ve seen loads of research articles appear on my newsfeed about different pipelines that involve feeding the LLM output back into the LLM for review.

1 Like

Hey dude, I feel you honestly.
I’m a beginner too and in the past I used to rely a lot on AI and tutorials.

Do you use AI for your daily programming problems?

Not anymore. In the past, on my first projects, I relied HEAVILY on AI and tutorials, to the point where games were almost writing themselves in the worst way possible.

The reason I rely so heavily on generative AI is that I want to find the “best” or “perfect” solution right away.

And that’s literally the opposite of what you’ll get by using AI. AI is trained on snippets of data from all over the internet and will almost every time give you a very unoptimized response, or worse, a response that doesn’t work at all and makes your code even messier, because AI is not a living being; it can’t see and understand your VISION of how the game should look and behave.
Only you can.

That does not mean that you shouldn’t use it at all. I’m gonna be honest: in my latest projects there were one or two times where I opened Copilot, showed it some parts of my code, and asked it for a solution to a bug I couldn’t find, and sometimes it can spit out a halfway sensible response.
AI is only good at coding when it has to do something simple and established. If you ask it to code something that prints “hello” in GDScript or Java, it’ll do it; if you ask it for a Flappy Bird clone in Python, it’ll do it in 8 seconds and 200 lines of code, because those are simple, established things that have been done thousands of times. But if you ask it to build your full, complex vision of a game, it won’t be able to; it’ll never understand what you actually want.

Do you think critical thinking will become a rare skill in the future?

I don’t think so, and AI itself is a reason why, at least for the near future, this won’t happen.
As time passes I think (and hope) people will evolve to not trust whatever a machine says without even questioning it. We’re getting more and more regulations on AI, and I’m sure that soon there will be proper lessons in schools teaching about AI and how to use it properly (they are already happening, but they’re not mandatory; I’ve been to some).
The whole point of schools is (or should be) to teach young minds critical thinking and the skills they’ll need in the future, and if they really want to do that, then they’re probably going to have to teach about AI, which really requires critical thinking so we don’t fall victim to just letting a machine think for us.

And in my opinion AI will never be perfect, there’s too much data to be processed and it keeps growing, humanity keeps evolving and its needs keep changing.

Do you ever feel like that, like you’re down because you didn’t build the thing yourself?

Yeah, sometimes I thought, “come on, this is not something I did on my own, I asked a machine to tell me what to do,” but with time that feeling disappeared as I relied on AI less and less.

If so, do you recommend avoiding it entirely?

As you may have already understood from what I wrote, no.
If you really are in need of an answer it’s not bad to ask, but what I advise you to do is to analyse the code the AI wrote and try to understand what it is telling you to put in your project; you can even ask it to explain the code so you can get a grasp on it and see whether it understood your vision.

But asking the forum is always better; patience will most likely be rewarded (and you probably won’t have to wait long, there are always a lot of people lurking around here).
This leads me to my next point:

So I think joining a forum can be beneficial.

Here you’re absolutely right, because Godot is different from other engines: Godot is more the people than the engine itself. Because it’s an open-source engine, everyone can work on it to make it better, and it’s completely documented; every node and every premade function is written in the docs, many people know how to use them properly, and if you ask here they will probably try to help you understand.

But if you really cannot wait… check the docs yourself!
Here’s the link

My last piece of advice for you is that tutorials are another highly useful resource, but only if you use them properly.
It depends on which tutorials you use, so you don’t get stuck in tutorial hell.

I’d strongly advise NOT watching something like “How to make Pokémon Sun and Moon in Godot” - that is, tutorials that explain step by step how to make a whole game - as you’d mostly only learn how to use the various nodes and scripts in those specific cases. (Maybe you could watch a “how to make your first game” video for your first project, but it seems like you already have some slight experience with the engine.)

I’d instead advise you to watch things related to widely used specific elements.
For example, on my latest project I used a tutorial on how to make a proper pause menu to understand how that could work, and used it to make a menu that is both a pause menu and a sort of “in-level shop”.
And when I need to, I can go back to my old project and see how I coded it, or I can try to rebuild it from what I remember now that I have a basic idea of how things like that work, or I can even go back to the tutorial and rewatch it to refresh my memory.
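
Just to show how small the core of a pause menu really is, here’s a rough sketch (Godot 4; the actual menu layout and node setup are up to you, only process_mode and get_tree().paused really matter):

```gdscript
# Rough sketch of a pause menu's core logic (Godot 4). The layout is up
# to you; the important parts are process_mode and get_tree().paused.
extends Control

func _ready() -> void:
	# Keep this menu processing input while everything else is paused.
	process_mode = Node.PROCESS_MODE_WHEN_PAUSED
	visible = false

func _unhandled_input(event: InputEvent) -> void:
	if event.is_action_pressed("ui_cancel"):
		var now_paused := not get_tree().paused
		get_tree().paused = now_paused
		visible = now_paused
```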

An EXAMPLE of the type of tutorial that I think you should watch is:

This is a tutorial that tries to teach viewers about control nodes and how they work, and as you see it tries to teach about specific nodes and how they can be used, not a specific case in which they are used.

Well, that was all I could tell you, in my opinion.
Sorry to be late to the party, but I wanted to give my 2 cents on the situation: I saw that you were getting responses from many experts, and I thought that having answers from both more experienced users and other newbies could be helpful.

Also sorry for my English, it’s not my native language so I might have made some mistakes.

I wish you luck with your future projects!

3 Likes

After working on 3D content for a massive open-world game, I realized I can honestly say I’d be happy if I could use script / geometry nodes or even AI to accomplish the same task. The work of a real artist is usually preferred to procedurally generated stuff, however … I wasn’t trained as an artist anyway … these days I’m more like an indie project manager and game designer.

1 Like

I just found a helpful AI workflow for roguelike devs or anyone making a large quantity of items for a game: Enter the descriptions of every item you have into NotebookLM (a free Google service) and use it as ‘training data’ to have the model output new ideas. Most of them will be garbage, but if you generate lots, it does get the creative juices flowing. With just 30 unique items in my game, I was able to quickly create 4 new ones with AI’s ideas. Keep in mind not to just use the AI output, but to use your own creativity and think of it as being inspired by your own existing work.

Here’s the link if anyone wants to try it:

For my source of data, I pasted all my item info into a Google Doc.

3 Likes