Opinions on using AI for scripting

I agree. But we live in a world with governments that don’t do anything about it. And we elect those people. That’s why we are in this situation.

I also started to use it for a function that I knew I couldn’t figure out by myself, and it was helpful for me. But I know how to use it, what to expect, and how to check the ‘AI’ code. So it worked for me. A lot of ‘beginners’ probably won’t have the same experience.
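
To make “checking the ‘AI’ code” concrete: one low-effort habit is to run the suggested function against a few values you can verify by hand before trusting it. A minimal sketch in GDScript 4.x (the wrap_angle helper and the test values are hypothetical, just for illustration):

```gdscript
extends Node

# Hypothetical AI-suggested helper: wraps an angle into the range [-PI, PI].
func wrap_angle(angle: float) -> float:
    return wrapf(angle, -PI, PI)

# Quick sanity check before trusting the suggestion: run once from _ready()
# in a throwaway scene and compare against values you can work out by hand.
func _ready() -> void:
    assert(is_equal_approx(wrap_angle(0.0), 0.0))
    # 3*PI should come back as +PI or -PI depending on boundary handling.
    assert(absf(absf(wrap_angle(3.0 * PI)) - PI) < 0.0001)
    assert(absf(absf(wrap_angle(-3.0 * PI)) - PI) < 0.0001)
    print("wrap_angle checks passed")
```

It isn’t a real test suite, but it catches the kind of confidently wrong answer an LLM can produce before it lands in your project.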

I also know that it’s ethically dubious at best. But no one is perfect, and we all do our best to judge situations and sometimes be selfish if we can ‘justify’ it.

For me, the line is AI-generated art. I’m pretty sure I’ll never use it unless I know the tool is really ethical.

2 Likes

I see that you are upset about someone stealing something, but the beginning of that conversation must have happened somewhere else, because I don’t see it discussed here, so it is hard to make complete sense of what you are saying. Perhaps you could first describe what you mean by “this whole mess”?

1 Like

To be fair: it has been discussed a lot on this forum, and if you did not find any topic, this might boil down to you just not looking, really. If I search for “using ai”, there is plenty to read, friend.

While a lot of the usual pro and con arguments have been repeated here, I still find two points that are important to me missing:

  1. The whole LLM market is economically unsustainable and filled with speculation and shady cross-financing. This is, historically speaking, a very bad sign for a proclaimed “new, revolutionizing technology”. Especially telling is the disconnect between the prices offered to the “customer” (namely you and anybody currently using any LLM to help you work) and the actual costs (manufacturing the hardware needed for the calculations, building server infrastructure, water, power, economic overhead like running actual companies, etc.). Anything we pay right now is not in any way the actual cost of producing and running any of this. This is an old-school market battle, and as soon as the big server/cloud centers are built and only two companies are left standing on the field, prices will probably skyrocket. We are in a weird situation right now, where the actual numbers are in the dark, because the country with all the major players is “having a weird moment” and obstructing any actual access to simple questions like “how much water do you waste to cool your server infrastructure?” and “how much power do you use per month in total?”. Power ain’t free, right? Which leads me to my second point:
  2. Resources are limited, and some things will only get more expensive and more scarce. No matter where you stand on the energy transition to renewables, I guess we can agree on two things: cheap energy would be great, and a lot of it will be needed in the future. Even the very kind estimates for OpenAI alone come to 14.5 MILLION kWh in a year, or as much as any one of 117 smaller countries uses per year. And those are estimates from before the whole rush of the past 1.5 years, with their old GPT-4 model. It’s hard to tell what the rest of our great technological leaders are doing, but judging by the fact that every damn search engine now has generated summaries and such, I guess we are draining a lot of power.
    And for the most part, we are burning a lot of stuff to get that energy.

The numbers are so big, it’s hard to wrap my head around them.
For me, it’s really not about principle or the moral high ground. It’s about weighing cost and benefit against each other, and compared to other technological breakthroughs of the past 150 years, this whole LLM thing is really not delivering anything for me. I’d rather see all that money invested into renewable infrastructure, batteries everywhere, solar parks, wind and water turbines, and future-proof electrical grids, with the goal of free electricity for everybody.

6 Likes

So share the links here; why be mean, as if I offended you personally? Are you sure the “using ai” topics you saw aren’t discussing NPC AI?

1 Like

Not intended - you did not offend me or anybody, obviously. Though you are wasting a bit of people’s time by not using the search function. But, and again, I don’t want to attack you, maybe you’ve been using LLMs too much lately :wink:

Look here:
https://forum.godotengine.org/search?q=using%20ai

As a new user, I can’t post multiple links here, but go ahead; I bet you will find at least 3 topics where this question is discussed at length.

3 Likes

Sure. I’m not going to prove anything. Just gauging how much ability to reason there is behind your opinion is fine by me.

1 Like

Sure, let me help you with that:

and

You’re trolling, right? :slight_smile:

2 Likes

Why do you want “Opinions” on this topic if you shut down every single opinion that doesn’t align with yours with “I understand you’re just mad”?

6 Likes

The opinions I asked for are there for anyone to read, and there are no gunshot wounds on them.
Then, it is also apparent that some people here are irrationally upset about AI and project that onto me for asking. If you speak nonsense out of emotion rather than reason, I will point that out and feel no guilt about it. You are welcome to do the same. Assuming our goal here is to learn from one another, that is how we should proceed.

None of those examples are LLMs, which, despite the topic not spelling it out, is what we are talking about here, as this discussion actually started in another thread.

AI is a broad term, coined in 1955 as the name of a field of study. In November 2022 it very quickly entered the global lexicon as a noun with the release of ChatGPT. None of those features are technically AI-powered, though.

Not really true currently, but an interesting point, because at some point we will end up using the products of LLMs anyway, such as a plugin that someone else makes for Godot using AI and that we then use.

That’s a valid thought. That’s why we have copyright laws, in fact. But sadly those laws are being challenged and in some cases upended in the courts. AI companies have gotten away with a LOT more than I expected two years ago. I thought the case against Midjourney was going to put the kibosh on AI-generated artwork, and here we are with Sora 2 in the news and people complaining in the news, but still no real legislation around AI.

Sadly, the money goes deep and there’s only so much we can control. It’s like recycling. I separate out all my recyclables, but I know for a fact that in the last three cities I’ve lived in, it all goes to the dump, and I don’t have the power to change that.

Yup.

I used to agree strongly with you, but I’ve started to change my view. I was thinking about this last night. It is a tool that happens to be new, and it is a big disruption - like the sewing machine was in the 1800s. Back then there were labor strikes by people losing their jobs. The companies didn’t care and moved forward.

As long as companies are driving this technology, it isn’t going to stop. So I’ve decided to embrace even AI-generated art so I can understand it. Because I’m a programmer and I want to continue to be employed. Even if I’m not working on or using AI directly in my work yet, I want to stay on the cutting edge. Plus, it’s a lot easier to reject a tool in a professional setting when you’ve used it and can speak to its financial and time drawbacks.

But I’ve also been thinking about the ethics of not using AI-generated art, and the truth is that I feel my past rejection of it was performative. (Not that I’m accusing anyone else of that, to be clear.) I wasn’t using it because I am an artist who draws and paints, and a musician who plays guitar and sings, and I wanted to support, in solidarity, artist friends who make a living with their art skills (which is REALLY hard, btw). But it turns out that my boycotting AI use didn’t affect the market at all. Instead, I fell behind.

Then I found Suno and I got to make music that I really enjoyed. The tool itself may go away because it gets too expensive to use - but the technology isn’t going away.

True. I’m waiting for the AI bubble to burst. But I don’t think it’s going to go away - it’s just going to be inaccessible to the non-rich, non-corporate entities of the world.

Again, agreed. But like recycling, I feel like my participation or lack of it doesn’t affect the corporate decision makers who can actually affect this. The market drives this, not whether or not we use it.

In the 1800s you were betraying people by using a sewing machine. That’s not the case now.

7 Likes

I think we can sum up what has been said so far. From this and other topics, I can identify the following opinions:

Pros:

  • In many cases replaces the “traditional” sources such as search or forum discussions
  • Saves time overall (even considering incorrect answers)
  • Explains well the things it does know (those widely discussed and well described in training texts)
  • Helps with examples, general principles, patterns, and concepts (when you know WHAT to do but not HOW)
  • Helps with complex mathematics (opinion about ChatGPT)
  • Helps with “game plans” (opinion about ChatGPT)
  • Assists with simple everyday tasks
  • Helps with Godot and C# (opinion about Perplexity)
  • Doesn’t mock you for asking stupid questions
  • An essential skill for the future, on par with reading documentation and searching

Cons:

  • Copying ready-made code has little educational value (is this actually specific to AI?)
  • Copying ready-made code comes with an unclear license (is this actually specific to AI?)
  • Can’t always distinguish between Godot 3 and Godot 4 (see the sketch after this list)
  • Can confidently lie/hallucinate
  • Does not provide the context available from other sources, such as the date of a discussion, the versions of the products discussed, etc.
  • Struggles with math
  • Wastes more time than it saves
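
To illustrate the Godot 3 vs Godot 4 point from the list above: answers often mix the two APIs. The contrast below is only a sketch of a few well-known differences (node names are placeholders):

```gdscript
extends Node2D

# Godot 3.x style, the kind of code that still gets mixed into Godot 4 answers:
#   onready var sprite = $Sprite
#   yield(get_tree().create_timer(1.0), "timeout")
#   button.connect("pressed", self, "_on_button_pressed")

# Godot 4.x equivalents:
@onready var sprite: Sprite2D = $Sprite2D

func _hide_briefly() -> void:
    sprite.hide()
    await get_tree().create_timer(1.0).timeout
    sprite.show()

func _wire_up(button: Button) -> void:
    button.pressed.connect(_on_button_pressed)

func _on_button_pressed() -> void:
    print("pressed")
```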

Doubts have also been expressed regarding the economic and environmental sustainability of AI in the future; this is not ignored, but it seems to have no bearing on its present day-to-day availability or usability.

Earlier related topics:
Demotivation using AI - General - Godot Forum
Is it bad to use CHATGPT to help me learn specific situations in Godot? - Help - Godot Forum
Ai is taking over coding lol - General - Godot Forum

And external links
Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity - METR
The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity - Apple Machine Learning Research
stitionai/devika: Devika is now Opcode

3 Likes

@rpahut It remains to be seen whether it’s a time saver; that’s just an opinion so far, and the few studies there are on the topic point to the contrary as far as experienced software engineers are concerned.

Civilized debate is good!
Cheers!

3 Likes

Of course, all of those are opinions. The study linked here is way too thin - to do what? To make us quit using AI? I am not sure what its significance is here, really.

1 Like

“AI” is a massive drain on our society, and using it is a crutch that prevents you from becoming a better developer. Using it in general is a shortcut around actually thinking, which prevents you from becoming a smarter person. Large language models have their place: translation is probably the most notable, as it’s what a neural network is best suited to. Using them to avoid learning a skill you actually need is a mistake.

6 Likes

This is a commonly-cited excuse for using “AI,” but “AI” is not actually comparable to the invention of the sewing machine or the computer. Also, apropos of the topic, the Luddites were not anti-technology but simply a labor movement seeking to secure a better living for themselves in a time when companies were using automation to crush workers. They weren’t against sewing machines. They destroyed them only as a form of industrial sabotage.

LLMs do not perform a job better and faster than a human, like the sewing machine does with sewing or the computer does with computing. They do a statistical approximation of the job. Their entire purpose is to be “like a human”, so they are not better than a human, and never will be – we still don’t even understand how the human brain works, after all, and possibly never will. I am a programmer and will never use LLMs in my career, because programming is not just about generating code, and LLMs will never be able to do what I do.

You were never betraying anyone by using a sewing machine (especially in your own home, wtf?), only by laying off your skilled tailors en masse to replace them with cheaper ones using sewing machines. This is a misunderstanding of history. AI is “like the sewing machine” only in that it is used by corporations to try to drive down wages. But it’s not successful in this, because in truth, compared to a skilled human, it sucks.

Early computers and early sewing machines were imperfect and improved, but they were never worse than a human at their respective tasks. This is another common neural network talking point: “it will continue to get better.” But it won’t, much like transistors in an integrated circuit will not continue to get smaller forever. Real life has physical limitations, and LLMs fundamentally have no understanding of what they are doing and so will never be able to validate their work. The models are improving mainly by having things which aren’t neural networks bolted on top of them: e.g. replacing their shoddy math with an actual program to compute the answer for them.

If you enjoy AI-generated slop, that’s your prerogative. I disagree with the notion that anyone is falling behind by not using it. Feel free to check in with me in a decade if you think I’m wrong. I suspect I’ll still be employed.

As a postface, I played around with GPT2 when it was new, novel, and relatively unknown, like many people. So I am not ignorant of the technology or how it works. I simply quickly realized its limitations and moved on before the mass marketing-induced hysteria gripped society.

8 Likes

Very much agree with all of this. From what I can tell, the people who use and advocate the most for LLMs in programming are the ones who are just starting out and have never had to build a complicated system with lots of moving parts.

Just because the code an LLM gave you “works” doesn’t mean it’s good code. But sadly, because LLMs write everything out with confidence, lots of people will believe that it’s the correct solution or answer.
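
As a concrete (hypothetical) example of “works” vs. good: both versions below do the job, but the first re-resolves a hard-coded node path and polls every frame, while the second keeps an exported path and reacts to a signal. The node names and the died signal are made up for this sketch:

```gdscript
extends Node2D

# "Works", but fragile: hard-coded path, re-fetched and polled every frame.
#   func _process(_delta):
#       var player = get_node("../../World/Player")
#       if player.health <= 0:
#           queue_free()

# Sturdier: resolve the node once from an exported path and react to a signal.
@export var player_path: NodePath

func _ready() -> void:
    var player := get_node(player_path)
    player.connect("died", _on_player_died)

func _on_player_died() -> void:
    queue_free()
```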

7 Likes

Agreed. LLMs are very good at hacking your brain, because they are so good at generating intelligible text. Human beings used to use all these strategies to influence people too, but LLMs are dangerous because they don’t understand what they’re doing. It’s automated conning.

6 Likes

@Ethos I think you would appreciate reading some of those other threads I linked above, because in the past I said much of what you’ve just said.

You actually make some very good counterarguments to what I posted about the sewing machines, and I concede your points about the Luddites. As I tried to communicate in this thread, I’m still working out my thoughts on it.

I agree that LLMs suck as a programming tool. They cannot architect. They create too many bugs and too many security holes. I tested the most recent programming-tuned models, installed locally, just a few months ago, and they were AWFUL. And they cannot really understand the larger purpose of a software program/app/tool. I don’t even think they make a good research tool.

I do think there’s a difference between AI slop and things crafted using AI as a tool, however. Adobe Lightroom has been offering generative AI as a plugin for a couple of years now that learns from your own picture editing and edits pictures the way you do, so that photographers can spend less time editing their photographs, take on more clients, and make more money. Songwriters talk about how they can upload a song they’ve sung and recorded with a guitar and use generative AI to make a backing track, so they can show how it might sound with a band without having to add all that themselves, because they want to sell the song and let someone else focus on producing the real thing.

There’s a balance and I think we are still finding it with entertainment like movies, music, games, etc. There is a lot of AI slop out there, but it’s like any new toy. Over time I think the AI slop will fade for various reasons.

3 Likes

All of the things you are describing are “people doing less art in order to make more money.” Surely you can understand how this is a poisoned well of an ideology? A sewing machine helps you make art. A computer helps you make art. If a neural network can only help you make money, why are you using it for art? Do you make songs so that you can make more songs, or do you make songs so that you can make good songs? “TikTok music” is a modern pejorative because the songwriters only care about making the first 8 seconds sound catchy, for example. It makes them money. It also makes crap music. This is slop. A songwriter focused on “selling the song” so “someone else [can produce] ‘the real thing’” is making slop. This is not likely to be a popular argument on a FOSS indie game engine forum. The Godot devs aren’t doing it for money.

Obviously, photographers are not super likely to care about whether or not the 500th wedding photo they produce is Art. But should their customers? Would you rather have your wedding shot by someone who cared about the craft, or someone interested in ticking your box so they can move on to the next customer? These are things people consider when they pay artists. Does using “AI” actually put you ahead in that regard? Does Adobe’s generative plugin actually edit pictures the way you do, or just sort of like how you might? If it does, why does anyone need you anymore? Anyone can download a plugin to edit their photos “like a pro.” I wouldn’t consider those people very smart, so why would I consider the pro conned into editing their photos with the same software very smart?

Statistical models can be used for many legitimate purposes. Long before “generative AI,” weather forecasting was using them because they far outclass human predictive capacity. That’s an example of a technology which is better than humans at something, which shouldn’t really rely on humans – again, “generative AI” isn’t. As has been said by many people before me, we were supposed to use automation to do all the tedious, repetitive work for us so we could focus on creative pursuits, not to do all the creative pursuits for us so we could focus on tedious, repetitive work (like validating the accuracy of its output).

“AI,” in programming and broadly, is currently a buzzword for technology that people use to avoid thinking, learning, and growing, and to mass-produce content in the hopes of tricking someone into giving you money for it. This is not healthy for our society, and it’s fundamentally detrimental to you as a user, even before we consider its other downsides. Don’t rely on crutches. Your brain was honed for this over millennia. Use it.

4 Likes

I think the Luddites are relevant here to show how people are willing to actively oppose progress when their individual interests are threatened, and to show how counterproductive and futile that is. Curiously, it is also relevant that the workers weren’t barred from being employed. They sabotaged the machines to escape having to learn new skills.

Of course there was a conflict between the interests of the companies and the workers, and it is sad that people were paid less or laid off as a result. The true reasons, however, run deeper than the greed of the companies: humanity wanted more clothes than those workers could make by hand. Humanity needed those machines.

There is in fact a more recent example of the same thing: when compilers were first introduced, people spoke against their use, stating that they wasted computing time and produced code inferior to what a human could craft by hand (if I had to guess, there was also a bit of bitterness at the thought of being replaced by a relatively simple program). Those people weren’t wrong, but we now know well that they misjudged the significance of those things. Today, we willingly and easily trade the quality of code for the ease of creating it, because this objectively enables us to build far more complex programs much quicker.

A “casualization” of the industry has been going on ever since the introduction of personal computers. It is driven by humanity’s want for more software, which is not yet satisfied. Remember, when you speak against the change, that in someone’s eyes using C++ is cheating and is not real programming. Don’t fall into the delusion that something is the “right way” simply because that is what you happened to be taught in your time. Going forward, things will become very different.

And you know that how?
I think we can be certain that human-level intelligence is possible, because it exists. It is very likely that AI can be made significantly smarter than an average human. And on top of that is the possibility of superintelligence - a tempting prize, even if uncertain.
The public AIs will of course remain simpler versions of what the companies have in their labs, but the time when humans were the exclusive providers of intellectual labor is well behind us, and it is not coming back.

1 Like