Saying ‘thank you’ isn’t (usually) done to be polite; it’s done because it costs the company money. The LLM has to reply and expend resources, and most of us don’t really care about its response at that point. Considering how shady some of them are with how they operate their ‘free trial’ models (see below), I’m not too hung up about doing it for that purpose, either.
To your prior comment, yeah, I agree. It won’t be long before they are just as bad as a normal modern search engine. Sometimes it IS just as bad. If I forget to put “free” before my request for tutorials, I invariably get a list of pay-for “tutorials” that aren’t any better than a YouTube video covering the same subject. Or, in the case of ChatGPT at least from what I’ve seen, it pads its responses to reduce how much of the free service you get to use before it tells you that you have to pay or use the “dumber” model. For what I do (again, glorified search engine), I don’t care; either model works. But I’ve noticed that the free ‘trial’ will absolutely bloat its responses to reach its response limit quicker unless I specifically ask for short, concise responses up front. Greed is going to be the downfall of us, I think.
I also agree with dragonforge… it’s not a good tutor. It’s not a tutor at all. It does the work for you and then praises you when you type it back in. If there’s an error, it’ll instantly “fix” it for you (usually with a ton of bloat you don’t need) without explaining why it’s an error. When you ask why it’s wrong, it will pull out a half dozen reasons without explaining each, and most of them will be wrong anyway.
A GOOD tutor will have you do the work, check the work, see the error, and then walk you through the problem without giving you the answer, so you can come to it on your own. That way you are actually learning the information, and the validation is real when you arrive at the correct solution.
Nothing in that article shows it’s a legally sanctioned wedding. A company holding a mock wedding doesn’t mean it’s a legally binding wedding.
Wow… I’m not sure if I regret reading this thread or not.
Anyway, here’s my 2¢ :
I’ve found LLMs good for exploring new ideas. It essentially automates all of the searching I’d do to find the information related to whatever issue I’m working on. As for using it for code, well, it’s good for “view from above” examples, but I’ve never used any code produced by an LLM as-is. There are always issues: using the wrong API version; using non-existent APIs/function calls; implementing patterns that “smell”; NOT UNDERSTANDING THE PROBLEM!
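To make the “non-existent API” failure mode concrete, here is a minimal sketch in Python. The method name `sort_ascending` is purely hypothetical, standing in for the kind of plausible-sounding call an LLM might invent; the point is that a quick `hasattr` check (or just reading the docs) catches it before it reaches your codebase.

```python
# Minimal sketch: sanity-checking an LLM-suggested method before trusting it.
# "sort_ascending" is a hypothetical hallucinated name used only for illustration.
items = [3, 1, 2]

suggested = "sort_ascending"   # what the LLM claimed exists
real = "sort"                  # what the standard library actually provides

print(hasattr(items, suggested))  # False: the suggested method is a hallucination
print(hasattr(items, real))       # True: the real API call exists

items.sort()   # in-place ascending sort, the genuine API
print(items)   # [1, 2, 3]
```

It’s a trivial check, but it mirrors what reviewing generated code against the real documentation does at a larger scale.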
What the chat sessions are good for is getting a general sense of the problem (for myself, that is - the AI doesn’t truly “know”), plus some general (if often incorrect) examples of how to solve it.
I’ve rewritten every bit of code that an AI has provided!
It seems the majority of the replies are not in favour of using AI for scripting. Let me preface by saying I’ve been using AI to learn and build for a while now, and while all mentioned flaws are legit, it’s not as bad as it’s being portrayed here in my experience. More on that later.
Because I want to start with another question: what, or when, are we comparing against? For example, a few replies above it is stated that LLMs lie (or well, not intentionally, but let’s ignore semantics for a bit), but before LLMs we also had tutors or peers giving you wrong information, teaching you the wrong things, wrong or faulty info on StackOverflow, etc. It’s not like we came from a perfect situation that then got worse due to the rise of LLM popularity. Information that sometimes turns out wrong is more accessible, and therefore you’d probably see more issues, yes.
But that leads me to my second point; I also have a feeling that people aren’t prompting correctly or setting the tools up to give the right results. In my experience (using Claude), with skills and agents set up (so that it validates its code against the docs, etc.), it’s actually able to produce pretty good results. Yes, I still review generated code, and no, it’s not always perfect. But let’s be honest, neither was StackOverflow (or any other resource) before LLMs were a thing.
Again, I don’t want to downplay the aforementioned flaws, as the “You’re absolutely correct” is getting old really quickly, but I also want to shine some positive light on the use of LLMs, as when used correctly, they can significantly speed up your workflow and learning.
The problem with this is that people will completely ignore documentation and will eventually be unable to even read it.
Imo those who learn using AI will be completely stuck in place and will be unable to make any system on their own. It’s the same thing as tutorial hell, where people learn exclusively by watching tutorials and just blindly following them. Sure, they might get some results, but tell them to make X without a tutorial and they completely fall apart. Look at some of the posts on this forum by beginners who wanted to do something slightly different that didn’t have a tutorial, and they don’t even know how to get started.
But “tutorial hell” is not the fault of the tutorials themselves, but of how some people use them: just following blindly, not experimenting a bit along the way, or not even trying to understand why to do this or not do that. There are very few people who are passionately against tutorials, despite the fact that each thing brought up here about gen AI can be said about tutorials as well. Most tutorials can be described as outdated, created by an enthusiast with limited knowledge, way too narrow or way too broad, not in-depth enough or too in-depth, etc.
I just find the whole thing a bit odd. If someone asks a bot to provide them with a level-up system and then just copy-pastes it into their game expecting it to be 10/10, the problem is not with the AI but with the user.
For example someone mentioned that most AI will not explain why something causes an error when prompted to fix it. The problem is easily solved by asking the bot to explain. Why should it be considered a problem that it doesn’t do something it was never asked to?
One of the things I think is important to consider is not just how it affects us individually now (i.e. “this is my experience with LLMs”), but how it affects our greater society. Here’s an article about how math skills in the U.S. have fallen back to 1970s levels. There are now remedial math courses for college students who enter college without middle school math skills and are in STEM (Science, Technology, Engineering, and Math) degree programs. Students have now had years to rely on Google, and now AI LLMs, for their answers. They don’t know how to solve the logic problem of getting their own answer without searching for it online.
I believe that this is a false equivalency. When you ask for information on Stack Overflow - or here on the Godot forums - you get a response from human beings who are actively trying to help you. If they provide wrong information, there are others to chime in and help, and you can get to the right solution with help from others. But they don’t usually do all the work for you - even if they supply code.
LLMs give you full code that they purport to work. It’s like watching them hand you a scrambled Rubik’s cube and tell you that no, in fact, it is completely solved. Unless you challenge them. It’s literally like talking to a know-it-all with an ego who thinks they’re always right, knows nothing about the subject, but remembers really well what other people say about it, so it can sound smart.
So back to the first point. Students in college now are already relying heavily on LLMs for their knowledge, and it is measurably worse. As people rely on LLMs for programming more and more, their skills are going to get worse, because they won’t know how to solve problems. So just because it works for someone on this thread doesn’t mean it will work in general for everyone. That’s what’s known as anecdotal evidence. (“It worked for me!”)
Yes, LLMs can be utilized as a tool by people with problem-solving skills. They can also be utilized as tools by developers who know when to call it on its BS. But learning from scratch using an LLM, as @tibaverus mentioned, is a recipe for disaster. And I don’t think we are going to stop the trend.
What the experienced programmers on here have been trying to say (myself included) is that we don’t think using LLMs to help you code in Godot is going to help you in the long run if you don’t already know how to code. We could be wrong.
Part of a previous job I had was monitoring code quality. I caught developers cut-n-pasting directly from StackOverflow - and the code they pasted was the snippet that had the problem!
The underlying problem wasn’t so much getting wrong information, but a developer not even having a clue that the information was wrong (even when it was flagged as incorrect at the source). In this sense, LLMs make it worse. The output is dynamically generated (there’s no base to search or compare against), it “lies”, and it has no agency, and therefore no real accountability.
I’ve never seen that before, but I’ve seen stuff like it. It’s why I HATE coding tests in interviews. They don’t tell you anything other than people can Google.
I gotta say, I think that article is worrisome from a copyright standpoint, because I think it’s a harbinger of the death of copyright. Those cases aren’t faring well, and if you can only get damages by proving they stole your stuff, that’s going to become harder and harder to prove. Then that’s going to crawl into the legal arguments around images, video, and music. At that point, using AI will become the actual safer legal defense. It’s probably a decade or two off, but I see it going that way.
I just realized the culture, in general, could start to shame people for not understanding the obvious… people will have to privately go and ask the AI about anything they don’t understand to maintain normality.
Help files could disappear, manuals, documentation, everything can and might change!
Perhaps AI is one of the scourges and plagues caused by the industrial age: Greenhouse gases, AI, Teflon, Microplastics, Nuclear weapons, wildlife mass extinction.
Perhaps it’s a slap back at the first-world corporate suck machine, or another chapter in the next bible.
Yeah, absolutely… it’s very sad when important books or manuals go missing. For example, engineering manuals for things like the washing machines a plumber is supposed to fix… it smacks of those 1970s Bond villains; I am imagining one with a plan to replace, and optionally subvert, the primary information source. I mean, even Wikipedia is curated with traceable edits - who’s paying for that, anyway?
I just came across this article today from GitHub’s official blog and decided to share it with this thread, if that’s alright, as I find it relevant to our proposed use case of LLMs for software engineering in the context of video game development, particularly with respect to Godot Engine:
In short, the article cautions us to use AI as a tool, not as a replacement: to still read and adhere to software documentation, and to tweak the final outputs of generated code.