I am kind of surprised to learn that, to this day, not everybody is using AI as an aid in scripting, and some even seem to be against it. To me, it has become an important tool that I use almost daily, but it’s interesting to hear other opinions. There doesn’t appear to be a topic yet, so I suggest we exchange opinions here.
So, have you tried using AI for scripting? If not, why not? Do you think it’s good for people who are just learning? For actual work? What do you find it does well, and where does it fail?
Stealing code whose source and license you don’t know is very much an issue, yes. Even before AI was widespread, companies sued over this. And won.
I meant: are there actual people who build working projects out of generated code and so actually face the licensing issue, or is this hypothetical, as in “if they were to, that would be problematic”?
I feel like AI isn’t quite there yet to build you a project, or even a system. That is what I mean by using it as an aid.
I wouldn’t call copyright infringement a hypothetical issue. There are many documented cases of it, even back in the early days of computing. I’m not sure why getting around copyright law would be a “non-issue”.
Okay, fair point. It seems to be the consensus so far that using generated code directly is bad, and for more than one reason. To me, however, this doesn’t sound like an AI-specific issue: the same holds for copying from any source, including a tutorial, a forum, or Google. That doesn’t prevent us from consulting them or learning from them, and I don’t believe using AI for the same purpose is any different.
AI doesn’t save me any time compared to just searching for an answer. In my experience it actually costs me more time, since I need to evaluate the code to make sure it is correct and not something made up because the AI system doesn’t want to say “I don’t know”.
When I search for myself I get context. Forums have a voting system that lets me know what other people think of the answer I am looking at. The date of the post can tell me a lot about which version of the code the answer is for. I find all of that important, and I don’t get that info from an LLM.
I have no reason to work with a system that doesn’t work as well as the tools that are already available.
As for learning, my personal opinion is that AI hinders more than it helps. Spoon-fed answers do not promote understanding of the underlying concepts. Knowing how to solve a problem is often as important as knowing the answer.
I worked for an AI company back when the technology was first starting to gain commercial traction. I definitely think the technology has its place. I find this article to be a great example of what AI does really well when it is focused on a single task.
Unfortunately, it is what AI does poorly that gets sold to the public. Using code as an example: if I am trying to solve a problem, the majority of the answers an AI gives will contain an error of some sort. That is because, under the hood, the LLM is guessing what the answer should be based on what it was trained on. There is no context, no understanding, no real way for the machine to tell right from wrong. Which is why an AI will often give different answers when asked the same question.
I do not trust people who give different answers every time the same question is asked. I see no reason why I should trust a machine that does the same.
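A rough toy illustration of that last point. This has nothing to do with any real model; it just samples an answer from made-up probabilities, the way an LLM samples its next token. Ask the same question twice and you can get two different answers:

```python
import random

# Toy "model": picks an answer by sampling from fixed, made-up probabilities,
# roughly the way an LLM samples tokens. No real model or training involved.
def fake_model_answer(question, temperature=1.0):
    candidates = {
        "use a dictionary": 0.5,
        "use a list of tuples": 0.3,
        "use a set": 0.2,  # sounds plausible, probably wrong for the task
    }
    # Higher temperature flattens the weights, so unlikely answers come up
    # more often; either way the pick is random, not "known".
    weights = [p ** (1.0 / temperature) for p in candidates.values()]
    return random.choices(list(candidates), weights=weights, k=1)[0]

question = "How do I store key/value pairs?"
print(fake_model_answer(question))
print(fake_model_answer(question))  # ask again, possibly a different answer
```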
Actually this isn’t really the same, as many (non-AI) sources come with explicit licenses. For example, content posted on StackOverflow is licensed under Creative Commons (and StackOverflow was probably the main source of code snippets and help pre-AI).
I am sorry, but how does it take more time? You have to evaluate code from other sources too, and that is on top of finding it. Like, yes, what you say is true for AI, but I feel it is just as true for any other source.
You could say I use AI chat as a sort of search engine, except it formulates an answer to the specific question asked instead of returning a bunch of links for you to go through yourself. It isn’t perfect, but it is surprisingly decent and useful. Perhaps it depends on how you phrase the prompt…
And it’s not like you have to abandon Google once you’ve touched AI chat. It is one more tool on my belt.
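To make the phrasing point concrete, here is a rough sketch, assuming the OpenAI Python client (openai >= 1.0) and an API key in OPENAI_API_KEY; the model name and the two prompts are just placeholders, not recommendations. In my experience the vaguer prompt tends to get the vaguer answer back:

```python
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

vague = "my loop is broken"
specific = (
    "In Python 3.12, this for-loop prints every item except the last one: "
    "for i in range(len(items) - 1): print(items[i])  -- why?"
)

# Same tool, two phrasings of the same problem: compare the answers.
for prompt in (vague, specific):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(reply.choices[0].message.content)
    print("---")
```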
In my opinion, copyright is not an issue. Even assuming your game attracts enough of someone’s attention that they actually unscramble and dig into the code, it is unlikely to be infringing.
By their nature, LLMs do not generally regurgitate any one source they were trained on. You can make them do it as a research project, but most of their answers are going to be a mash of several sources that looked vaguely like what you’re asking for. That’s why they don’t always work.
If you have a moral issue with using LLMs, that’s your business, but you’re throwing away a tool that other people are going to use. And the more common that use becomes, the more the law will allow it.
I use my local LLMs constantly for everything from code to general questions I used to submit to a search engine. I get a lot of wrong answers, but I get more right answers, and that saves me time overall.
For reference, I’m a sexagenarian, retired programmer. I started learning to code when C was shiny and new, before the PC was a thing. During my career I noticed that people depended more and more on complex libraries rather than writing their own code. Nothing wrong with that, but it makes me wonder if using LLMs is really all that much of a sea change.
Tools are tools. You learn how to use them to get a job done. New programmers will still have to learn how to deal with situations the tools don’t cover, so they will learn exactly that.
If you rely on AI to write your game code, eventually you will want to change something, and the way AI usually writes code is the typical “it works” approach; it is not at all clean. Things will depend too heavily on other things, everything will be a massive plate of spaghetti, and the AI makes a lot of bad decisions along the way: questionable design choices, hidden coupling, unhandled edge cases, leaky abstractions, etc.
It has no concept of proper context or “thinking”. Apple recently released a paper outlining a lot of issues with current LLMs.
@gentlemanhal already linked to one of the threads where we hashed this out recently. I made most of my points at length there. So bullet points:
LLMs are best used as tools (as @duane and @that_duck mentioned) when used to give you information about subjects you already have knowledge about. If you cannot fact-check an LLM because you’re not a subject matter expert, you’re going to waste time and be forked by it.
LLMs lie. They make shit up that sounds good. We all know this.
Yes, you can make full games using vibe coding with Claude AI now. They suck, but they’re really still fricking cool. The code is a godawful mess that no professional developer would want to touch with a ten-foot pole. (I’m looking forward to charging large companies outrageous amounts of money to fix AI code in a decade or so, when their pipeline of trained developers is broken.)
I’ve been using AI for years. I learned some PyTorch before ChatGPT came out. A week after it was released we were evaluating it for work because as developers we think this kind of stuff is neat. It does certain things well. It does not do programming in less well-known languages very well.
I guess what most people fail to realize is that autocomplete, auto-indent, grammar and spell checking, and other common “features” are all AI-powered. So before people in glass houses start casting stones, they should consider how they are also using AI.
The uncomfortable truth is that unless you are coding it all yourself and correcting your own errors, you are already using AI to assist your projects.