Nowadays there are many good open-source, free AI models, such as DeepSeek, but in day-to-day use they often can't answer Godot-related questions correctly. Is there a large AI model specifically for Godot?
Thank you for your reply.
I want a large AI model trained specifically for Godot games, with a lot of Godot training data, so it performs better when answering Godot questions. Maybe the community could organize such a project, although current AI is not necessarily reliable even when it is well trained.
Most large models, such as those from OpenAI or Anthropic (Claude), do a really good job answering questions about Godot. I don't think something Godot-specific is necessary here, especially since at a certain point feeding a model more data matters less and less, and most modern models have already reached that point.
That being said, be careful with learning programming with AI.
AI is very good at stating incorrect information as fact.
Furthermore, an AI answer might look good at first glance, and might even work, but AI is often bad at picking the correct pattern to use or at writing reusable code. It may couple parts of the code that should not be coupled, and there are plenty of design trade-offs it will get wrong.
So using AI might go really well in the beginning, but as your game grows, the technical debt AI will accumulate for you will grind your development to a halt.
It’s beyond expensive to properly train an AI. Hint: you need a lot of human pattern recognition, paid by the piece, to properly tag it all. Then you need ungodly amounts of electricity (Bitcoin mining is cheap by comparison), and that’s why you won’t see a good “AI” for Godot for years unless someone finances its development.
Simple as that: money (and man-hours).
AI, like Google, is good at answering questions if you know the right things to ask. The more you know about a topic, the better answer you’re going to get. Asking it opinions is going to get you an opinion, but in programming there are often multiple acceptable answers.
If you ask how to make a controller for a CharacterBody2D, you’re going to get an answer, but it may or may not be the answer you want. When ChatGPT had just come out, and Godot was still on version 3.x, I asked it how to move a character in 8 directions. Instead of giving me the one-line appropriate answer, `var input_dir := Input.get_vector("ui_left", "ui_right", "ui_up", "ui_down")`, it used trigonometry to figure out how to move in 8 directions on a circle and gave me an 8-line answer that didn’t work even after multiple refinements. I was just learning Godot, and I didn’t know the right question to ask.
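For context, the idiomatic Godot 4 version of that 8-direction movement is only a few lines. A minimal sketch, assuming the default `ui_*` input actions and a placeholder speed constant:

```gdscript
extends CharacterBody2D

const SPEED := 300.0  # placeholder value, tune per game

func _physics_process(_delta: float) -> void:
	# get_vector returns a normalized direction from the four actions,
	# so diagonals don't move faster than straight lines.
	var input_dir := Input.get_vector("ui_left", "ui_right", "ui_up", "ui_down")
	velocity = input_dir * SPEED
	move_and_slide()
```

No trigonometry needed: `Input.get_vector` handles the normalization and deadzone for you.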
Also, an AI trained on programming is going to have inherent biases. There are plenty of discussions on here about inheritance vs. composition, for example. Knowing the difference between the two and when to use each is more important than having all your answers provided in one form or the other.
What I hope is that AI will gain a better understanding of Godot’s features. The following AI answer is a typical misconception: Godot does not support using an AnimatedTexture as a mouse cursor, and that class may be deprecated.
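For reference, the supported way to set a custom cursor uses a static texture, not an AnimatedTexture. A minimal sketch; the resource path is a placeholder:

```gdscript
extends Node

func _ready() -> void:
	# Custom cursors must be a static image resource such as Texture2D,
	# not an AnimatedTexture. "res://cursor.png" is a hypothetical path.
	var cursor: Texture2D = load("res://cursor.png")
	Input.set_custom_mouse_cursor(cursor, Input.CURSOR_ARROW, Vector2(0, 0))
```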
But if I just press Ctrl+A, copy the entire documentation page, and ask again, the answer is much better. (This is a way to learn a class quickly, but you still need to verify the AI’s answers.)
So I wanted an AI trained specifically for Godot. But as mentioned in the replies, it takes money, the results may not be as good as imagined, and the most important thing is our own skill level. These replies are wonderful!
Over the years, I have definitely felt the progress of AI. Perhaps in the near future, we can also have better AI services!
Does anyone know if it’s even possible to use an AI assistant to analyze a whole project’s code and all its scene nodes? I’m thinking like, you could ask it questions in plain English such as “find the code/node that activates the specific door” or “code/node… that makes the character shoot a type of ammo.” Can current AIs actually “see” the entire codebase and tell you exactly where specific functionalities are?
AI is based on pattern recognition; context is not included in that. That makes it horrible at answering specific questions correctly, especially given its tendency to make stuff up.
Frankly, grep is a better choice for analyzing relationships in a codebase. Under the hood, all Godot files are plain text. You can search them without having to upload your code to an outside system and losing all rights to your work.
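As a concrete example of the grep approach (the directory and the search term are hypothetical stand-ins for a real project and a real function name):

```shell
# Set up a throwaway example "project" with one script, for demonstration only.
mkdir -p /tmp/grep_demo
printf 'func open_door():\n\tpass\n' > /tmp/grep_demo/door.gd

# Recursively search every GDScript and scene file for the door logic,
# printing file names and line numbers.
grep -rn --include='*.gd' --include='*.tscn' 'open_door' /tmp/grep_demo
```

This answers questions like “where is the door activated?” instantly and entirely offline.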
You can right-click any object in code, select Lookup Symbol and find out where it comes from.
You can right-click any item in FileSystem, select View Owners… and see where in your project it is being used.
You can press Ctrl + F and search for anything in the script you’re working on.
You can press Ctrl + Shift + F and search for anything in the entire project’s scripts.
LLM (Large Language Model) AI cannot do any of that. LLMs don’t think like us. They live in a world of probabilities. That’s why they “hallucinate”. To an LLM, if it’s probably true, it is true. They cannot make that distinction.
You could train an LLM to use all the features of Godot, and sooner or later get pretty good results out of it for what you want. But you’d have to be a pioneering AI trainer to do it.
Not to get too philosophical, but that’s exactly how living things evolve, behave, and think, yet some people still give certain things zero credence. Anyway…
Turns out, there are only a handful of AI assistants commercially available that claim to analyze entire codebases and answer questions or make suggestions based on that. Honestly, if one just works without a hassle, that’s all I really care about.
My bad, I didn’t know that before posting here – I should have Googled it better. What I’d really like to see, though, is one that runs locally, for, well… obvious reasons.
A computer programmed to do math will always give you the same answer. An LLM will not, because its data includes jokes that 1 + 1 = 3. This is simplified, but that’s why LLMs get worse at math the more complicated it gets. That’s what I mean by their living in a world of probabilities.
Also, some of the latest scientific discoveries of the brain have some scientists positing that our brains are biological quantum computers running in multiple dimensions at once. LLMs aren’t going to be comparable until they’re running on quantum computers too. Then we might have an interesting philosophical discussion.
I’ve been working all day on a Godot project. 15 hours since my last reply. I’ve been googling stuff all day. Google’s AI has been summarizing wrong answers all day. Sometimes it tries in GDScript with functions that don’t exist, sometimes it uses C# features from other languages that don’t exist in C#. Sometimes, I admit, it is helpful to at least help me refine my search.
I’ve tried multiple LLMs, both online and locally, over the past 3 years now, some specifically tuned for development. The last time was about a month ago. You’re better off using a linter to check your code; at least it has been specifically programmed to do what you want. Sadly, there are no Godot linters I’m aware of.
Sure, they’ll get better. But there’s lots of wiggle room. How many levels deep will AI nest code before it decides to refactor into functions? Right now, if you look at the code that vibe-coding creates, you’ll see that it’s pretty bad. It works, but underneath it’s like watching a script kiddie hack something together. No art.