So one of the people I’m doing game programming work for is using an LLM to add new features while I code the basics. He submits a pull request and I look over what he’s done. This time, he decided to run his plans through the LLM, have me critique them, then run my critique back through the LLM. And so all day yesterday we went back and forth. His goal was just to make sure the path the LLM led him down was OK with me, so that his work would be worth his time.
It has been an interesting experience. While he’s using the LLM to learn, he’s also watching videos. And he’s solving very specific systems, breaking them down into pieces (because I’m breaking the pieces down for him in meetings). So while the code hasn’t been perfect, starting from crap code that works is better than coding it all from scratch.
So yesterday, the LLM was giving him advice on “how to talk to me”, and he was just copying and pasting the output to me. I didn’t know my first response was going into the LLM, and it said something about not using Strings. So I had to lay out my architecture a little more clearly. That was frustrating at first, but then I realized that because of how my client was using it, I was getting to do a code preview before the code was written.
The LLM kept “pointing out” potential problems. I would explain why they weren’t problems and were already factored into the larger architecture. Then it started trying to future-proof my code, coming up with all sorts of “common problems” that didn’t impact us. Then it went farther afield and started raising potential problems for things we’d never discussed implementing.
This was because he had asked it to be critical and try to find any issues we might encounter. When it ran out of legit issues, it didn’t say “looks good” like a human would; it kept doing what it was asked: looking for problems.
This is a great example of why LLMs aren’t thinking machines and why they won’t take over the world like in Terminator. An LLM doesn’t know when to stop. It’s always going to give you what you ask it for, and nothing more. If anything, LLMs are a lot more like the robots that eat the world in Horizon Zero Dawn. (Which, if you haven’t played it, is a game with fantastic mechanics balanced well with story and unique leveling.)
Ultimately, it was like talking to a really annoying junior programmer who has lots of book knowledge but no practical knowledge, and who wants to keep talking instead of actually doing the work.
