@gentlemanhal already linked to one of the threads where we hashed this out recently. I made most of my points at length there. So bullet points:
- LLMs work best as tools (as @duane and @that_duck mentioned) for getting information about subjects you already have knowledge of. If you can't fact-check an LLM because you're not a subject matter expert, you're going to waste time and get forked by it.
- LLMs lie. They make shit up that sounds good. We all know this.
- Yes, you can make full games using vibe coding with Claude AI now. They suck, but they're still really fricking cool. The code is a godawful mess that no professional developer would want to touch with a ten-foot pole. (I'm looking forward to charging large companies outrageous amounts of money to fix AI code in a decade or so when their pipeline of trained developers is broken.)
- A recent study showed that experienced developers took about 20% longer to code using LLMs, because the AI-generated code had something like 10 times the number of security holes and they had to go back and fix the bugs. It's like asking a junior developer to do your work and then spending more time checking it than it would have taken to do it yourself.
- I've been using AI for years. I learned some PyTorch before ChatGPT came out, and a week after it was released we were evaluating it for work, because as developers we think this kind of stuff is neat. It does certain things well. It does not handle programming in less well-known languages very well.
Some other recent discussions on this topic here: