Study Finds Learning via ChatGPT Leads to Shallower Knowledge

I came across this article this morning; it was published yesterday: Learning via ChatGPT leads to shallower knowledge than using Google search, study finds. It takes a look at 7 studies with over 10,000 participants in total. They compared people researching a topic using ChatGPT vs people using Google searches, then had those people write advice on those topics based on what they had learned. Other people (who did not know which tool had been used) then read the advice and decided whether it was helpful to them. (The topics of advice were things like gardening.)

Every few weeks we see a post on here with someone asking or talking about whether AI LLMs like ChatGPT can be used to help someone learn Godot specifically, and programming in general. This seems to be further evidence that while you can, you won't know the material as well, or be able to take it as far, as fast.

15 Likes

Important to note here that this is specifically about using these tools to tell you about things. Asking them to do the thing for you will probably be worse, since you don't really learn at all when someone else, or something else, does the task for you. And unlike with a human teacher or a tutorial, the machine is far more likely to be wrong, and you generally lack the ability to correct that (compared with adding a comment on a tutorial, or asking your teacher to clarify).

11 Likes

Perfect timing.
I just asked ChatGPT how to achieve something like Unity's AnimatorOverrideController in Godot:

It just works.

Edit:
ChatGPT has been useful for explaining certain gamedev/programming-related concepts to me, like dependency injection for example. It often saves time when the docs aren't very clear about things or you want to know something specific. And sometimes for fixing logic when calculating things.
But for creating whole systems it never turned out to be very useful.
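
Since dependency injection came up: here is a minimal, hypothetical sketch of the idea, in Python rather than GDScript purely for illustration (all class names are made up). The point is that a class receives its collaborator from outside instead of constructing it itself, so the collaborator can be swapped, e.g. for a test double:

```python
class FileSaver:
    """The 'real' dependency, e.g. one that writes to disk."""
    def save(self, data: str) -> str:
        return f"wrote {len(data)} bytes to disk"

class InMemorySaver:
    """A stand-in used for tests -- no disk access needed."""
    def __init__(self):
        self.saved = []
    def save(self, data: str) -> str:
        self.saved.append(data)
        return "stored in memory"

class GameState:
    # The saver is injected via the constructor, not hard-coded inside.
    def __init__(self, saver):
        self.saver = saver
    def save_game(self, data: str) -> str:
        return self.saver.save(data)

# Production wiring vs. test wiring: only the injected object changes.
prod = GameState(FileSaver())
test = GameState(InMemorySaver())
print(prod.save_game("hello"))  # wrote 5 bytes to disk
print(test.save_game("hello"))  # stored in memory
```

The same shape works in GDScript: pass the dependency into `_init()` or a setter instead of instantiating it inside the class.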

2 Likes

So the question then is can you now explain dependency injection to others?

5 Likes

Great thing. When I was in a game jam, I thought I should try git for the first time, to store my game files in a good, secure spot. So I installed git and then… I had no idea how to use it. So I went to ChatGPT and it helped me do it, like it's very easy.

Now, a few weeks later, I have no idea how to use that same program I used 6 times during that jam… That's a real example, guys.

And don't you worry, I now know how to use git.

3 Likes

The study highlights a key limitation of relying on ChatGPT for learning: it tends to produce shallower understanding compared to traditional Google searches. This likely stems from ChatGPT providing concise, synthesized answers rather than encouraging deeper exploration or critical thinking. While it’s a useful tool for quick insights, real mastery—especially in complex fields like programming or Godot development—still demands active research, hands-on practice, and engaging with diverse sources. Using ChatGPT as a supplement rather than a primary learning method is probably the best approach.

1 Like

To add to what others have said here, I'd say that this is not all that "new", in the sense that reading summaries and the like will always yield shallower knowledge: the knowledge is "handed" to you, and you're not challenged to understand it. There's also no guarantee that you understand the deeper implications behind the shallow description, or that you're challenged to think more deeply about it, or that it sparks interest to go read more.

What I think these services have done is that they’ve provided two things:

  • The ability to get these summaries for pretty much any topic you can think of, where before you'd have to be lucky and find an article or online summary etc., or find someone to ask and summarize for you (or ask in a forum etc.)
  • Their instant, on-demand availability, which makes some people entirely stop doing actual research or seeking out knowledge on topics, trusting the summary they find and not going deeper

It fits very much into the culture of instant gratification and fast, simple, digestible answers that we have with modern social media and the like.

Ironically, people warned the same would happen with the internet itself: that it would stop people from picking up books and actually learning. But unlike that mindset, which ignored that the internet actually contains a lot of useful information, this one does risk harming people's genuine curiosity, and especially their ability to find information themselves (which, to be fair, has been lacking for a long time, partly due to schools not teaching how to look things up in libraries or on the internet).

5 Likes

That’s a great question! :sweat_smile:

1 Like

I think our brains are inherently lazy and will gladly optimize out of doing the hard work of learning at any opportunity – it’s literally about conserving sugar. So, yeah, be careful out there, chooms. ( =

My favorite use for LLMs is to find the lingo of some new branch of technology; that way I can do meaningful searches and have a ghost of a chance of learning something for myself. It’s like that old joke:

A monad is just a monoid in the category of endofunctors, what’s the problem?

– via https://stackoverflow.com/questions/3870088/a-monad-is-just-a-monoid-in-the-category-of-endofunctors-whats-the-problem

When you’re first learning about something, you don’t have the vocabulary to construct a meaningful question. Lots of documentation is written with the intermediate user in mind (users tend to become intermediate and stay there, so that’s not a bad decision); however, beginners need something else. And, y’know, it can be embarrassing to make mistakes in front of a bunch of Internet strangers. Enter the LLM: a private chat session with something that can get you started and give you some answers. I think the important step in the process is to then go dig for answers yourself, repattern your brain with the knowledge and the ability to find more.

2 Likes

Well that was a rabbit hole I wasn’t expecting today.

That can be a useful use case, i.e. to improve your Google-fu. But keep in mind that it can and will make up terms, functions and approaches for Godot in particular that have no basis in reality. Which is why I consistently warn people new to Godot and programming to avoid them.

2 Likes

Yeah, that’s super pernicious. I tried out opencode on a refactor in one of my game projects, literally changing damage from an int to a more structured RefCounted (more complicated kinds of damage with resistances, etc) and it handled that pretty well. But I made the decision as a beginner Godot-er to not use AI in my projects to make sure that I learn what I’m doing and dig into those docs.

2 Likes

Would I know more if I did traditional Google searches and read documentation and postings loosely related to a problem and then solved the problem? Almost by definition I would have to know more to do that, but it would probably take a lot of time. When using AI, I can get things done quickly without learning them. If learning is my goal, then I slow down and ask the AI to explain the code. This is usually at least a little bit necessary anyway to get something complex working, as the AI code is rarely 100% on target. In the end, it’s a tool and you can get from it what you want to get.

Studies they have done with people who researched via Google vs LLMs show that the people using Google retained more information, and were able to provide summaries of what they learned - whereas those using LLMs did not retain enough information even to write a useful summary of the information they learned.

So, yes you would know more if you did traditional Google searches. The result of having to dig for information and comprehend what you don’t know and fill the gaps would reinforce your neural pathways. (Repetition increases retention of information by 20-30%.)

This is true.

That is the seductive promise of LLMs, but it is provably not true, as they do not provide accurate or consistent results.

2 Likes

It’s also important to be aware that most sources of information you’d use come with their own ways of verifying the quality of the information, be it the source itself (is it a page by a reliable publisher?) or the feedback on the information (i.e. the replies and reactions on an online forum).

The AI summary feature offers no indication of how reliable the information is, and in particular doesn’t provide any way for others to share, clarify, or expand on it.

So even if we’re talking about finding quick-and-dirty answers, as opposed to gaining real knowledge, the “traditional” ways are far superior. With the exception of some scams, you won’t get a forum reply suggesting something completely incorrect, like a command that deletes your entire hard drive, without it getting responses or even being hidden as malicious. But an AI summary can easily just tell you to run rm -rf /* (just to be 110% safe: please do not run this command, you can never be too sure about people reading what is written), and an inexperienced user would have no clue whether that’s valid or not.

Additionally, Google’s assist feature has been shown to provide scam information in some cases, for example people searching for products or businesses being directed to fake websites or contact details for an imposter page or a phishing scam etc.

1 Like

This is kind of the inverse of what I already said, but people using LLMs are jumping straight to the answer. It’s not a surprise that they’re not learning as much. Learning usually isn’t the point unless you intentionally use the LLM to learn (asking follow up questions and such). People using Google to get a quick answer aren’t trying to learn either, it’s just happening accidentally.

I’d say that’s not true; a lot of people do it to get actual answers, not to solve specific problems like how to connect their phone to the computer or something.

To be clear, “learn” here isn’t like “study”; it’s just getting facts, understanding topics, and getting answers, as opposed to solving individual issues.

1 Like

yeah i saw that on a video called “chatgpt makes you dumber” or something like that

So this is different from, but related to, the earlier study (which was small, and I don’t know if it’s been replicated yet) that showed that people who use LLMs to write for them tend to become worse at writing: their writing becomes more simplistic, derivative, and repetitive.

And it all comes down to the same issue: all skills need regular practice to keep up your level. If you stop practicing a skill like writing, or specific knowledge, it fades. You won’t lose it completely (usually), but you won’t be as good at it if you don’t practice it regularly.

The same goes with knowledge, and add to that the fact that learning is a skill too

So while using AI might not make you less intelligent, it does replace actually doing things yourself. It's a bit like driving: driving everywhere doesn't necessarily hurt your health, but walking places when you're able, especially short distances you don't need to drive, helps keep you healthy (unless you're disabled or have other reasons why doing so would actually hurt your health instead).

We also have studies showing that keeping your mind active and challenged helps reduce the risk of age-related issues like dementia, so there’s a risk that reliance on AI will put people at higher risk of that in the future. And while it can’t cure depression, keeping your brain active and being creative can be a way of dealing with mental health issues, and losing creative and expressive things to do can absolutely be bad for your mental health.

2 Likes