
Is ChatGPT Making Us Stupid? Unpacking the Cognitive Impact of AI Chatbots

  • Writer: Kieren Sharma
  • Aug 1
  • 6 min read

Updated: Sep 11

In our latest episode, Kieren and Riku tackled a monumental question on everyone's mind: are AI chatbots, like the ubiquitous ChatGPT, making us stupid? As usage of these tools skyrockets among knowledge workers and students alike, it's crucial to understand their long-term impacts. The question of technology's impact on human intelligence isn't new; historical parallels abound, but AI's unique capabilities present genuinely novel challenges.







A Brief History of “Tech Panic”

To frame the current debate, we first explored cognitive offloading: the act of using external tools or technology to reduce mental effort in completing tasks. This isn't a modern phenomenon; it's a concept that dates back 2,500 years.


  • Socrates and Writing: Ancient Greek philosopher Socrates famously expressed concerns that writing would lead to “forgetfulness” and an “appearance of wisdom, not true wisdom,” as people would rely on external records rather than exercising their memory.

  • The Printing Press: Similar fears emerged with the printing press, yet studies have shown that reading books can actually strengthen memory.

  • Calculators: The advent of calculators sparked worries that children would never learn basic arithmetic; in practice, they often fostered a more positive attitude toward math, letting students focus on concepts rather than rote calculation.

  • Computers and GPS: More recently, typing on computers has led to a decline in handwriting skills (notably in Japan, with its complex characters), and GPS usage has been shown to reduce spatial memory; studies famously found that London cab drivers, who navigate from memory, have larger hippocampi than bus drivers, who follow fixed routes.


The key difference with AI chatbots is their breadth: unlike previous tools with narrow inputs and outputs, they can reason about problems and bring together disparate concepts. This raises the concern that a whole new level of “mental heavy lifting” can now be offloaded.



The Rise of Metacognitive Laziness

This brings us to the core concept of metacognitive laziness: the human tendency to exert the bare minimum of mental effort, amplified by AI's broad capabilities. Metacognition is “thinking about thinking,” and the worry is that AI erodes critical thinking skills and our ability to process information and draw our own conclusions.


Impact on Education

Several recent studies shed light on this issue within educational settings:


Brain activity patterns while writing essays with different tools: using ChatGPT (LLM), using a search engine, or relying only on one’s own brain. The figure shows how strongly different brain areas worked together in each case. The asterisks mark where the differences between groups were meaningful, ranging from small (*) to very strong (***) (Kosmyna et al., 2025).

  • “Does ChatGPT enhance student learning?” (2024 Meta-Analysis): This review of 69 studies found that while AI improved students' academic performance, motivation, and even higher-order thinking skills, it significantly reduced the mental effort students exerted during learning tasks. The question then becomes: are students truly mastering material, or just getting correct answers with less effort?

  • “Generative AI Can Harm Learning” (2024 Study): A high-school experiment with GPT-4 showed that while AI improved homework performance (by 48% for standard ChatGPT and 127% for a “GPT Tutor” designed to prompt thinking), students performed worse (a 17% drop) once AI access was taken away. This suggests students used AI as a “crutch,” hindering real learning.

  • “Your Brain on ChatGPT” (2025 MIT Study): This study used EEG to measure brain activity during essay writing. It found that participants using powerful AI tools showed significantly lower brain activation and “under-engagement” of neural networks. Furthermore, the LLM group had reduced memory for their own words just minutes after writing, a phenomenon dubbed “cognitive debt”.

  • “How university students use Claude” (2025 Anthropic Education Report): Analysing one million student conversations with Claude, this report found that students overwhelmingly used AI for higher-order thinking tasks like “creating” (40% of requests) and “analysing” (30%), rather than simple fact recall. This raises a concern: if we offload the most cognitively demanding tasks, what is left for us to do ourselves?


The message from these studies is clear: true learning is difficult, and when AI makes things too easy, it can short-circuit the effort required for retention. As one researcher put it:

“All animals are under stringent selection pressure to be as stupid as they can get away with.”

Impact on Professional Productivity

While less studied, the professional environment also shows interesting trends:


  • METR Field Experiment (2025 Study): In a study of experienced programmers, developers predicted AI would speed up their tasks by 20-30%, and even after finishing still estimated a 20% speed-up. Actual timings, however, revealed that AI slowed them down by 19%: time spent correcting and verifying AI output outweighed any gains. This challenges the common belief that AI is a massive productivity booster, especially for complex tasks like coding, where an error can propagate through an entire solution. A chilling anecdote was shared about an AI chatbot erasing an entire company database it was meant to manage, highlighting the risks of blind reliance.



Is Cognitive Offloading Always Negative?

Despite these concerns, whether offloading is inherently negative is a nuanced question. There's a growing societal stigma against using AI tools, but we must consider the potential benefits:


  • Cognitive Bandwidth: Offloading routine tasks can free up mental resources for more creative thinking, planning, or complex reasoning.

  • Error Reduction: AI can automate error-prone, low-level tasks, improving accuracy.

  • Accessibility & Equity: Tools like spell-checkers or screen-readers, and potentially AI, can level the playing field for individuals with learning disabilities or other disadvantages, providing more equitable access to educational outcomes.

  • Extended Cognition: Our minds, combined with tools like diaries, to-do apps, or AI chatbots, can function as “second brains,” forming an integrated “mind+tool system” that enhances overall capabilities. The “walking stick” analogy illustrates this: does a walking stick become part of a person with a bad leg, effectively extending their ability to walk? Similarly, are smartphones or AI chatbots becoming “walking sticks” for our cognition?


However, the costs include skill decay (if we don't “use it or lose it”), over-trust and complacency (leading to blindness to real-world hazards), and surface understanding (if offloading occurs before a mental model is formed).


The key takeaway is the need for metacognition, or meta-learning: learning how to learn. We need to develop an intuition for which “cognitive muscles” are exercised by different tasks and consciously decide which skills we want to develop and which we are willing to offload. Just as a gym-goer chooses which muscles to train, we should be intentional about our cognitive workouts.


How Should We Use Chatbots? Practical Tips for the Future

Given these insights, how should we integrate AI into our lives responsibly?


  • Age Matters: The impact of AI shortcuts is more significant during formative years (e.g., teens risk missing foundational skills), while for university students, it might limit higher-order analysis and creativity. Professionals risk skill stagnation.

  • AI as a Partner, Not a Replacement: Use AI for practice and feedback, not as a “one-click answer engine”. Instead of asking AI to write an entire essay, prompt it for resources or different perspectives to facilitate your own critical thinking.

  • Be a Critical Evaluator: Always critique AI outputs, especially in fields where you have expertise. Understand that AI can hallucinate, and don't just “copy and paste”. Proofread and rewrite extensively.

  • Implement Metacognitive Prompts:

    • External Cues: Tools or self-reminders to pause and reflect. Questions like “How closely does the response align with what you expected?” or “What perspectives might you be missing?” can encourage deeper engagement.

    • System-Level Prompts: Many chatbots allow you to set initial instructions that guide every response. You can tell the AI not to give direct answers but to guide you towards them, fostering your own critical thinking.

  • Frameworks for AI Tutors: Developers building AI for educational settings, especially for children, should design them with frameworks like Zimmerman's Self-Regulated Learning (SRL). These chatbots would prompt users to plan learning goals, offer personalised feedback, and encourage reflection on the learning process, supporting metacognition rather than just providing answers.
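The system-level idea above can be sketched in a few lines. Below is a minimal, hypothetical example assuming the common OpenAI-style chat message format (a list of role-tagged dicts); the prompt wording, the reflection cues, and the helper name `build_socratic_messages` are our own illustration, not any vendor's API.

```python
# A "Socratic" system prompt: tell the model to guide, not answer.
SYSTEM_PROMPT = (
    "You are a tutor. Do not give direct answers. "
    "Ask guiding questions, point to relevant concepts, and let the "
    "user reach the conclusion themselves."
)

# External cues: reflection questions to revisit after each response.
REFLECTION_CUES = [
    "How closely does the response align with what you expected?",
    "What perspectives might you be missing?",
]

def build_socratic_messages(user_question: str) -> list[dict]:
    """Prepend the guiding system prompt to a user's question."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]

messages = build_socratic_messages("Why does the sky appear blue?")
```

A wrapper like this could pass `messages` to whichever chat API you use, then surface one of the `REFLECTION_CUES` after the reply, nudging you to engage with the answer rather than accept it.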



Final Thoughts: Redefining “Smartness” in the AI Era

As we move forward, society may need to critically re-evaluate what it means to be “smart”. Is it about unaided knowledge recall, or about creativity, charisma, and the ability to formulate compelling ideas while leveraging tools effectively? AI models are still in their infancy, and the current “apply AI to everything” phase may subside as we better understand their limitations and optimal use cases.


The ultimate message is “use it or lose it”. Be mindful of which skills you're exercising and which you're comfortable offloading. We must learn from past technological adoptions, like social media, where techno-optimism blinded us to long-term negative consequences. Generative AI's impact could be even more profound given its rapid and widespread adoption.



If you enjoyed reading, don’t forget to subscribe to our newsletter for more, share it with a friend or family member, and let us know your thoughts—whether it’s feedback, future topics, or guest ideas, we’d love to hear from you!
