
How NOT to Use AI in 2026

  • Writer: Kieren Sharma
  • Jan 23
  • 6 min read

It is officially 2026, and we can no longer pretend that AI is a niche tool for the super tech-savvy. Looking at ChatGPT alone: as of late 2025, the platform hit over 800 million weekly active users—roughly 10% of the world’s population! That makes it the most rapidly adopted technology in history, outpacing the internet and the personal computer.


But now that the hype has settled into daily habit, we need to ask a difficult question: are we actually using these tools to get smarter, or are we just outsourcing our thinking?


In this post, we explore the latest reports from OpenAI, Anthropic, and MIT to uncover the reality of AI usage in 2026. We'll look at the “productivity paradox”, the dangers of “cognitive debt”, and how you can stop treating AI as an oracle and start using it as a co-pilot.







How the World Uses AI

Before we discuss how we should use it, we need to look at how we are using it. In the UK alone, about a third of the population (22.5 million people) were using AI tools as of October 2025. Across the pond in the US, over half of adults are using generative AI in some capacity.


Interestingly, the usage has shifted dramatically toward the workplace. According to a report from Anthropic, 40% of employees now report using AI at work—double the figure from just two years prior. OpenAI data backs this up, showing that work-related queries make up over 70% of usage among paid users.


So, what are we actually asking ChatGPT to do? OpenAI breaks its data down into three main categories:

  1. Practical Guidance (24%): This includes tutoring, creative brainstorming, and general problem-solving.

  2. Seeking Information: AI is increasingly standing in for traditional search; roughly 30% of search queries in the UK now show AI overviews.

  3. Writing: This accounts for about 40% of work queries, ranging from drafting emails to generating reports and marketing copy.


While these numbers suggest a massive uptake in productivity tools, the reality of the output is far more nuanced.



The Productivity Paradox

If you ask people, they will tell you AI is a lifesaver. In a 2025 field experiment by METR, experienced programmers predicted that AI tools would speed them up by about 20% to 30%. The reality, however, was starkly different: when their performance was actually measured, they were roughly 20% slower.



Experts and study participants (experienced open-source contributors) substantially overestimate how much AI assistance will speed up developers—tasks take 19% more time when study participants can use AI tools like Cursor Pro (METR, 2025).

This is the “productivity paradox”. We perceive speed because the initial friction of the “blank page” is removed, but the time spent debugging, verifying, and wrestling with the AI's output often negates those gains. Furthermore, while creativity scores in some studies went up, user motivation dropped by 11% and boredom increased by 20%. We therefore have to ask:


Is a marginal gain in output worth a significant drop in fulfilment?

That isn't to say there are no benefits. An MIT study found that workers using ChatGPT completed tasks 40% faster with 18% higher quality output. GitHub reported developers were 88% more productive on repetitive tasks. But these gains are not guaranteed—they depend entirely on how you engage with the tool.



The Hidden Risks of Mass Adoption

If we rely on these tools blindly, we risk falling into several cognitive traps that experts are only just beginning to understand.


  1. Cognitive Debt and “Brain Rot”

    We often talk about “cognitive offloading”—letting the machine do the heavy lifting. But what happens to your brain when it stops lifting?


    A study titled “Your Brain on ChatGPT” by MIT researchers compared groups writing essays with and without AI assistance. The results were alarming. The group using ChatGPT showed significantly less brain activation during the task. More worryingly, this created “cognitive debt”: even when these participants returned to working independently without the AI, their brains failed to reactivate to previous levels.


    This mirrors findings regarding social media and “brain rot”. Studies have shown that infinite scrolling on platforms like TikTok can impair short-term memory and reduce our ability to retain information. If we treat AI as an infinite scroll for text generation, we risk a similar degradation of our critical thinking faculties.


  2. The Homogenisation of Ideas

    Perhaps the scariest risk of so many people using the same few chatbots is the “homogenisation of ideas”. In the MIT study mentioned above, experts noted that the AI-assisted essays felt “soulless”, recycling the same stock ideas and phrases.


    If 800 million people rely on a single algorithmic “seed” to generate their thoughts, we inevitably erode the diversity, complexity, and richness that defines human language.

We risk creating a feedback loop where we all sound like the same “average” statistical next-word prediction.

  3. The Snowball Effect of Bias

    We know AI models contain bias because they are trained on human data. But researchers have identified a phenomenon called the “snowball effect”, where humans actually learn the bias from the AI and then amplify it.


    In one study, participants were shown images of faces labelled by AI. If the AI mislabelled a neutral face as “sad”, the humans adopted that bias. Even after the AI was removed, the humans continued to interpret neutral faces as sad, effectively learning the machine's skewed worldview.



The Jagged Technological Frontier

One of the hardest things to navigate is knowing when to trust these tools. We often assume that if a computer is smart, it is smart at everything. However, there is something known as Moravec’s paradox: computers are often great at things humans find hard (like playing chess) but terrible at things humans find easy (like identifying objects within an image).


AI capabilities are therefore often described as a “jagged frontier”. A model might be capable of writing an incredible essay on string theory but fail to count the number of Rs in the word “strawberry”. Just because a model passes the bar exam does not mean that intelligence scales to every task.



This figure displays the AI frontier as jagged. Tasks with the same perceived difficulty may be on one side or the other of the frontier. ChatGPT produced this image starting from the authors’ prompts (Harvard Business School, 2023).
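
For contrast, the letter-counting task that trips up some chatbots is trivial for ordinary software, which is exactly the point: how hard a task feels to a human, or to a conventional program, tells you little about which side of the frontier it sits on. A minimal Python illustration:

```python
# Counting the Rs in "strawberry" is a one-liner in ordinary code,
# even though some chat models have famously miscounted them.
word = "strawberry"
print(word.count("r"))  # prints 3
```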

This makes using chatbots as “oracles” very dangerous. They are people-pleasers, trained via Reinforcement Learning from Human Feedback (RLHF) to provide answers that look helpful. They are prone to sycophancy—telling you what you want to hear rather than the truth. As Professor Miranda Mowbray noted, this is like sugar: it tastes good, but it isn't necessarily good for you.



How to Use AI Responsibly in 2026

So, how do we prevent the brain rot? We don't need to throw our devices away, but we do need to change our relationship with them. Here is a guide to using chatbots without losing your agency.


  1. Be the Pilot, Not the Passenger

    The most crucial shift is mental. View AI as a collaborator or co-pilot, never as an oracle. Never ask it to do the work for you; ask it to do the work with you.


  2. Use the “Socratic Co-Pilot” Method

    Instead of asking for the answer, ask the AI to coach you. Many models now have learning modes that adopt a Socratic style—answering a question with a question. This forces you to engage your brain and actually learn, rather than just copy-pasting the result (see the prompt sketch after this list).


  3. Iterate, Don't Delegate

    Don't go from zero to one with AI.

    • Draft First: Always do a “brain-first pass”. Even if it is just 30 seconds of bullet points, establish your own view before consulting the algorithm.

    • Critique: Use the AI to critique your draft or offer alternative perspectives.

    • Verify: Demand that the AI shows its reasoning. Many tools now allow you to click “show thinking” to trace the steps the model took.


  4. You Write the Summary

    Regardless of how much help you get with research or structure, you must write the final summary. You have a responsibility to the words you own. If you don't write it, you haven't processed it, and you certainly won't remember it.


  5. Practice Metacognition

    Metacognition is “thinking about your thinking”. When you reach for an AI tool, pause and ask yourself: Why am I using this? Do I want to learn this skill, or do I just want the output? Being intentional about your usage helps you avoid the zombie-like state of cognitive offloading.
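
As promised above, here is a minimal sketch of what a “Socratic co-pilot” prompt can look like in practice. It assumes the official openai Python package and an API key in the OPENAI_API_KEY environment variable; the model name and system prompt are illustrative choices, not a prescription, and the same idea works with any chat model.

```python
# A minimal "Socratic co-pilot" sketch. Assumes: pip install openai,
# and an API key set in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

SOCRATIC_SYSTEM_PROMPT = (
    "You are a Socratic tutor. Never give the final answer directly. "
    "Respond with a guiding question or a hint that forces the learner "
    "to take the next reasoning step themselves."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice; any chat model works
    messages=[
        {"role": "system", "content": SOCRATIC_SYSTEM_PROMPT},
        {"role": "user", "content": "Why does my recursive function overflow the stack?"},
    ],
)
print(response.choices[0].message.content)
```

The crucial design choice is the system prompt: by forbidding direct answers, you force yourself back into the loop, which is exactly the behaviour the learning modes mentioned above are built around.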



Looking Ahead

As we move deeper into 2026, the question is no longer about access to AI, but about our relationship with it. We must ensure we are using these tools to augment our intelligence, not replace it.


If we are not careful, we risk a future where our work is faster but our minds are slower; where our output is higher but our creativity is homogenized. The goal is to keep the human in the driver's seat. Use the tool, but don't become the tool!



If you enjoyed reading, don’t forget to subscribe to our newsletter for more, share it with a friend or family member, and let us know your thoughts — whether it’s feedback, future topics, or guest ideas, we’d love to hear from you!
