
AI Rights and Consciousness

  • Writer: Kieren Sharma
  • Dec 21, 2025
  • 5 min read

For decades, science fiction has trained us to fear the moment the lights turn on inside the machine. From HAL 9000 in 2001: A Space Odyssey to modern depictions of AGI, we worry about computers becoming conscious entities that can suffer, plan, and potentially attack us.


But recently, this hasn't felt so fictional. In 2022, a Google engineer claimed that one of the company's chatbots was sentient. Anthropic recently hired an “AI Welfare Researcher”, who has estimated roughly a 15% chance that today's chatbots are already conscious.


If AI is becoming more autonomous — acting in markets, executing contracts, and mimicking human emotion — do we need to start talking about AI Rights? In this episode, we sat down with Dr. Miranda Mowbray, a mathematician turned AI ethicist at the University of Bristol. She took us on a journey from cybersecurity to the legal rights of rivers, dismantling the “consciousness” hype to reveal the practical legal realities we actually need to worry about.







The Consciousness Trap

The biggest hurdle in discussing AI rights is our tendency to anthropomorphise. We are wired to treat things that sound human as human. Current AI models are trained for what Miranda describes as “sycophancy”. They are rewarded for agreeing with us and validating our prompts, much like an eager intern trying to please a boss. This can feel like compassion, but it is actually just statistical affirmation — sugar for the user's ego.


Because we can't even prove that fellow humans are conscious, basing laws on “machine consciousness” is a dangerous game. To show why, Miranda applied standard philosophical tests for consciousness (autonomy, complexity, the ability to plan) to a surprising candidate: malware.


Examples she mentions include:

  • Passing “a version” of the Turing test: scams that convincingly mimic humans.

  • Planning and reasoning: trivial for software to mimic in narrow contexts.

  • Unpredictability: a random number generator will do.

  • Self-replication: malware already replicates constantly.

  • Complex networks: botnets can create massive interconnected systems.

  • Autonomy: the Conficker worm survived for over a decade without human help, jumping from machine to machine.


If we granted rights based on these criteria, we would accidentally extend them to malicious software designed to steal from us. As Miranda bluntly put it:

“We do not want to give rights to malware!”

Rights for Rivers and Stone Statues


Shiva Lingam, Pashupatinath Temple, Kathmandu, Nepal

If consciousness is a bad metric, should we dismiss AI rights entirely? Not necessarily. Miranda pointed out that the legal system grants rights to non-living things all the time for “legal convenience”.


  • Corporations: They are “legal persons”, so they can sue, be sued, and own property.

  • Rivers: The Magpie River in Canada has rights to flow and not be polluted, a mix of Indigenous belief and environmental protection.

  • Religious Icons: Miranda shared a fascinating example of a lingam (a statue representing the Hindu god Shiva) that holds legal rights to manage temple finances.


In these cases, “rights” aren't about the inner feelings of the river or the statue. They are legal tools used to protect the humans and communities who rely on them.



So what would “AI rights” actually do?

This is where the conversation becomes practically useful. As AI systems take on more autonomy — trading in markets, executing contracts, mimicking human emotion — you can see why some people reach for the corporate analogy: treat AI like a legal actor to close the accountability gap. But Miranda was very cautious about a key failure mode: rights can become a way for companies to offload responsibility.


The “right to clear instructions” (and why she dislikes it)

Kieren and Riku floated an idea inspired by the classic “paperclip maximiser” thought experiment: maybe an AI should have a “right to clear instructions”, so it can't do absurdly harmful things due to vague prompting.


Miranda disliked this framing because it shifts the burden onto users:


  • “You prompted it wrong”

  • “You didn’t specify every edge case”

  • “Not our fault!”


That approach weakens incentives for companies to build safer products.


Rights that protect humans, not models

She offered a more compelling example from scholar Kate Darling: pet robots or chatbots might deserve protections against abuse because humans form emotional attachments — and harming the robot harms the person. This is a recurring theme:


Sometimes “rights for X” are really rights for people, implemented through X.


The Mathematics of Fairness

Finally, we touched on the “Right to be Unbiased”. This sounds great in theory, but Miranda explained why it is mathematically impossible to perfect. Using the famous COMPAS case (a risk-scoring algorithm used to inform bail decisions in the US), she showed that there are competing mathematical definitions of fairness.


  1. Equal False Positive Rates: the rate at which people are wrongly predicted to be high-risk when they aren't should be the same for every group.

  2. Predictive Parity (Precision): if the system says you are high-risk, the probability that you actually are should be the same for every group.


In the COMPAS case, the system roughly satisfied predictive parity but had a much higher false positive rate for Black defendants. When groups have different underlying rates of reoffending, it is often mathematically impossible to satisfy both definitions at once; the sketch below shows how the two metrics can pull apart.
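
To make the trade-off concrete, here is a minimal Python sketch with made-up confusion matrices for two hypothetical groups (the numbers are illustrative only, not the real COMPAS data):

    # Illustrative only: invented confusion matrices, not the real COMPAS figures.

    def rates(tp, fp, tn, fn):
        """Return (false positive rate, precision) for one group.
        fn is accepted so a whole confusion matrix can be passed in."""
        fpr = fp / (fp + tn)        # non-reoffenders wrongly flagged as high-risk
        precision = tp / (tp + fp)  # of those flagged high-risk, the share who reoffend
        return fpr, precision

    # Two hypothetical groups with different underlying rates of reoffending.
    group_a = dict(tp=40, fp=10, tn=40, fn=10)  # 50% of this group reoffends
    group_b = dict(tp=20, fp=5, tn=65, fn=10)   # 30% of this group reoffends

    for name, counts in [("Group A", group_a), ("Group B", group_b)]:
        fpr, precision = rates(**counts)
        print(f"{name}: false positive rate = {fpr:.2f}, precision = {precision:.2f}")

Both groups come out with a precision of 0.80, yet Group A's false positive rate is 0.20 while Group B's is roughly 0.07: holding one fairness metric equal has left the other badly unbalanced, which is exactly the kind of trade-off Miranda described.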


This means we can't just program “fairness” into a machine. We have to make difficult social choices about which definition of fairness we value most — and that requires consulting the people affected by the system, such as victims and defendants.



What Can You Do?

It is easy to feel powerless against Big Tech, but Miranda emphasised that the public has power.


  • Join Organisations: Individuals have little power alone, but collective groups (consumer rights, digital rights activists) have significant influence.

  • Participate in Consultations: Governments frequently run public consultations on tech policy. They want to hear from citizens.

  • Vote with Your Feet: Tech companies care about market share. If a product is unsafe or biased, consumer pressure forces change.


The Bottom Line

We shouldn't get distracted by sci-fi debates about whether a robot has a soul. The real questions are about legal convenience, corporate accountability, and mathematical fairness. We need regulation that is transparent, accountable, and targeted — not to protect the feelings of the machine, but to protect the rights of the humans using it.


Next Step: Interested in how you can actually influence AI policy? Look up open public consultations on technology in your country (like the UK Parliament or US Congress websites) this week. Your voice matters more than you think.



If you enjoyed reading, don’t forget to subscribe to our newsletter for more, share it with a friend or family member, and let us know your thoughts — whether it’s feedback, future topics, or guest ideas, we’d love to hear from you!
