
The Most Important Conversation About AI

  • Writer: Kieren Sharma
  • Nov 5, 2025
  • 7 min read

Updated: Nov 9, 2025

AI is reshaping the way we work, live, and interact at a pace not seen since the invention of the printing press six centuries ago. As AI algorithms become deeply embedded in daily life — from filtering your social feeds and navigating maps to determining loan approvals and hiring decisions — ensuring these systems align with core ethical principles is now a crucial societal priority.


This episode features special guest Dr. Huw Day, researcher and co-organiser of Data Ethics Club, an open-to-all journal club on data and AI ethics. Together, we tackle the profound ethical challenges accompanying this technological leap, and explore why ethical AI is not about hindering technology but about shaping AI development in a way that maximises benefits while minimising harm.


The urgency of this conversation is undeniable. As Riku notes in the episode, AI is unique: where past technologies merely amplified human actions, AI takes power away from humans by making decisions on our behalf. Furthermore, these complex tools are only as reliable as the data they are trained on and the people who build them. The stakes of unquestioned development were underscored recently when a large coalition of experts signed the Call to Ban Superintelligence, urging a halt on developing AI vastly smarter than humans until there is scientific consensus on safety and strong public buy-in.







Core Principles of Ethical AI

Ethical frameworks around the world, notably UNESCO’s global Recommendation on the Ethics of AI (endorsed by 193 member states), coalesce around several core principles aimed at protecting human rights and societal well-being.


Fairness and Non-Discrimination

AI systems must treat individuals and groups equitably, avoiding the reinforcement of societal biases. Developers must strive to eliminate bias in data and algorithms so that benefits are accessible to all. For instance, algorithms trained on historical data can reproduce its biases, such as disproportionately showing STEM career ads to men.

Transparency and Explainability

AI operations should not be “black boxes”. Transparency ensures people know when AI is being used. Explainability (especially in high-stakes fields like healthcare or finance) means humans must be able to understand the AI’s reasoning, even though modern models are often too complex for even their creators to fully explain.

Accountability and Human Oversight

Ultimate responsibility remains with humans; it cannot be abdicated to algorithms. Developers and deployers must be accountable for outcomes, often by keeping humans “in the loop”. Policymakers are exploring mechanisms such as responsible officers for AI systems and liability insurance.

Safety and “Do No Harm”

AI must not pose undue risks, covering both physical safety (e.g., autonomous vehicles) and psychological or social harm. A 2025 study found that AI therapy chatbots often breached professional ethics by mishandling users in crisis or reinforcing negative sentiments.

Privacy and Data Protection

Since AI relies on vast amounts of personal data, respecting privacy rights is a cornerstone. This involves obtaining data fairly, securing systems against breaches, and ensuring AI is not deployed for mass surveillance that violates civil liberties.

Sustainability

Ethical development calls for monitoring and mitigating AI’s environmental footprint, as training large models consumes significant energy and water. AI should align with broader social goals and be a positive force for future generations.

Awareness and Education

Many frameworks stress the need for AI literacy and public engagement. People must be informed about what AI is doing and empowered to question or challenge its use. Mandatory labelling of AI-generated content is also key.
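
To make the oversight principle concrete, here is a minimal sketch in Python of a “human in the loop” gate (my own illustration, not from the episode; `model.predict_with_confidence` and `human_review_queue.escalate` are hypothetical interfaces): predictions the model is unsure about are routed to a person, and every decision records who made it.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90  # below this, a human decides (tune per use case)

@dataclass
class Decision:
    outcome: str        # e.g. "approve" / "deny"
    confidence: float   # the model's confidence in its suggested outcome
    decided_by: str     # "model" or "human", recorded for accountability

def decide(features, model, human_review_queue) -> Decision:
    """Route low-confidence predictions to a human reviewer.

    `model` and `human_review_queue` are hypothetical interfaces,
    stand-ins for whatever a real system would use.
    """
    outcome, confidence = model.predict_with_confidence(features)
    if confidence >= CONFIDENCE_THRESHOLD:
        return Decision(outcome, confidence, decided_by="model")
    # Borderline case: a person makes the call, seeing the model's suggestion.
    human_outcome = human_review_queue.escalate(features, suggestion=outcome)
    return Decision(human_outcome, confidence, decided_by="human")
```

The point of the `decided_by` field is the accountability principle itself: an audit trail should always show whether a human or a machine made the final call.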


Practical Challenges in Implementing Ethical AI

While the principles are clear, the reality of building and deploying ethical AI presents difficult technical and organisational hurdles:


Algorithmic Bias and Inclusive Design

AI systems often perpetuate or amplify biases present in their historical training data, leading to real harms such as unfair targeting for police scrutiny or discrimination in hiring. Researchers must categorise and mitigate these biases, though experts acknowledge that eliminating bias entirely is extremely difficult.
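
One way to surface such bias before deployment is a simple group-level audit. The sketch below (my own illustration on toy data, not from the episode) computes per-group selection rates and the “four-fifths” disparate-impact ratio often used as a first screen:

```python
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: iterable of (group, selected) pairs, where selected is a bool."""
    totals, chosen = defaultdict(int), defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        chosen[group] += was_selected
    return {group: chosen[group] / totals[group] for group in totals}

def disparate_impact_ratio(outcomes):
    """Ratio of the lowest to the highest group selection rate.

    Values below 0.8 fail the common "four-fifths" rule of thumb.
    """
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Toy shortlist data: (group, shortlisted?)
audit = [("A", True), ("A", True), ("A", False),
         ("B", True), ("B", False), ("B", False)]
print(selection_rates(audit))         # roughly {'A': 0.67, 'B': 0.33}
print(disparate_impact_ratio(audit))  # 0.5, well below the 0.8 threshold
```

A ratio well below 0.8 does not prove discrimination, but it is a strong signal that the pipeline deserves scrutiny before anyone is affected by it.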


Dr. Day offered two general tools for promoting inclusive design:


  1. Question Automation: Consider whether the decision-making process should be automated at all. Humans, while biased, are arguably easier to hold accountable than machines.

  2. Community Involvement: Ask the people being evaluated or affected how they would like the system to operate. Dr. Day shared an inspiring example where a clinical geneticist, Dr. Karen Low, consulted the GenROC Consortium (parents of children with neurodevelopmental genetic conditions) on all machine learning research ideas, and the consortium was included as a co-author on resulting papers.


The Black Box Problem and Accountability

The complexity of deep learning networks often means that neither the user nor the deployer can fully explain why an AI made a critical decision, undermining accountability and due process. Yet critical decisions affecting people’s lives demand explainability. This forces a trade-off: researchers may need to sacrifice some model accuracy to gain interpretability.
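
As a rough illustration of that trade-off (a sketch assuming scikit-learn and one of its bundled datasets, not an example from the episode): a shallow decision tree can be printed and audited rule by rule, while a large ensemble is typically more accurate but far harder to explain.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable: a depth-3 tree whose entire decision logic fits on one screen.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(tree))  # human-readable if/else rules anyone can audit
print("tree accuracy:", tree.score(X_test, y_test))

# More accurate but opaque: 300 trees voting together, with no single
# chain of reasoning a person could follow for any one decision.
forest = RandomForestClassifier(n_estimators=300, random_state=0)
forest.fit(X_train, y_train)
print("forest accuracy:", forest.score(X_test, y_test))
```

The forest will usually score a little higher, but only the tree’s reasoning can be read, questioned, and challenged line by line.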


Furthermore, when AI causes harm, assigning liability is complex due to the long chain of contributors (data engineers, algorithm designers, users).


The core message, however, remains: humans cannot abdicate responsibility to algorithms.

The Impact of Financial Incentives

A recurring theme in ethical discussions is capitalism and financial incentives. Dr. Day notes that the “move fast and break stuff” culture means ethics often becomes a “necessary diversion” from the goal of profitability. Companies may even use ethical concerns strategically; for example, large tech firms might advocate for regulations they already comply with, thereby restricting smaller companies trying to compete.


Exploiting the Workforce

While AI brings productivity gains, it threatens to displace many jobs. Anthropic’s CEO has predicted AI could replace up to 50% of entry-level white-collar roles in the next five years. Ethical development requires ensuring gains do not widen inequality. Possible remedies include investing in retraining and upskilling programs for impacted workers.


Perhaps the most harrowing challenge revealed is the creation of exploitative jobs used to train AI models. Dr. Day detailed the plight of Mechanical Turk workers (often outsourced to the Global South for cheaper labour) who perform menial data labelling, including reviewing and classifying deeply disturbing content to align large language models.


In one dreadful anecdote from the book Empire of AI, a Turk worker in Western Kenya, whose job involved labelling sexual content, struggled severely with the work. Months later, the very model he had helped make safe (GPT-3) was released, leading to the disappearance of the writing contracts held by his brother — his sole source of support. The worker asked: “I’m very proud that I participated in that project to make ChatGPT safe, but now the question I always ask myself: was my input worth what I received in return?” Dr. Day stressed that companies like Meta, offering signing bonuses in the tens of millions to AI engineers, “can probably afford to pay the data labelers more”.



The Landscape of Governance and Regulation

The global regulatory landscape is rapidly evolving, with different regions taking distinct approaches.


European Union – The EU AI Act

The EU has taken a pioneering role, developing the world’s first comprehensive AI law. It uses a risk-based approach (categorising AI systems as unacceptable, high, limited, or minimal risk).


  • Unacceptable-Risk AI (Banned): Includes AI systems for social scoring of individuals, the exploitation of vulnerable groups (such as harmful AI toys), and intrusive real-time biometric surveillance in public.

  • High-Risk AI: Systems used in critical infrastructure, medical devices, or employment decisions are not banned but are subject to strict oversight, including conformity assessments and human oversight.

  • Limited-Risk AI: Subject to lighter transparency requirements: developers and deployers must ensure that end-users know they are interacting with AI (as with chatbots and deepfakes).

  • Minimal-Risk AI: Not regulated. This covers most AI applications currently on the EU single market, such as AI-powered video games and spam filters (at least as of 2021; this is evolving with generative AI).
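
As a toy way to picture the Act’s structure (a sketch only; the example systems below are illustrative, echoing the categories above, not a legal mapping), each system is tagged with one of the four tiers, and obligations follow from the tier:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "conformity assessment and human oversight required"
    LIMITED = "transparency duties: disclose AI use to end-users"
    MINIMAL = "no specific obligations"

# Illustrative examples only, not a legal determination.
examples = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "CV-screening tool": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in examples.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```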


United Kingdom – A pro-innovation approach

In contrast, the UK initially adopted a more flexible and sector-specific approach, encouraging existing regulators in finance, health, and transport to interpret core principles like safety and fairness. The rationale was to maintain “critical adaptability” and avoid overbearing legislation that might stifle innovation. However, the UK has recently proposed binding measures on those developing the most powerful AI models.


United States

The US currently has no single federal law dedicated to AI ethics, relying instead on a patchwork of existing laws (e.g., anti-discrimination statutes). In 2022, the White House released a non-binding “AI Bill of Rights” blueprint, and by 2025 all 50 states had introduced some form of AI-related legislation.


Global Efforts

International organisations like the OECD and UNESCO have established broad ethical principles. However, regulating AI remains fundamentally difficult: the core challenge lies in auditing and validating whether principles like transparency and non-bias have actually been violated.



Toward Responsible and Beneficial AI

Building a responsible future with AI requires a multi-stakeholder effort involving technologists, governments, corporations, and civil society.


Technologists must design AI with ethical considerations from the ground up. Dr. Day highlighted a good example of this: Te Hiku Media in Aotearoa New Zealand, which sought guidance and consent from Māori elders and the wider community before collecting data. In just 10 days, the community contributed over 300 hours of transcribed audio. Te Hiku governed the data with a Kaitiakitanga licence — keeping it for the benefit of Māori rather than open-sourcing it — showing how early stakeholder engagement and data sovereignty can align technology with communal goals.


Crucially, the public has a massive role to play. We cannot trust providers to optimise for social harmony: AI development is concentrated in just a few firms, and models are often built by teams lacking cultural diversity.


As consumers, we must exercise our power:


  • Vote with your feet: Be discerning about which AI chatbots, websites, and apps you use, supporting those with values you endorse.

  • Increase AI Literacy: Public engagement and education are essential. Dr. Day emphasises that ethical discussions should be non-confrontational and inclusive, bringing people from different backgrounds together to share how data has affected them.

  • Take Action: Write to your local MP to make your voice heard on AI safety regulation.


As Kieren summarised, “Your understanding of AI rights shapes how policymakers act”.



Further Reading Recommendations (from Dr. Huw Day):


  1. Weapons of Math Destruction by Cathy O'Neil: An extremely pertinent read documenting how algorithmic bias impacts everyday life.

  2. Data Feminism by Catherine D'Ignazio and Lauren Klein: Approaches data science from a social science lens, discussing diversity and the importance of community engagement.

  3. Empire of AI by Karen Hao: Documents the development of large language models from a non-technical perspective, including the politics, the environmental impact, and the grim realities faced by Mechanical Turk workers.



If you enjoyed reading, don’t forget to subscribe to our newsletter for more, share it with a friend or family member, and let us know your thoughts—whether it’s feedback, future topics, or guest ideas, we’d love to hear from you!


