- Is AI About to Take My Job?
In our latest episode, we tackled the question that’s on everyone’s minds: is artificial intelligence about to take our jobs? This is a topic that came up frequently during our recent interviews with the public, highlighting the widespread anxieties surrounding AI’s growing capabilities.

Why Now?

The timing of this episode is crucial. As we move into 2025, we’re witnessing major AI companies rolling out new “agentic” AI models—systems that can plan, reason, and act without constant human oversight. Unlike standard AI (think facial recognition or old-school search engines) that tackles only single-step problems, agentic AI takes initiative: it not only understands and describes tasks but also decides on its next steps. It’s a progression that some believe is nudging us ever closer to Artificial General Intelligence (AGI), potentially more quickly than anyone previously predicted. To dive deeper into what AGI really means, be sure to check out our short stuff episode on the topic.

A Look Back: Lessons from Technological Unemployment

To shed light on the possible future impact of AI, we first turned our attention to history. We traced major waves of technological change—from the printing press and mechanised weaving to the steam engine, assembly lines, and the digital revolution—to see why these moments caused massive job shifts and also sparked new industries.

Decades ago, the Russian-American economist Wassily Leontief made a striking analogy: once steam engines rendered horses largely obsolete, it would only be a matter of time before technology outpaced human labour in a similar way.

“The human worker will go the way of the horse.” - Wassily Leontief

Even the term “technological unemployment” was popularised by John Maynard Keynes nearly a century ago. History shows that people’s concerns about being “replaced by machines” have been around for generations.
Two Major Forces

Throughout history, technological advancements have brought about two significant forces that influence the world of work:

- Substituting force: New technologies often displace certain human tasks and skills. For instance, the printing press replaced scribes, and mechanised textile machinery greatly reduced the need for hand-weavers.
- Complementing force: Over the longer term, however, technology has repeatedly created new roles and resulted in higher productivity. Cheaper textiles, for example, gave rise to booming consumer demand, factory expansions, and new jobs in related sectors like machine maintenance and product distribution.

Yet, as we discussed, the benefits are not always evenly shared. Economists warn of a “hollowing out” effect: many middle-skill jobs vanish while high- and low-skill roles grow, widening the gap and challenging the workforce to keep up.

Agentic AI: The Game Changer

We then focused specifically on agentic AI, differentiating it from standard AI. Standard AI operates as a single-step problem solver (e.g., facial recognition, song recommendation) where humans remain in the driver’s seat. In contrast, agentic AI is a multi-step problem solver with a feedback loop, capable of acting without constant human intervention. Examples range from the humble Roomba to sophisticated AI chess players and new search engines.

The development of agentic Large Language Models (LLMs), capable of planning, browsing the web, and reasoning, marks a significant step. Early forms of this, like Auto-GPT and BabyAGI, were developed by the open-source community as far back as 2023. The release of “deep research” modes by companies like Google (with Gemini in December 2024) and OpenAI (with ChatGPT in February 2025) showcased the power of these models to conduct complex research independently, saving significant time and effort. This ability of agentic AI to act without human oversight is a core concept.
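The “multi-step problem solver with a feedback loop” idea can be made concrete with a toy sketch. Everything below (the goal test, the “tools”, the doubling task) is invented purely for illustration; real agentic systems plan with an LLM rather than hard-coded rules.

```python
# A toy "agentic" loop: plan, act, observe, and repeat until the goal
# is met, with no human intervention between steps. The goal test and
# the "tools" here are invented for illustration only.

def standard_ai(task):
    # Single-step: one input, one output, a human decides what happens next.
    return f"answer for {task!r}"

def agentic_ai(goal, tools, max_steps=10):
    # Multi-step with a feedback loop: the system picks its own next action.
    history = []
    for _ in range(max_steps):
        action = tools["plan"](goal, history)   # decide the next step
        observation = tools["act"](action)      # carry it out
        history.append((action, observation))   # feed the result back in
        if tools["done"](goal, history):        # self-assess progress
            break
    return history

# A trivial task: keep doubling a number until it passes a target.
tools = {
    "plan": lambda goal, h: ("double", h[-1][1] if h else 1),
    "act":  lambda a: a[1] * 2,
    "done": lambda goal, h: h[-1][1] >= goal,
}
steps = agentic_ai(goal=50, tools=tools)
print(len(steps))  # 6 doublings to pass 50
```

The point of the sketch is the loop itself: the single-step function hands control straight back to a human, while the agentic version keeps deciding, acting, and checking its own progress.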
The potential of agentic AI has led to serious discussions, even prompting an open letter in 2023 calling for a pause in large-scale AI development. Furthermore, prominent figures like Mark Zuckerberg predict that by 2025, AI could start replacing mid-level programmers and software engineers.

Why AI is Different

Looking back at the printing press or the steam engine, humans ultimately found new tasks that only they could perform—or, at least, tasks the technology of the time couldn’t handle. AI, however, is evolving to do more and more tasks once considered uniquely human. That includes the new jobs created by the complementing force; in theory, AI could learn these roles just as quickly. Moreover, agentic AI models can learn by doing, fix their own errors, and run continuously without needing breaks. This velocity of improvement sets them apart.

We also discussed Moravec’s paradox, which reveals that AI struggles with tasks humans find simple (such as motor skills, basic perception, and face-to-face interaction) while excelling at what humans consider incredibly difficult (complex calculations, large-scale data analysis). This mismatch makes job displacement far from intuitive.

Another element is energy cost. Unlike prior machines that quickly replaced labour for economic reasons, current large language models demand huge computing resources and energy. As a fun side note, OpenAI itself doesn’t expect to see positive cash flow until 2029, hinting that high operating costs may slow a full-scale AI takeover...for now.

Is Your Job at Risk? A Rule of Thumb

We offered a simple framework: ask yourself three questions to gauge how vulnerable any given role might be to AI.

1. Is it easy to define the goal of the task? If yes, the AI knows exactly what “success” looks like.
2. Is it straightforward to understand or measure when that goal has been achieved? Clear metrics (e.g. code compiling without errors, a completed report) make automation easier.
3. Is there a lot of data on the task for the AI to be trained on? Abundant, high-quality data—like code repositories, research papers, or market reports—enables AI to learn quickly and reproduce strong results.

Applying these, we found roles in software coding, research & analysis, writing & content creation, and customer support & administration particularly exposed. Surveys from 2025 show about 90% of US workers worry about AI’s impact on their job security, and around 40% fear complete job elimination.

Looking to the Future: ACI, AGI, and Beyond

A major idea in this episode was Artificial Capable Intelligence (ACI), a term popularised by Mustafa Suleyman, CEO of Microsoft AI. He even suggests a modern Turing test: hand an AI $100,000 in seed capital and see if it can grow this to $1 million on its own. This underscores how deeply AI may soon integrate with (or entirely replace) human decision-making.

Digital vs. Physical

Though AI has made leaps in digital tasks, it remains limited with physical tasks. Humanoid robots still struggle with the flexibility, dexterity, and real-time adaptability humans take for granted. Yet progress here is speeding up. If (or probably when) advanced robotics converge with agentic AI, a far broader range of jobs could fall into the “substitutable” column.

A “Deep Utopia”?

We also explored the concept of a future where machine intelligence solves all of our problems, leaving human labour largely unnecessary—a scenario dubbed “deep utopia” by Nick Bostrom. The question then becomes: what gives life meaning when work disappears? Some philosophers predict we’ll shift focus to pursuits we now see as leisure—art, exploration, community projects. Others note that certain roles, often summarised as the “three Ps” (priests, prostitutes, parenting), might resist automation because society inherently values the human touch.
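Returning to the rule of thumb from earlier: the three questions lend themselves to a toy checklist. The sketch below is purely illustrative; the questions are the episode’s, but the example roles and yes/no answers are our own guesses, not survey data.

```python
# Toy scoring of the three-question rule of thumb. The per-role answers
# are illustrative guesses, not data from the episode or any survey.

QUESTIONS = (
    "goal is easy to define",
    "success is easy to measure",
    "abundant training data exists",
)

def exposure(answers):
    # answers: one bool per question above. More "yes" answers means
    # more of the vulnerability signals are present, on this rough rule.
    return sum(answers) / len(QUESTIONS)

roles = {
    "software coding": (True, True, True),    # compiles? tests pass? huge repos
    "report writing":  (True, True, True),
    "care work":       (False, False, False), # fuzzy goals, little clean data
}

for role, answers in roles.items():
    print(f"{role}: {exposure(answers):.0%} of vulnerability signals present")
```

It is deliberately crude: the framework is a conversation starter about which parts of a role are automatable, not a prediction about whole jobs.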
The Journey Continues

AI’s potential to reshape the workforce may feel daunting, especially as agentic AI proves itself capable of handling increasingly complex tasks. Yet, if history has taught us anything, it’s that there’s more than meets the eye when it comes to technological unemployment. While jobs will inevitably evolve or disappear, there could also be new opportunities on the horizon, especially for those who adapt by honing skills AI struggles to replicate.

We hope this episode clarifies the mechanics of agentic AI and the broader conversation around jobs, displacement, and what the future might hold. If you enjoyed reading, don’t forget to subscribe to our newsletter for more, share it with a friend or family member, and let us know your thoughts—whether it’s feedback, future topics, or guest ideas, we’d love to hear from you!
- What the Public Thinks About AI (Live at We The Curious)
In this very special episode, we ventured out of the studio to capture the public’s thoughts and feelings about artificial intelligence. This unique event gave us the perfect chance to speak with the public directly, while also debuting our first-ever interactive generative AI exhibit.

Why We Must Explore Public Perceptions of AI

As AI is integrated into daily life faster than perhaps any technology before it, understanding public sentiment is crucial to addressing societal concerns and maximising benefits. Instead of solely relying on existing surveys, we conducted our own interviews in collaboration with We The Curious, Bristol’s renowned interactive science centre. Being situated within their Open City Lab, we loved their approach of framing the public as the experts, rather than us being the “AI guys” with all the answers.

Alongside the interviews, we even created our own interactive generative AI stand. This featured a keyboard and a big blue “generate” button, inviting people to create images based on prompts like “Can you create an image of someone writing with their left hand?”. This particular prompt was designed to highlight potential biases in AI models. It was fascinating to see people, especially children, engaging with this technology for the first time and sparking conversations about the ethical considerations of easily generating any image.

Throughout the episode, we played snippets of these live recordings, capturing the genuine voices and diverse perspectives of the people we spoke to. We also gathered responses via digital forms, ensuring we heard from a wide range of individuals.
Key Questions and Insights

Throughout our conversations, we linked participants’ responses to broader studies conducted by major institutions, including the UK government’s Public Attitudes to Data and AI Tracker Survey, the Alan Turing Institute’s Understanding Public Attitudes to AI, GovAI’s report on Public Opinion on AI, and YouGov’s survey on AI associations.

When you hear “AI”, what’s the first thing that comes to mind?

We kicked off our interviews by asking participants to share their immediate thoughts on AI. Responses were incredibly varied:

- Many adults immediately referenced generative AI tools like ChatGPT.
- Younger interviewees thought more visually, imagining playful concepts like “a pig in space” or “the cutest cat ever”.
- The prevalent emotions ranged from excitement about potential benefits to concerns about AI becoming “frightening”.

It was interesting to note how many children had already heard of and even used ChatGPT. We discussed how public perception of AI seems to have shifted from the idea of physical robots to the more abstract concept of AI capabilities, particularly chatbots and image generators. This is a recent change, as historically, people might have more readily associated AI with embodied robots. Interestingly, this aligns closely with broader government surveys showing similar sentiments, with excitement often balanced by fear or apprehension.

Word cloud of public sentiment towards AI by UK adults, Wave 4 (visualising the top 50 most often mentioned words) (GovUK, 2024)

Have you noticed AI impacting your daily life in any way?

We explored how individuals perceive AI impacting their everyday experiences:

- Several people noted using AI regularly at work, particularly generative AI tools for writing and creative projects.
- Younger participants mentioned using AI for educational purposes, highlighting its potential but also raising concerns about becoming overly reliant on technology.
One adult expressed scepticism about “hidden AI”, notably recommendation algorithms and targeted advertising—though few fully recognised these as AI applications. This aligns with studies showing a significant percentage of people frequently use AI, especially chatbots and image generators, often without formal training.

Awareness of AI over time (showing % selected each option) (GovUK, 2024)

What excites you most about AI, and what concerns you the most?

Our question about what excites and scares people most about AI revealed a leaning towards concerns, though excitement around the potential for positive applications was also present:

- Excitement centred around AI’s potential to create, educate, and simplify daily tasks.
- Key concerns included environmental impacts (energy consumption), ethical dilemmas around misinformation, and fears of job displacement.

Our observations mirror recent studies, particularly highlighting increasing public awareness and concern about AI’s environmental impact, even as global surveys emphasise data privacy and security as top worries.

Opinion on the impact of AI on the following situations (showing % selected each option) (GovUK, 2024)

Should we educate children more about AI?

There was a strong consensus among the adults we spoke to about the importance of educating children about AI:

- Parents overwhelmingly agreed children should be taught about AI, stressing the importance of understanding technology’s benefits and risks.
- Concerns were raised about potential manipulation, the need for boundaries, and the importance of understanding the technology to navigate the future.
- However, they also pointed out a significant hurdle: many educators themselves lack AI knowledge and the time to teach it effectively.

This concern was reflected in wider surveys.
Teachers’ surveys in America indicate a similar belief in the need for AI education before the age of 18, but also highlight a lack of time and resources to implement this effectively.

Would kids like an AI homework helper?

When asked if they’d like an AI to help with homework, the response from the children was an enthusiastic yes! Many highlighted AI’s potential to personalise learning, make homework easier, and reduce stress. One insightful young person even pointed out that homework takes away from family and relaxation time. These insights underline the attractiveness of AI’s personalisation capabilities, something increasingly recognised in educational technology studies.

If you could solve one world problem with AI, what would it be?

When asked to design an AI system to solve a major world problem, interviewees gave some thoughtful answers:

- Climate change, recycling, healthcare (especially mental health), and medical breakthroughs (like curing cancer) topped the list.
- Online responses echoed these themes and included using AI in court for verdicts and ethical healthcare solutions.

We also highlighted existing positive applications of AI in areas like reducing healthcare wait times and providing easier access to mental health support.

Robot pets vs. real pets?

Finally, as a fun wrap-up, we asked kids if a robot pet would be as fun as a real pet. Responses were optimistic and indicated a nuanced understanding, highlighting robot pets’ unique abilities, like replicating lost pets or learning new tricks, alongside limitations such as lacking warmth and affection. One child even considered the possibility of a transforming or flying robot pet. Their responses gave a compelling glimpse into future human-AI interactions and our evolving relationship with technology.

Final Thoughts

Our interviews revealed an insightful snapshot of public sentiment toward AI, capturing a blend of cautious optimism and realistic concern.
By connecting these personal insights with broader studies, we’ve underscored a crucial truth: as AI continues to rapidly integrate into society, understanding public perception will be key to navigating its benefits and challenges effectively.

We had a fantastic time recording this live episode and plan to do more in-person events in the future, so keep an eye out! We’d like to extend heartfelt thanks to We The Curious for partnering with us to facilitate these enlightening conversations.

If you enjoyed reading, don’t forget to subscribe to our newsletter for more, share it with a friend or family member, and let us know your thoughts—whether it’s feedback, future topics, or guest ideas, we’d love to hear from you!
- The AI That Runs the World
In this episode, we dive into the pervasive world of recommendation algorithms and explore how they are increasingly shaping our daily lives. From the moment we wake up to the time we go to sleep, these AI systems are constantly at work, influencing our choices and experiences.

What are Recommendation Algorithms?

Recommendation systems are automated tools designed to suggest the best items or content based on an individual’s preferences and past behaviours. They aim to solve the problem of information overload by filtering the vast amount of data available and presenting users with what they are most likely to be interested in. The first-ever recommendation system, known as the Grundy system, was created in 1979; it used a predefined set of rules to suggest books within a library. Today, however, most recommendation systems use sophisticated deep-learning methods to learn patterns from vast amounts of data.

There are two primary ways of suggesting content:

- User-based filtering: Recommends items based on the preferences of similar users. For instance, if a system identifies that cyclists tend to like non-alcoholic beer, it will recommend non-alcoholic beer to new cyclists.
- Item-based filtering: Recommends items similar to those a user has previously shown interest in. For example, if you watch a lot of rom-coms on Netflix, the system will recommend other movies in that genre.

Most of today’s recommendation systems combine these two methods into a hybrid system.

Where are They Used?

Recommendation algorithms appear in a wide range of applications, including:

- Music streaming: Spotify uses algorithms to recommend new music and curate playlists.
- Search engines: Google’s search results are tailored based on your past activity.
- Predictive text: Autocomplete features on your phone use algorithms to predict the next word you want to type.
- Social media: Platforms like Facebook, Instagram and LinkedIn use algorithms to curate the content that you see on your feeds.
- Email: Spam filters use algorithms to prioritise and filter your emails.
- News: News apps like Apple News and BBC News use recommendation algorithms to suggest relevant news stories.
- Entertainment: Streaming services like Netflix use algorithms to recommend movies and shows based on your viewing history.
- Navigation: Mapping apps like Google Maps use algorithms to suggest routes and locations.

How Are They Trained?

Recommendation algorithms are trained on historical data about user interactions: what users click on, how long they look at content, what they share, and how they rate products. The systems use this data to identify patterns and predict what a user will do next. The more data a system gathers, the better it gets at understanding your behaviour. For example, short-video platforms like TikTok have some of the most advanced algorithms because they have access to so much data on user behaviour.

Are They Serving Us or Persuading Us?

While recommendation systems are designed to help us navigate the vast amount of information available online, we discussed whether these systems actually serve our best interests or simply persuade us to spend more time on these platforms. It’s important to remember that these free platforms generate revenue through advertising, so their primary goal is to maximise user engagement, which often means maximising clicks and time spent on the platform. For example, 60% of TikTok users spend an average of 10 hours per week on the app.

These systems learn from user behaviour, and this can lead to some concerning trends. For example:

- Polarisation: Recommendation systems can create echo chambers where users are only exposed to information that confirms their existing beliefs, leading to significant polarisation.
- Negative news: People are more attracted to negative news and morally controversial content, so this type of content gets circulated more, and this is what the algorithms pick up as interesting.

Key Considerations for the Future

The episode concludes with a number of key points that we should consider as users and citizens interacting with these platforms:

- Transparency: Users should be able to see how these systems work and what data they use from us. For example, X (previously Twitter) open-sourced its recommendation system, making the algorithm available for anyone to see.
- Fairness: These algorithms must be designed to mitigate bias and ensure equitable outcomes for all users.
- Privacy: It is vital that personal information is anonymised and that people have some control over their data.
- Free will: Recommendation systems can influence choices, and we must consider whether these systems are serving us or persuading us. We mentioned a story where the US supermarket Target predicted a teenager’s pregnancy before the teenager’s father even knew.
- Regulation: As these systems become ever more powerful, governments must put regulations in place to ensure that they are safe and beneficial to society. This includes greater transparency, user control, and restrictions on how much data a company can collect and sell.

The key message is that recommendation algorithms are a vital part of how we interact with the digital world, and they are not going away. We need to be aware of the impact they have on our daily lives and make sure that their development and application are handled responsibly.

If you enjoyed reading, don’t forget to subscribe to our newsletter for more, share it with a friend or family member, and let us know your thoughts—whether it’s feedback, future topics, or guest ideas, we’d love to hear from you!
- An AI a Day Keeps the Doctor Away
In this episode, we explored the exciting and complex world of artificial intelligence in healthcare. We looked at the history of AI in medicine, current applications, ethical considerations and future possibilities.

A Look Back

The use of AI in healthcare is not a recent development. As early as 1971, computers were employed to automate medical tasks, such as diagnosing symptoms. By 1976, AI systems were developed to recommend antibiotic treatments for bacterial pathogens. The early 2000s marked a significant turning point with the Human Genome Project and the adoption of high-throughput measurement techniques, which generated vast amounts of data. By 2015, data-driven chatbots like Pharmabot were being utilised to provide information on medicines. In 2017, the US Food and Drug Administration (FDA) approved the first AI-powered tool for clinical settings, which analysed heart MRI scans using AI trained on big data. A major breakthrough came in 2020 with DeepMind’s AlphaFold, which solved the long-standing challenge of protein folding prediction, unlocking new possibilities for understanding the fundamental components of life and fighting disease.

Current Applications of AI in Healthcare

AI is currently being used in various ways to improve healthcare. Harvard’s School of Public Health estimates that using AI to assist with diagnoses may reduce treatment costs by up to 50% and improve health outcomes by 40%.

- Diagnosis and treatment: AI can analyse medical scans (X-rays, MRIs, etc.) to detect diseases like cancer and heart conditions, sometimes spotting subtle patterns that humans might miss. For example, AI can identify those with a high risk of fatal heart disease up to five years in advance. AI is also being used to help prevent blindness in people with diabetes by predicting diabetic retinopathy.
- Patient engagement: AI chatbots can help improve communication between healthcare professionals and patients, bridge language barriers, and assist with medication adherence.
- Administrative activities: AI can automate administrative tasks, freeing up healthcare professionals to focus on patient care.
- Drug discovery and design: AI is being used to speed up the drug development process, identifying new compounds and predicting how potential drugs might behave, potentially reducing costs and time. This includes the discovery of entirely new classes of antibiotics.

Understanding AI’s Decisions

AI explainability refers to the ability to understand and clearly communicate how an artificially intelligent system makes its decisions or predictions. There is often a trade-off between AI performance and the ease of explaining its decision-making process: complex AI models can achieve high performance but are harder to explain, while simpler models are more understandable but may perform less effectively.

One method for explaining AI decisions uses a ‘what if’ approach, altering input data to observe changes in the AI’s output. This technique generates heat maps that highlight where the AI is focusing, though not necessarily why it makes specific inferences. Advancing more explainable and interpretable AI models requires collaboration between clinicians and computer scientists.

An example of the heat map explainability method used when detecting clinically significant abnormalities within chest radiographs, where colour represents the AI’s confidence (Hong et al., 2023).

Ethical Challenges

AI in healthcare brings transformative potential, but it also poses significant ethical challenges that must be addressed to ensure its safe and equitable use.

- Bias: AI systems are trained on data that may not fully represent the population, leading to bias and inaccurate diagnoses for underrepresented groups—a critical issue requiring urgent attention.
- Transparency: AI systems sometimes identify patterns based on irrelevant factors, such as differences in image formats across hospitals, rather than genuine clinical indicators. This raises concerns about transparency and the need to ensure AI learns meaningfully.
- Overdiagnosis: While AI can detect conditions at very early stages, such as cancers that are not immediately life-threatening, this can result in unnecessary treatments for patients who may not require them.
- Limitations: AI systems have sometimes struggled to perform outside of research settings, such as during the COVID-19 pandemic. Poor-quality data, lack of diversity, and insufficient labelling in training datasets are significant contributing factors.

The Future of AI in Healthcare

The general consensus is that AI will enhance, rather than replace, the decision-making and execution skills of healthcare professionals. AI systems serve as tools to help healthcare providers gain deeper insights and make more informed decisions. The human element of healthcare remains indispensable, as the human touch plays a critical role in supporting patients’ mental well-being during diagnosis and treatment.

Multimodal AI is an emerging area of research that seeks to develop systems capable of integrating and interpreting diverse types of healthcare data—such as images, test results, and patient histories—over time, to provide a more holistic view of a patient’s condition and hopefully improve outcomes.

Final Thoughts

AI has immense potential to revolutionise healthcare by improving efficiency, reducing costs, and enabling personalised care. However, addressing ethical challenges is crucial to ensure these systems remain transparent, unbiased, and serve as tools to support, rather than replace, healthcare professionals. The future of AI in healthcare is bright, with innovations in multimodal AI poised to bring unprecedented sophistication to the field.
If you enjoyed reading, don’t forget to subscribe to our newsletter for more, share it with a friend or family member, and let us know your thoughts—whether it’s feedback, future topics, or guest ideas, we’d love to hear from you!
- Hidden AI: How Algorithms Influence our Daily Lives (Live in Bristol)
Hi everyone, welcome to a special recap of our first-ever live episode of Artificially Ever After, recorded at Bristol City Hall! We were extremely honoured to be in front of a live audience for our 10th episode, diving into the growing world of hidden AI. In this episode, we explored how algorithms influence our daily lives, and we also looked ahead at the rise of generative AI and how to prepare for the coming wave.

What Do We Mean By Hidden AI?

We kicked things off by defining what exactly we mean by “hidden AI”. These are the AI systems integrated into our daily routines, often without us realising we are interacting with them. Unlike visible AI, like chatbots or self-driving cars, hidden AI operates in the background, making decisions that affect our lives without our explicit knowledge.

Examples of hidden AI include:

- Recommendation systems on social media platforms
- Algorithms that decide credit ratings when you apply for a loan
- Parole assessments in the justice system

We focused specifically on recommendation systems, as they are the most pervasive example of hidden AI.

Recommendation Systems: From Libraries to Social Media

We took a step back to understand how recommendation systems worked before AI. Previously, experts like librarians defined “if-then” rules to suggest books based on reader preferences. These early systems aimed to satisfy the user’s specific needs. However, modern recommendation systems have shifted their focus to maximising engagement. Now, algorithms track micro-behaviours, like how long you spend on a post, rather than focusing on individual needs, with the goal of keeping you on the platform for as long as possible.
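The shift from hand-written “if-then” rules to engagement-driven ranking can be caricatured in a few lines of code. All data below (the books, the watch times) is invented purely for illustration:

```python
# Two caricatured recommenders. All data (books, watch times) is invented.

def librarian_recommend(reader_interest):
    # Old style: an expert's explicit if-then rules, aimed at the
    # reader's stated need.
    rules = {
        "history": "A People's History",
        "mystery": "The Hound of the Baskervilles",
    }
    return rules.get(reader_interest, "ask the librarian")

def engagement_recommend(watch_seconds):
    # New style: no stated preferences at all. Rank items purely by
    # micro-behaviour (here, how long similar users lingered on each item).
    return max(watch_seconds, key=watch_seconds.get)

print(librarian_recommend("mystery"))
print(engagement_recommend({"cat video": 12.0, "outrage clip": 87.5, "news": 30.2}))
```

The contrast is the point: the first function optimises for what the reader asked for, while the second optimises for whatever keeps eyes on the screen, and never asks the user anything.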
Key changes in recommendation systems:

- Shift in goal: From user satisfaction to maximising engagement
- Profiling: Algorithms now focus on micro-behaviours rather than user-defined preferences
- Loss of the human element: The ‘expert’ opinion is lost, replaced by deep-learning algorithms identifying patterns in human behaviour

This shift has led to some surprising outcomes. For instance, recommendation systems have been able to predict if someone is expecting a baby based on online activity. Platforms like TikTok have achieved incredibly high engagement rates, with some users spending over 10 hours per week on the app. This increased engagement, while beneficial for platforms, raises questions about whether these systems are truly serving us. Studies have shown that switching from chronological to algorithmic feeds leads to significantly more time spent on social media.

Expectations vs. Reality

We discussed how recommendation algorithms, initially built to increase connectivity and information sharing, have led to unexpected consequences. Below, we highlight some key discrepancies between the initial promises made by the companies developing these systems and the actual outcomes:

- Connectivity: More online interaction, but less physical time together
- Information sharing: Democratisation of publishing opinions, but also proliferation of fake news
- Community building: Access to global communities, but increased polarisation and echo chambers

The core issue is that deep-learning algorithms are designed to meet specific goals without necessarily considering the broader impact.

Generative AI: The Next Frontier

We then looked ahead to generative AI, which is becoming increasingly integrated into society. While exciting, this technology raises new challenges. We highlighted that while generative AI is being used to democratise education, provide emotional support, and generate content faster, there are also pitfalls.
One concern is the potential homogenisation of ideas if everyone uses the same AI tools. A key point is that generative AI is becoming increasingly integrated into the hidden AI category . For instance, Google's AI overview feature provides AI-generated summaries of search results, which can sometimes lead to inaccurate information . Additionally, AI is being used in biology research to discover new antibiotics , but without fully understanding how these systems work, there are dangers. The concern is that trust in online content could collapse . By 2026, up to 90% of online content could be synthetically generated . This has the potential to undermine the benefits of the digital world. What Can We Do? We explored what can be done at the government, industry, and individual level. Possible Actions: Government: Algorithmic transparency policies, similar to the EU's GDPR data protection laws Industry: Slowing the “arms race" to develop the most capable AI, with more robust testing before release Individuals: Learning the core concepts behind AI systems, making informed decisions about online behaviour and being mindful of the information you consume Audience Questions and a Few Fun Facts We also had some great audience questions during the live recording. The audience brought up issues such as how to optimise algorithms for positive influence, balancing efficiency with exploration, the spread of political misinformation, and how to improve algorithm satisfaction. As is tradition, we also had our fun fact segment , which included: A trading algorithm developed by a YouTuber using a goldfish named Frederick that actually outperformed the NASDAQ A football team in Scotland that used an AI ball-tracking system that would often focus on the bald head of a linesman The Japanese word for “love" is “AI" Final Thoughts We are in a crucial moment where we need to think critically about how we integrate AI into our lives. 
As we've seen with social media, technology implementation comes with responsibility. We hope this episode provided some valuable insights and encouraged everyone to engage with these issues. We'd love to hear your thoughts, so please do check out the full episode and reach out on our socials! Where to learn more? Some great books: Weapons of Math Destruction by Cathy O'Neil Automating Inequality by Virginia Eubanks Scary Smart by Mo Gawdat The Shortcut by Nello Cristianini The Alignment Problem by Brian Christian If you enjoyed reading, don’t forget to subscribe to our newsletter for more, share it with a friend or family member, and let us know your thoughts—whether it’s feedback, future topics, or guest ideas, we’d love to hear from you!
- AI in Warfare: Data is the New Ammunition
In our latest episode, we delved into the complex and increasingly relevant topic of AI in warfare . We aimed to demystify the subject, moving beyond scary headlines to explore the history, current applications, and ethical considerations surrounding the use of artificial intelligence in military contexts. The Historical Roots of AI in Warfare We started by exploring the surprising historical links between technological innovation and military investment. Key examples include: Alan Turing's work at Bletchley Park during World War II: His efforts to crack the German Enigma code led to the electromechanical Bombe machines, and Bletchley Park's codebreaking also produced the Colossus computer (built to break the Lorenz cipher), a crucial tool for the Allied forces. This work laid the foundation for modern computing and cemented Turing as a founding figure in AI. ARPANET: Developed by the US Advanced Research Projects Agency (now DARPA) in the 1960s, ARPANET was initially designed to facilitate secure communication between military personnel and research institutions. It became the precursor to the modern internet. DARPA's Challenges: Since the 1990s, DARPA has funded challenges that encourage the development of technologies for military use. These include the DARPA Grand Challenge, which led to the development of autonomous vehicles like Stanley, and robotics challenges that spurred companies like Boston Dynamics. The Third Offset Strategy: In 2014, the US Department of Defense launched this strategy, heavily investing in the development of artificial intelligence and autonomous weapons. Current Applications of AI in Warfare: Beyond the Battlefield The “Three D's": AI is being used for dull, dirty, and dangerous tasks, like transcribing communications, identifying objects in video footage, and operating in battle zones contaminated with biological or chemical weapons. While autonomous weapons often dominate the narrative, AI is being used in many other ways. 
These include: Cyber Warfare: AI plays a role in social media algorithm manipulation and the creation of deepfakes, which can influence public sentiment. Integration of Humans and AI Systems: AI systems are used to process the vast amounts of data collected by sensors on the battlefield and help humans make decisions. This aims to increase military power without increasing personnel, leading to greater efficiency. Command and Control: AI facilitates better decision-making and coordination of autonomous assets. Predictive Maintenance: AI is used to predict when equipment such as warplanes needs maintenance. Autonomous Vehicles: AI is used to operate unmanned aerial vehicles (UAVs) and unmanned underwater vehicles (UUVs). These can operate for long periods and go to places that are too dangerous or inaccessible for humans. Defense Systems: AI is used in defensive systems, such as the Iron Dome, which intercepts incoming missiles. Target Prediction: AI is used to track individuals, using video surveillance, phone data, and other pieces of information to identify potential targets. Metrics for Success: Precision vs. Recall We highlighted that AI systems in general need a central goal and a metric to optimise for. Using target prediction as an example, we explored the implications of two important metrics: Precision: Measures the proportion of accurate positive predictions. A system achieving 100% precision will exclusively identify true targets, never mistaking a civilian for a target. Recall: Measures the percentage of all true positive cases detected. A system with 100% recall will recognise all the actual targets but might incorrectly identify some innocent civilians. The choice of which metric to prioritise has huge implications in warfare, where a focus on recall might lead to high civilian casualties, while a focus on precision might allow threats to slip through. 
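The two metrics can be made concrete with a few lines of Python; the counts below are invented purely for illustration.

```python
# Precision and recall for a hypothetical target-identification system.
# The counts are illustrative only.

def precision(true_positives, false_positives):
    # Of everything the system flagged as a target, what fraction really was one?
    return true_positives / (true_positives + false_positives)

def recall(true_positives, false_negatives):
    # Of all the real targets, what fraction did the system flag?
    return true_positives / (true_positives + false_negatives)

# Suppose the system flags 10 objects: 8 real targets and 2 civilians,
# while 4 real targets go undetected.
tp, fp, fn = 8, 2, 4

print(f"precision = {precision(tp, fp):.2f}")  # 0.80: 2 of every 10 flags are wrong
print(f"recall    = {recall(tp, fn):.2f}")     # 0.67: a third of real targets slip through
```

Pushing either number towards 100% typically drags the other down, which is exactly the trade-off described above.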
Ethical and Legal Implications The ethical concerns surrounding AI in warfare are significant and include: Responsibility: When AI systems make errors, who is responsible? Is it the AI, the developer, or the human operator? Human Oversight: Maintaining human oversight is crucial so that moral responsibility is not abdicated to machines. The Lavender System: The use of the Lavender system in the Israel-Gaza conflict highlights the dangers of acting too directly on AI predictions without proper human review and decision-making. The need for regulation: There are currently no clear, globally recognised definitions of what autonomous weapons actually are, hindering international efforts to regulate their use. The potential for an arms race: The increasing investment in AI and autonomous weapon systems could lead to a dangerous arms race. The production “valley of death": The move to speed up the deployment of new AI technologies without considering the ethical and real-world implications is a cause for concern. The Future of AI in Warfare The trends are clear: defence budgets are increasingly being spent on AI and autonomous systems. We are likely to see a blurring of the lines between intelligence, surveillance, and command and control. As AI takes on more authority, there are many unanswered questions about ethical and robust systems. While some anticipate protection through the fear of a 'mutually assured destruction' scenario akin to nuclear weapons, it's also true that AI weapons are much easier and cheaper to produce, reducing the barrier to entry and potentially broadening the threat. If you enjoyed reading, don’t forget to subscribe to our newsletter for more, share it with a friend or family member, and let us know your thoughts—whether it’s feedback, future topics, or guest ideas, we’d love to hear from you!
- Can AI Help Fix the NHS? (An Interview with the Former NHS AI Lead)
In this episode, we delved into a critical question: can AI help fix the UK's National Health Service (NHS)? We were excited to welcome our first-ever guest, Dr. Hatim Abdulhussein , a leading figure in AI innovation within the NHS. Hatim is the former national clinical lead on AI within NHS England, now on the board of the NHS AI lab, and CEO of Health Innovation Kent Surrey Sussex. In addition to all of that, he is also a practising GP, offering a unique perspective to this important discussion. The NHS: Challenges and the Promise of AI The NHS is facing significant challenges, with over 7 million patients on waiting lists and around 100,000 staff vacancies. The UK public widely feels that the NHS is struggling. The newly-elected Labour Party has pledged to build an NHS “fit for the future", and AI is seen as a key part of that. Key goals include cutting waiting times, increasing appointments, doubling cancer scanners, and recruiting more mental health staff . Hatim explained that the increasing and ageing population will likely lead to an imbalance between people needing care and the workforce available to care for them. He believes that technology is essential to address these challenges . How AI Can Help AI has the potential to transform many aspects of the NHS. Hatim highlighted several key areas: Triage: AI can help manage patient flow, ensuring people see the right professional at the right time. This can lead to longer GP appointments and quicker responses for patients. Cancer Care: AI can speed up processes related to cancer diagnosis and treatment, including imaging and treatment planning. Administrative Tasks: AI can automate tasks like note-taking and referrals. This can save time for healthcare professionals, allowing them to focus on patient care. Decision Support: AI tools can provide clinicians with information and suggestions during consultations. This can augment human capability and capacity. 
Improved Efficiency: AI can lead to a happier, less stressed, and less burnt-out workforce. The Importance of People and Process While the potential of AI is huge, Hatim stressed that technology alone is not the answer . The success of AI in healthcare depends on aligning people and processes with the technology. He emphasised that the human element is critical in healthcare and that AI should augment human capability and not replace it. He noted that the public generally wants more access to healthcare professionals. Near-Term Reality of AI Hatim clarified that AI will not be making diagnoses autonomously any time soon. The focus is on using AI to assist and augment human decision-making. AI tools can provide suggestions, but the final decisions will be made by clinicians in partnership with patients. Addressing Bias One of the biggest concerns around AI is bias. Hatim emphasised that AI systems can perpetuate existing inequalities if not carefully developed and implemented. To address this, it's essential to use diverse data sets and to be aware of bias at all stages of the AI lifecycle. This includes checking for bias during procurement, implementation, and use. Public Engagement and Education It's important for the public to understand how AI is being used in healthcare. Hatim advised that people should engage with groups involved in AI decisions and familiarise themselves with the basics of the technology. He suggested that seeing how AI can be practically applied helps people feel more confident about the technology. The Future of Healthcare Hatim envisions a future of healthcare that is fair, rational, and tailored to the individual. It will utilise a range of technologies including AI, genomics, biosensors, and the internet of things. This more holistic approach would enable earlier disease prediction and prevention, with care delivered in communities. 
He stressed the importance of using technology to allow healthcare professionals to focus on the patient and the relationship. Key Takeaways AI has the potential to help address some of the most pressing challenges facing the NHS. AI should augment the work of healthcare professionals, not replace them. It’s vital to address bias and ensure AI is used ethically and equitably. The public needs to be educated and involved in the discussion about the use of AI in healthcare. Looking Ahead We are excited to see what the future holds for AI in healthcare and we thank Dr. Hatim Abdulhussein for sharing his expertise with us. If you enjoyed reading, don’t forget to subscribe to our newsletter for more, share it with a friend or family member, and let us know your thoughts—whether it’s feedback, future topics, or guest ideas, we’d love to hear from you!
- Is AI Art Real Art?
In this episode, we tackled a question that's been sparking debate and discussion: is AI-generated art “real" art? We explored the history of computer-generated art, the algorithms used to create it, and the ethical questions surrounding it. A Look Back It might seem like computer-generated art is a recent development, but it actually dates back to 1964, when a German mathematician named Georg Nees first used computers to generate art using basic shapes. In the early days, the art was produced using pre-computed formulas and algorithms, rather than through data-driven learning. A major shift happened around 2015 with the rise of deep learning AI . This allowed for AI to generate natural language captions for images and then, inversely, to generate images from text. Early attempts at generating images were low resolution, but technology has since improved rapidly, allowing for high-resolution, realistic images. OpenAI developed the first advanced AI art tool called DALL-E in 2021, but it wasn't released to the public until late 2022. Midjourney was actually the first to gain widespread adoption, releasing their model in July 2022. The use of Midjourney in an art competition win sparked controversy, highlighting the discussion around AI art. Today, multiple companies have produced AI models that use diffusion techniques. The quality of AI-generated art is improving rapidly, making it increasingly difficult to distinguish from human-created art. Demystifying Diffusion Models The way these models work is quite complex and abstract, but they can essentially generate brand-new content. Diffusion models start with random noise and then, through a sequential process, transform that into meaningful images. They are trained on massive datasets of images and their corresponding text captions, learning to associate features of the image with words in the caption. 
The diffusion process is like starting in a random spot in the middle of a desert with a compass and retracing steps to reach a specific location. Each time the path is different, resulting in a unique image. The compass in this analogy is the text prompt provided by the user. These models have many features, which allow for a vast number of image combinations, making them capable of generating a diverse range of content. In addition to generating images from text, these models can also do things like image-to-image generation, inpainting and outpainting. AI Tools for Art Image generation tools include DALL-E, Midjourney, and Stable Diffusion. Text-to-video models like Sora are also in development. Music composition tools are also being developed by companies like OpenAI, Stability AI (with its Stable Audio tool), and YouTube. Ethical Concerns AI models can reinforce harmful stereotypes due to biases present in the training data. These biases can be further exaggerated by how people are represented online. There is concern that current copyright laws are outdated for handling AI-generated art. Many artists worry that AI art tools could encourage plagiarism. A significant percentage of artists and the public believe that AI-generated art should not be considered art. Is It “Real" Art? AI art is often original and unique, as it's generated from random noise and is not directly copied from the training data. The word “create" actually means “to form out of nothing", which aligns with the way these models generate images. Some believe the absence of intention, imagination and emotion is what makes AI art different. AI models are trained for speed and efficiency, which contrasts with the time and effort put into art by human artists. AI art may lack authenticity and story. There are arguments that the value of art lies in how much one is willing to risk to experience it. 
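Returning to the desert-and-compass analogy from the diffusion discussion above, a toy numerical sketch can capture the flavour of iterative refinement. To be clear, this is not a real diffusion model (those train a neural network to predict and remove noise from images); it only illustrates how repeated small, guided steps turn random noise into something close to a target, with each run tracing a different path. The target vector and step sizes are invented for illustration.

```python
import random

# A toy numerical version of the desert-and-compass analogy: start at random
# noise and take repeated small steps guided by a "prompt" direction.
# NOT a real diffusion model -- just an illustration of iterative refinement.

random.seed(0)

prompt_target = [0.8, -0.2, 0.5]            # stand-in for "what the prompt asks for"
x = [random.gauss(0, 1) for _ in range(3)]  # start from pure noise

for step in range(50):
    # Move a fraction of the way toward the target, plus a little fresh noise,
    # so every run traces a different path to a similar destination.
    x = [xi + 0.2 * (t - xi) + random.gauss(0, 0.02)
         for xi, t in zip(x, prompt_target)]

print([round(xi, 2) for xi in x])  # ends close to prompt_target, by a unique route
```

Changing the seed changes the journey but not the destination, mirroring how the same prompt yields different yet on-theme images.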
Ownership and the Future: The question of ownership is complicated, as it's unclear who owns AI-generated content - the user of the tool, the creator of the AI, or some other party. There are lawsuits in progress over copyright infringement regarding training data and mimicking artists' styles. Some models are starting to cite their outputs and there's a growing movement towards disclosing when content is AI-generated. There are concerns about how AI-generated images may be used for misinformation. Many believe that AI art tools will be used to assist artists, similar to cameras, photo editing software and other tools that allow artists to make new work. Final Thoughts This episode has highlighted that the landscape of AI-generated art is complex and still evolving. As these tools improve and become more readily available, it is increasingly important to understand how they work and what the ethical implications are. While there is not one simple answer to the question of whether AI art is “real" art, we hope we have provided you with some food for thought. If you enjoyed reading, don’t forget to subscribe to our newsletter for more, share it with a friend or family member, and let us know your thoughts—whether it’s feedback, future topics, or guest ideas, we’d love to hear from you!
- Will AI Let Us Talk To Animals?
In this episode, we explore a fascinating, if not widely discussed, application of AI: understanding and potentially communicating with animals . While it may not drastically change our daily lives like some AI applications, it presents a unique opportunity to connect with the animal kingdom on a deeper level. Why are we trying to talk to animals? The motivation behind this pursuit is multifaceted: Scientific Curiosity : Humans have always been fascinated by animal behaviour and communication. We seek a better understanding of the world around us, and animal communication is a key area we've yet to unlock. Breaking Down the Human-Animal Divide : By understanding how animals communicate and the emotions they express, we can move away from the idea of humans as completely distinct from other species. Spiritual Motivation : Humans are uniquely conscious, and understanding how other animals perceive the world could be incredibly insightful. It might offer fresh perspectives on life's big questions. Conservation Efforts : Understanding how animals perceive their environments could help with conservation efforts. This could reconnect us with nature and help preserve a variety of species. A Brief History of Studying Animal Communication Humans have long been interested in animal communication, with early accounts even appearing in the Bible. More scientific approaches began in the 1800s with Charles Darwin studying the emotional expressions of animals. In the 1960s, studies of dolphin vocalisations revealed complex clicks and whistles used for communication. However, in the 1970s, philosopher Thomas Nagel suggested that understanding animal experience was futile because we are limited by our human perspective. But now, we are exploring whether computers and AI can bridge that gap. How AI Translates Languages AI-powered machine translation has taken a new approach. 
Instead of relying on explicit rules of grammar like we do when learning languages in school, AI is trained on massive amounts of data, allowing it to infer rules and patterns on its own. This deep learning technique has led to the development of large language models . Embedding Spaces: Words are mapped into a “galaxy" of meanings where words with similar meanings are close together. These embedding spaces reveal that relationships between words (e.g. king to man, queen to woman) are consistent across languages. By aligning these “galaxies" we can translate between languages. Applying AI to Animal Communication The same principles behind AI-powered language translation can also be applied to animal communication. We aim to create a “galaxy" of animal sounds, where the position of a sound corresponds to its meaning or sentiment. AI may help to align these animal galaxies with the human language galaxy. Challenges There are several challenges we must overcome before we can achieve this goal: Lack of Data : There's a massive disparity in the amount of data available for human languages versus animal communication. We need to collect more high-quality data. Context : We need to understand the context of animal sounds, observing their behaviours and emotions along with their vocalisations. This will help us to 'ground' their communications. The Cocktail Party Problem : It can be difficult to isolate individual animal sounds in environments with lots of background noise. Fortunately, AI is helping solve this problem. Current Projects Several exciting projects are underway using AI to understand animal communication: Project CETI : This non-profit is gathering large amounts of data to decode whale communication. The Earth Species Project : This group is focused on decoding the communication of other species. 
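The “galaxy" of word meanings described above can be illustrated with toy vectors. Real embeddings have hundreds of dimensions learned from data; the hand-made 2-D vectors below (one axis loosely for “royalty", one for “gender") exist only to show the idea.

```python
import math

# Toy illustration of the "galaxy of meanings": hand-made 2-D word vectors.
# Real embeddings are learned from data and have hundreds of dimensions.
vectors = {
    "king":  [0.9, 0.8],   # royal + male
    "queen": [0.9, 0.1],   # royal + female
    "man":   [0.1, 0.8],
    "woman": [0.1, 0.1],
}

def cosine(a, b):
    # Cosine similarity: 1.0 means the two vectors point the same way.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# The king->man relationship mirrors queen->woman, so the arithmetic
# king - man + woman should land near queen.
analogy = [k - m + w for k, m, w in zip(vectors["king"], vectors["man"], vectors["woman"])]

closest = max(vectors, key=lambda word: cosine(analogy, vectors[word]))
print(closest)  # "queen"
```

Because these offset patterns look similar across languages' galaxies, aligning two galaxies gives a route to translation, and the hope described above is that animal-sound galaxies could one day be aligned in the same way.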
Ethical Considerations If we could talk to animals, several ethical issues must be addressed: Validating AI : How can we be sure that our AI translator is accurately interpreting animal communications? Complexity : Some animals may not have a complex language capacity, making some back-and-forth conversations difficult. Misuse of Technology : There is the potential for misuse with playback experiments or by poachers using synthesised mating calls to lure animals. Privacy : The technology to extract individual animal sounds may have potential human espionage uses. The Future If we could talk to animals, what would it mean? It may bring us closer to animals, fostering respect for their environments. It may bring a different perspective on life, which may be inspiring. This technology has the potential to help in conservation, improve animal welfare, and give us a better understanding of the world. We hope this episode has given you a lot to think about! If you enjoyed reading, don’t forget to subscribe to our newsletter for more, share it with a friend or family member, and let us know your thoughts—whether it’s feedback, future topics, or guest ideas, we’d love to hear from you!
- Are We Running Out of Data for AI? (Featuring Encord)
In this episode, we tackle a question that's becoming increasingly relevant in the world of AI: are we running out of data? Initially, we planned to discuss how AI might be progressing faster than anticipated, but recent research suggests a potential limit to the current methods of AI development. This episode explores how the rapid advancements in AI have been significantly fuelled by scaling up models, compute power, and training data. But, is this sustainable? Recent AI Progress The recent progress in AI, particularly in large language models (LLMs), has been impressive. Let’s look at the progression of OpenAI’s models, for example: GPT-2 (2019): 1.5 billion parameters, trained on 40GB of text data. GPT-3 (2020): 175 billion parameters, trained on 570GB of text data. This model's capabilities led to the rise of platforms like ChatGPT. GPT-4 (2023): an estimated 1.7 trillion parameters (OpenAI has not confirmed the figure); the training data size is not publicly known. These models show a clear trend of increasing size and, consequently, capability. However, it’s not just about scaling up the model size; there's also a crucial relationship between model size, data size, and training time. The Data Bottleneck While computational power and training time have been improving, we're starting to see a potential bottleneck with data. The amount of data on the internet is a finite resource, and we may be approaching the limits of what’s available. Public vs Private Web: AI models primarily train on publicly available online data, a fraction of the total data created. This includes websites, forums, blogs, and platforms like Reddit and Wikipedia. Private data like emails and personal messages are not accessible to these models. Common Crawl: The most widely used dataset is the Common Crawl, which contains about three billion web pages, around 100 terabytes compressed, or approximately 100 trillion tokens. 
Estimates: The public web contains an estimated 500 trillion tokens, while private data may hold around 3,000 trillion tokens. Current Usage : The largest training datasets for LLMs have reached about 15 trillion tokens, indicating that they aren't using the full Common Crawl due to data quality issues. What are tokens? Tokens are words, character sets, or segments of words and punctuation utilised by large language models (LLMs). In most tokenizers used today, 1 token approximately equals 0.75 words on average. Multimodality and Data Diversity The conversation then expanded to include other modalities beyond text, such as image, video, and audio data. There's an estimated 10 trillion seconds of video and 500 billion to a trillion seconds of audio on the public web. These different modalities provide context and information, enriching the data. How Long Until We Run Out? A study from Epoch AI suggests we could face a data shortfall between 2026 and 2032, with a median date of 2028. This is primarily due to the heavy reliance on text data, and the rate at which the data required for these models is growing versus how quickly data is created on the internet. Encord is Here to Help! The second half of the episode featured Oscar Evans , a machine learning solutions engineer from Encord , a data development platform that focuses on data curation and labelling. Encord helps companies manage their data at scale, emphasising the importance of data quality over quantity. Key takeaways from Oscar included: Data Quality is Paramount : Data quality is essential for specific use cases. Managing Data at Scale : Encord helps clients find and manage specific data within large datasets. AI to Understand AI : AI can be used to understand and categorise data sets. Multimodality : The fusion of different data types is essential for developing agentic AI models. Data Poisoning : Even very small amounts of poor-quality data can negatively impact models. 
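As a back-of-the-envelope check on the figures above, the rough rule of thumb of 1 token ≈ 0.75 words (it varies by tokenizer) lets us convert between tokens and words and compare current training sets against the estimated stock of public-web text.

```python
# Back-of-the-envelope arithmetic for the figures quoted in the episode.
# Rule of thumb (tokenizer-dependent): 1 token ~= 0.75 words.

WORDS_PER_TOKEN = 0.75

def tokens_to_words(tokens):
    return tokens * WORDS_PER_TOKEN

trillion = 1e12
current_training_set = 15 * trillion   # largest LLM training sets, in tokens
public_web_estimate  = 500 * trillion  # estimated public-web text, in tokens

print(f"{tokens_to_words(current_training_set) / trillion:.2f} trillion words")
# 15T tokens is roughly 11.25 trillion words of training text
print(f"used: {current_training_set / public_web_estimate:.0%} of the public-web estimate")
# ~3% -- headroom exists on paper, but much of it fails quality filters
```

The gap between 15 trillion tokens used and 500 trillion available looks comfortable until data quality is taken into account, which is why the episode's discussion turns to curation rather than raw scale.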
High-Quality Labels : It is critical to have high-quality, human-reviewed labels for accurate results. Data-Centric Approach : Instead of focusing on simply scaling up data, focusing on the quality and relevance of data is more effective. The Future of AI Training We discussed how companies are beginning to use AI-generated or 'synthetic' data to train AI models. This is currently being done in areas where correctness is easily verified, such as coding tasks and mathematics. The potential risk is that models will collapse when they are trained repeatedly on previous versions of their own outputs. There's an increasing acknowledgement that simply scaling up models and data isn't a sustainable approach. New model architectures, data quality, and synthetic data will play an increasingly important role. Conclusion The episode revealed that while AI progress has been rapid, the current trajectory isn’t infinitely sustainable. As we approach data limits, there's a growing need to prioritise data quality, explore new modalities, and develop innovative methods for training AI models. The conversation with Oscar Evans from Encord highlighted that data is the key, and there's significant potential for progress by being more thoughtful about the data used to train AI models. If you enjoyed reading, don’t forget to subscribe to our newsletter for more, share it with a friend or family member, and let us know your thoughts—whether it’s feedback, future topics, or guest ideas, we’d love to hear from you!
- How AI Can Help You Achieve Your 2025 Goals
Happy New Year everyone! In this episode of Artificially Ever After, we delve into how AI can be a powerful tool in achieving your 2025 goals. We're not just talking about lofty resolutions that fade by the end of January, but about using AI to make real, lasting change. New Year's Resolutions: A Look at the Numbers We started by looking at a recent survey from the Pew Research Center that explored who makes resolutions and how successful they are at sticking with them. Three-in-ten Americans made at least one resolution for 2024 , with half of that group making more than one. New Year's resolutions are more popular among younger people aged 18 to 29. Over half of those who didn't make resolutions said they simply don't like them . After one month, only half of the people surveyed had kept all their resolutions , and 13% had kept none . This highlights the challenges many face in maintaining their goals. Why Do We Fail at Keeping Resolutions? Two major reasons were identified for why people fail to keep their resolutions: poor goal-setting and a lack of motivation. Goal Setting: Often, goals are not specific, measurable, achievable, realistic, or time-oriented (SMART). The SMART framework was originally designed for professional settings. Overly outcome-focused goals can also lead to discouragement. Motivation: Many people set goals based on what they think they should do, rather than what they want to do. Lack of accountability, poor tracking, and insufficient resources also contribute to a lack of motivation. How AI Can Help AI offers some unique solutions to help you achieve your goals. Specificity: AI is now very good at breaking down large goals into smaller, manageable tasks. These systems can help you plan how to achieve your goal and suggest steps you might not have considered. Measurement: AI can track your progress over time, using data from various devices and apps to provide personalised feedback and adjust your goals as needed. 
Realistic Targets: AI models are trained on vast amounts of data, including many examples of people achieving similar goals, which allows them to set realistic targets and timelines. Motivation: AI tools can help you find your 'why', provide accountability, and track your progress, which can all boost motivation. AI can also act as a “sounding board" to help you introspect and solidify your goals. AI Tools for Your Goals We explored various AI-powered tools, categorised by common New Year’s resolutions. Health, Exercise, and Diet: Personal Training: Apps and chatbots, such as ChatGPT, can create training plans. Strava and Whoop use AI to provide personalised workout analysis. Smart Mirrors: MAGIC AI’s Fitness Smart Mirror provides real-time feedback on your exercise form. Diet Tracking: MyFitnessPal uses machine learning and computer vision to track calories from photos of your food. Personalised Nutrition: Zoe provides detailed nutritional responses based on your unique gut microbiome. Quitting Addictions: Quitbot provides support for quitting smoking. Money and Finances: Cleo: This app uses AI to analyse spending habits, categorise transactions, and provide personalised budgeting advice with a sense of humour. Whering: This clothing manager helps you track your wardrobe and facilitates a sustainable clothing marketplace. Relationships and Self-Help: Wysa: With over 5 million users and counting, this mental health app combines an AI coach with human professionals to offer support and techniques for managing emotions, encouraging mental well-being. Hobbies and Personal Interests: Shortform: This app provides summaries of books and offers learning exercises to help retain the knowledge. Duolingo: This language learning app uses AI to create interactive conversations. Anki: This flashcard app uses spaced repetition to optimise learning. Whisk: This app helps with generating shopping lists and meal plans. 
- Blossom: Helps you identify house plant diseases and offers care advice.
- Yoodli: This speech coach gives feedback on how you speak and can be tailored to interviews or other public speaking scenarios.

Work and Career:
- Motion: This tool automatically schedules your calendar.

A Word of Caution

While AI offers many benefits, it's important to be aware of some potential drawbacks:

- Missing meaningful experiences: Relying too heavily on AI-generated summaries or shortcuts can mean missing out on the process and the learning opportunities it provides.
- Over-reliance on outcomes: Focusing solely on results can diminish the value of the journey and the skills you learn along the way.

Final Thoughts

AI is a tool, and like any tool, it should be used thoughtfully. The goal is to enhance our lives, not to replace the meaningful experiences that make us human. AI can help us track and analyse data, act as a "rubber duck" to help clarify our problems, and provide useful data-driven nudges to keep us on track. As we move into 2025, let's embrace the potential of AI to help us grow, but without losing sight of the value of the process. We hope this episode has inspired you to set a goal, and perhaps achieve it with a little AI assistance.

If you enjoyed reading, don't forget to subscribe to our newsletter for more, share it with a friend or family member, and let us know your thoughts—whether it's feedback, future topics, or guest ideas, we'd love to hear from you!
- AI in Sport: Fair Play or Foul?
In this episode, we discuss the world of sports, exploring the growing influence of artificial intelligence and asking the crucial question: is AI a game-changer or a step too far? From player recruitment to training regimes, tactical analysis, and even the role of referees, AI is rapidly changing the landscape of sports.

Why AI in Sports?

We start by addressing the central question: why talk about AI in sports? The primary reason is the increasing and widespread application of AI-driven technology across all aspects of sport. AI's integration is largely driven by the ability to gather vast amounts of data through wearable devices, health-tracking technologies, and improved camera technology. Computer vision, a subset of AI, plays a crucial role in analysing image data, making it easier to understand the 3D physical activity inherent in sports. Loughborough University in the UK has even established an MSc degree in Sports and Artificial Intelligence, highlighting the growing importance of AI in sports science.

A Look at the History

We then discussed the history of AI in sports, beginning with the story of Moneyball and the concept of sabermetrics from the 1970s. The film Moneyball (based on the 2003 nonfiction book) illustrates how data-driven decision-making can transform a team's performance by focusing on statistics rather than hype. In 1976, the well-known German sport scientist Herbert Haag coined the term "sport informatics", referring to the application of computer science in sports. More recent milestones include the introduction of the Television Match Official (TMO) in rugby in 2001, Hawkeye in cricket and tennis, goal-line technology in football in 2012, and the Video Assistant Referee (VAR) in football in 2018. Baseball began trialling automated ball-strike calling in 2019. The 2022 Qatar World Cup saw the first use of semi-automated offside technology, employing a sensor inside the ball and tracking cameras to determine offsides with high accuracy.
In 2024, an AI-powered judge produced a scorecard in a boxing match for the first time, and in 2025 Wimbledon will rely exclusively on Hawkeye technology for line calls, replacing human line judges.

Applications of AI in Sports

We then went on to explore the full scope of AI applications in sports today, including:

- Performance analysis and training: AI facilitates the establishment of smart training hubs that track biometrics and athletic performance. High-resolution cameras and computer vision software analyse video feeds to track player movements and provide detailed game analysis.
- Game strategy, tactical analysis, scouting and recruitment: Data tracking, as demonstrated by Moneyball, is now used by nearly all clubs to inform their recruitment processes. Some clubs, like Chelsea Football Club, allow individuals to upload videos of themselves performing specific football moves, using AI as a screening process. Apps like All Athlete enable athletes to upload videos, body measurements, and performance metrics. Interestingly, even esports are emerging as a source for discovering real-world athletic talent.
- Injury prevention and rehabilitation: Companies like Kitman Labs use AI to collect data and identify patterns that can predict potential injuries. GPS trackers, such as Apex, are used in rugby and other sports to measure distance covered, speed, impact, and other metrics, helping to monitor player welfare and detect potential concussions. Smart mouthguards with accelerometers are also being developed to track head impacts.
- Officiating and refereeing: AI is increasingly used to support or even replace human officials in making crucial decisions. Hawkeye in tennis is a prime example, and in 2025 Wimbledon plans to rely solely on this technology. While VAR in football has a high success rate (99.3%), its implementation has been controversial.
Ethical Considerations

The second half of the episode focuses on the "bad and the ugly" – the ethical considerations surrounding AI in sports. Key concerns include:

- Loss of the human element: Overturning decisions based on AI can diminish the excitement of the game and the role of human referees. Surveys show that many fans find VAR in football makes the experience less enjoyable due to delays and the removal of human interpretation.
- Competitive imbalance: Access to AI technology can exacerbate inequalities between clubs, as those with larger budgets can invest more in AI research and development.
- Errors: Studies have shown that the introduction of AI, such as Hawkeye in tennis, can affect the calls made by human umpires, making them more prone to certain types of errors.
- Privacy of data: The collection and use of sensitive biometric data raise privacy concerns for athletes.

Looking to the Future

We concluded the episode by looking ahead to the future of AI in sports. IBM research indicates that spending on AI technology in sports is set to increase significantly in the coming years. Streaming is changing how people consume sports, with more fans watching online and using apps to access real-time analysis. We raise some key questions about balancing real-time decision-making with accuracy, addressing potential distrust in AI officiating, and ensuring that AI technologies are accessible to smaller leagues. While AI has the potential to push the boundaries of what's possible in sports, it's important to consider whether this translates to a more entertaining experience and how to avoid exacerbating existing inequalities. The increasing demand for digital sports experiences also raises questions about trust in AI systems and the human element of the game. Ultimately, the integration of AI in sports presents both huge opportunities and many challenges, requiring careful consideration of its ethical implications and impact on the spirit of the game.
If you enjoyed reading, don’t forget to subscribe to our newsletter for more, share it with a friend or family member, and let us know your thoughts—whether it’s feedback, future topics, or guest ideas, we’d love to hear from you!











