Oxford philosopher Carissa Véliz critiques society’s overreliance on AI-driven predictions, highlighting the ethical risks, biases, and loss of human agency that arise from treating the future as predetermined and blindly trusting algorithmic forecasts. She advocates for AI systems focused on truth rather than mere prediction, warns against the erosion of privacy and democratic values, and emphasizes the importance of humor and unpredictability as essential counterbalances to technological dominance.
In this insightful conversation on the Big Technology Podcast, Oxford philosopher Carissa Véliz discusses the pervasive role of prediction in modern society, especially in the context of AI. She emphasizes that while prediction is often seen as a valuable tool for anticipating the future, it carries significant risks and rests on questionable assumptions, particularly the flawed notion that the future is predetermined. Carissa highlights that the most impactful events in life are often unpredictable, and that overreliance on predictive algorithms can obscure this reality, sometimes leading to unfair or harmful outcomes such as biased hiring or loan decisions.
The discussion delves into the ethical and practical challenges of using predictive AI in critical areas like the justice system, employment, and finance. Carissa warns about the dangers of self-fulfilling prophecies created by algorithms, where predictions influence outcomes in ways that reinforce biases and reduce fairness. While acknowledging that people can sometimes circumvent algorithmic gatekeeping, she points out that increasing automation limits such agency, potentially sidelining talented individuals who do not fit conventional profiles. The conversation also touches on the opacity of AI decision-making processes, which can alienate individuals and erode accountability.
Carissa draws parallels between contemporary AI-driven prediction and ancient practices like the Oracle of Delphi, suggesting that society has long grappled with the desire to foresee the future. She praises certain AI applications, such as flood prediction, that have demonstrable benefits, but cautions against blind trust in all predictive models, especially those involving social phenomena, where data can be misleading or biased. The conversation also explores the trade-off between safety-oriented surveillance and the erosion of privacy and democratic freedoms, warning against the slippery slope toward authoritarianism.
The dialogue then shifts to generative AI, with Carissa characterizing large language models as fundamentally designed to please users rather than to seek truth, likening them to “bullshitters” who prioritize engagement over accuracy. While acknowledging ongoing efforts to ground these systems in factual information, she remains skeptical about their overall economic and societal value, citing examples of errors and the costs of correcting them. The conversation underscores the need for AI systems to be designed with truth-tracking as a core goal rather than mere prediction or profit.
Finally, Carissa advocates for a balanced perspective on prediction and technology, emphasizing the importance of humor and the analog world as vital counterweights to digital dominance and gloomy forecasts. She warns against the gamification of life through mechanisms like prediction markets, which can distort public perception and incentivize harmful behavior. Highlighting the role of comedy in challenging power and fostering democratic discourse, she argues that overreliance on prediction risks stifling innovation and the unexpected, both of which are essential for cultural and societal growth.