In the podcast, Murray Shanahan discusses the philosophical implications of AI, particularly consciousness and reasoning, emphasizing the evolving role of AI in society and its impact on human relationships. He critiques the term “intelligence” in the context of AI and advocates a new vocabulary that describes AI systems as “exotic mindlike entities,” while highlighting their limitations compared to human consciousness.
In the podcast episode featuring Murray Shanahan, a professor of cognitive robotics at Imperial College London and a principal research scientist at Google DeepMind, the discussion centers on the philosophical implications of artificial intelligence (AI), particularly consciousness and reasoning. Shanahan reflects on the nature of the human mind and consciousness, suggesting that questions about AI’s capabilities often lead to deeper philosophical inquiries about intelligence itself. He emphasizes that every child born today will grow up in a world where machines can communicate, underscoring the profound impact of AI on society.
Shanahan discusses his involvement in the film “Ex Machina,” which raises critical questions about AI and consciousness. He acknowledges that while the film successfully provokes thought about these topics, it also reflects the evolving nature of AI in popular culture. He contrasts “Ex Machina” with the film “Her,” which he initially found implausible but now recognizes as a more accurate prediction of how humans might form relationships with disembodied AI systems. This shift in perspective underscores the changing landscape of AI and its integration into daily life.
The conversation shifts to the historical context of AI, with Shanahan recalling his interactions with John McCarthy, who coined the term “artificial intelligence.” Shanahan critiques the term “intelligence,” arguing that it is a contentious concept that may not fully capture the essence of what AI systems do. He suggests that terms like “artificial cognition” might be more appropriate, although they lack the same appeal. The discussion also touches on the evolution of AI paradigms, from symbolic AI to neural networks, and the challenges of reasoning within these frameworks.
Shanahan elaborates on the limitations of current AI systems, particularly the reasoning abilities of large language models compared with traditional theorem-proving systems. He notes that while these models can generate responses that appear reasonable, they lack the formal guarantees of correctness that hand-coded systems provide. The conversation highlights ongoing research into combining symbolic techniques with modern AI approaches to strengthen reasoning, suggesting that the future of AI may lie in a hybrid of these methodologies.
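To make that contrast concrete, here is a minimal sketch, not taken from the episode, of the kind of hand-coded symbolic reasoner the discussion alludes to: a forward-chaining prover over propositional Horn clauses whose conclusions are sound by construction, unlike a language model’s sampled answers. The rule format and the example facts are illustrative assumptions, not anything specified in the podcast.

```python
# Illustrative sketch (not from the podcast): a tiny forward-chaining prover
# over propositional Horn clauses. Every fact it derives is entailed by the
# rules, which is the kind of formal correctness guarantee contrasted here
# with the merely plausible-sounding output of a language model.

def forward_chain(facts, rules):
    """Derive all facts entailed by Horn rules of the form (premises, conclusion).

    Soundness holds by construction: a conclusion is added only when every
    premise is already in the derived set.
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)
                changed = True
    return derived


# Hypothetical example: a Socrates-style syllogism encoded as propositions.
facts = {"socrates_is_a_man"}
rules = [
    ({"socrates_is_a_man"}, "socrates_is_mortal"),
]

print(forward_chain(facts, rules))
# {'socrates_is_a_man', 'socrates_is_mortal'} -- guaranteed by the rules, not just plausible
```

A hybrid approach in the spirit of the conversation might wrap a checker like this around a language model’s proposed reasoning steps, accepting only those that the symbolic component can verify; that pairing is one reading of the neuro-symbolic research direction mentioned, not a method described in the episode.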
Finally, the podcast delves into the complexities of consciousness, distinguishing it from intelligence. Shanahan argues that consciousness is a multifaceted concept encompassing awareness, self-awareness, and emotional capacity. He posits that while AI may exhibit certain mind-like qualities, it does not possess consciousness in the way humans do. The discussion concludes with Shanahan advocating a new vocabulary for describing AI systems, referring to them as “exotic mindlike entities” to capture their unique characteristics without overstating their capabilities. This evolving language reflects the ongoing exploration of AI’s role in society and the philosophical questions it raises.