The arrival of AGI | Shane Legg (co-founder of DeepMind)

Shane Legg, co-founder of DeepMind, discusses Artificial General Intelligence (AGI) as a spectrum of capabilities that matches and eventually surpasses human cognitive abilities, emphasizing rigorous testing, ethical reasoning, and interdisciplinary collaboration to manage its profound societal impacts. He predicts a 50% chance of minimal AGI by 2028, highlighting both the immense opportunities and the significant challenges it presents.

In this insightful conversation on the Google DeepMind podcast, Shane Legg, co-founder and chief scientist of DeepMind, discusses the concept and arrival of Artificial General Intelligence (AGI). Legg defines minimal AGI as an artificial agent capable of performing the range of cognitive tasks that a typical human can do. While current AI systems demonstrate remarkable abilities, such as multilingual communication and vast general knowledge, they still fall short in areas like continual learning and complex reasoning, especially visual reasoning. Legg emphasizes that AGI should be viewed as a spectrum rather than a strict threshold, with minimal AGI marking the point where AI matches typical human cognitive abilities, and full AGI encompassing the entire range of human intellectual capabilities.

Legg also explores the challenges of defining AGI and the confusion caused by varying interpretations. He explains that the term originally referred to a field of study focused on building broadly capable AI systems, but over time it has come to denote a category of AI artifacts. Different people apply different benchmarks for AGI, from passing comprehensive exams to economic measures such as generating significant profit. Legg advocates a rigorous approach: a broad suite of cognitive tasks that an AI must pass to be considered AGI, followed by adversarial testing to surface any remaining cognitive failures (a sketch of such a two-stage evaluation appears below). He predicts that as AI systems become more generally capable, the term AGI will become more widely accepted and less contentious.
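To make the two-stage idea concrete, here is a minimal Python sketch of what such an evaluation harness could look like. The task categories, function names, and pass/fail logic are purely illustrative assumptions, not anything specified in the podcast.

```python
# Hypothetical sketch of the two-stage evaluation Legg describes:
# (1) a broad suite of cognitive task categories an agent must pass,
# (2) adversarial probing for remaining failures. All names are illustrative.

from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class TaskResult:
    category: str      # e.g. "continual_learning", "visual_reasoning"
    passed: bool

def run_suite(agent: Callable[[str], str],
              tasks: Iterable[tuple[str, str, Callable[[str], bool]]]) -> list[TaskResult]:
    """Run every (category, prompt, checker) task against the agent."""
    return [TaskResult(cat, check(agent(prompt))) for cat, prompt, check in tasks]

def is_minimal_agi_candidate(results: list[TaskResult],
                             adversarial_probes: Iterable[Callable[[], bool]]) -> bool:
    # Stage 1: the agent must pass the full breadth of cognitive categories.
    if not all(r.passed for r in results):
        return False
    # Stage 2: adversarial testers then hunt for residual cognitive failures;
    # a single demonstrated failure disqualifies the system.
    return all(probe() for probe in adversarial_probes)
```

The design point the sketch tries to capture is that breadth comes first: failing any one cognitive category rules the system out before adversarial probing even begins.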

Ethics is a major theme in the discussion, with Legg highlighting the importance of embedding ethical reasoning within AI systems. He describes a “System 2 safety” approach, inspired by human dual-process thinking, in which AI engages in deliberate, logical reasoning about ethical dilemmas rather than relying on instinctive responses; this could enable AI to make more consistent, and potentially better, ethical decisions than humans do (a rough sketch of the idea follows below). However, grounding AI ethics in human values is complex, given cultural differences and the challenge of keeping AI systems safe and reliable as they become more capable. Legg stresses the need for ongoing testing, monitoring, and interpretability to manage risks, including preventing misuse in areas like weapon development or hacking.
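As a rough illustration of what deliberate, System 2-style ethical reasoning might look like in code, the following Python sketch wraps an arbitrary text model in an explicit deliberation step before it answers. The principle list, prompt wording, and verdict convention are invented for the example and are not DeepMind's method.

```python
# Illustrative sketch (not DeepMind's implementation) of a "System 2 safety"
# style pipeline: the model reasons explicitly about the ethics of a request
# before any answer is produced. `model` is any text-in/text-out callable.

from typing import Callable

ETHICS_PRINCIPLES = [
    "Refuse assistance with weapon development or hacking.",
    "Avoid actions that cause foreseeable harm to people.",
]

def deliberate_then_answer(model: Callable[[str], str], user_request: str) -> str:
    # Step 1: slow, explicit deliberation about the request against stated principles.
    deliberation = model(
        "Reason step by step about whether the following request conflicts "
        f"with these principles: {ETHICS_PRINCIPLES}\n\nRequest: {user_request}\n"
        "End with VERDICT: ALLOW or VERDICT: REFUSE."
    )
    # Step 2: act only on the explicit verdict, not on an instinctive first response.
    if "VERDICT: REFUSE" in deliberation:
        return "I can't help with that request."
    return model(user_request)
```

The contrast with instinctive responding is the point: the system commits to a reasoned verdict before generating its answer, making the ethical judgment explicit and inspectable.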

Looking ahead, Legg foresees a massive societal transformation driven by AGI and superintelligence, which will surpass human cognitive capabilities by orders of magnitude. He compares this to how machines have already outperformed humans in physical domains and predicts that AI will similarly exceed human intelligence in many cognitive areas. This transformation will disrupt labor markets, especially in cognitive professions that can be performed remotely, while some jobs requiring physical presence or unique human traits may be less affected. Legg calls for broad interdisciplinary engagement to understand and shape the implications of AGI across education, law, economics, and other fields, emphasizing the need to rethink societal structures and wealth distribution in a post-AGI world.

Finally, Legg reiterates his long-standing prediction of a 50% chance of minimal AGI by 2028, with full AGI likely within a decade thereafter. He expresses optimism about the enormous opportunities AGI presents, likening it to a new industrial revolution that could vastly increase wealth, scientific progress, and human flourishing. However, he acknowledges the profound challenges of managing risks and ensuring ethical development. Legg urges more people from diverse disciplines to engage seriously with these questions, as the arrival of AGI is no longer distant speculation but an imminent reality demanding urgent and thoughtful attention.