What’s the plan if we get AGI by 2026? – OpenAI cofounder John Schulman

OpenAI co-founder John Schulman considers what would happen if AGI arrived as soon as 2026 and argues for preparing now: limiting deployment, holding off on further training if needed, and coordinating among AI labs. He favors a gradual, safety-first approach to capability gains so that highly capable systems are deployed responsibly and remain aligned with human intentions.

Schulman opens by taking seriously the possibility that artificial general intelligence (AGI) could arrive by 2026 and stresses the need to be prepared for it. If AGI emerged sooner than expected, he argues, labs would have to act with considerable caution: refraining from training even more capable successor models, sandboxing deployments, and avoiding large-scale rollouts until the risks of such rapid progress are better understood.

If AGI arrived earlier than anticipated, Schulman argues, the leading AI labs would need to coordinate on reasonable limits for deployment and further training. He acknowledges the game-theoretic difficulty here: if each lab prioritizes staying ahead over safety, a race dynamic takes hold. Coordination and agreed rules for responsible development are, in his view, essential to avoid a chaotic arms race and keep deployment on a safer footing.
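
The race dynamic Schulman describes can be read as a simple coordination game. The toy payoff matrix below is only an illustrative sketch (the labels and numbers are invented, not from the interview): each lab is individually tempted to keep racing, even though both are better off if they pause together.

```python
# Toy two-lab "race to deploy" game. Payoffs are illustrative only:
# higher is better, and (pause, pause) is the best joint outcome,
# but each lab is individually tempted to keep racing.

PAYOFFS = {
    # (lab_a_action, lab_b_action): (lab_a_payoff, lab_b_payoff)
    ("pause", "pause"): (3, 3),   # coordinated slowdown: safe, shared benefit
    ("pause", "race"):  (0, 4),   # the lab that pauses alone falls behind
    ("race",  "pause"): (4, 0),
    ("race",  "race"):  (1, 1),   # arms race: everyone accepts more risk
}

def best_response(options, their_action, me_index):
    """Return the action that maximizes my payoff given the other lab's action."""
    def my_payoff(action):
        pair = (action, their_action) if me_index == 0 else (their_action, action)
        return PAYOFFS[pair][me_index]
    return max(options, key=my_payoff)

if __name__ == "__main__":
    options = ["pause", "race"]
    # Without coordination, "race" is each lab's best response to anything the
    # other does, so play settles into (race, race) even though (pause, pause)
    # pays more for both -- the coordination problem Schulman points to.
    for their_action in options:
        print(f"If the other lab plays {their_action!r}, "
              f"best response is {best_response(options, their_action, 0)!r}")
```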

Schulman then explores a scenario in which AI companies pause deployment to assess the situation and coordinate to keep development in a stable equilibrium. He believes coordinated action among the limited number of organizations involved could prevent widespread misuse of AI technologies and keep the deployment landscape stable. In the optimistic version of this scenario, the technical problems of alignment are solved well enough that highly capable systems can be deployed safely, act in line with human intentions, and drive prosperity and scientific progress.

The discussion then turns to how the alignment and safety of AI systems could be verified before deployment. Schulman favors releasing increasingly capable models gradually, with each iteration accompanied by improvements in safety and alignment. This incremental strategy avoids a sudden buildup of risk and leaves room for monitoring and course corrections along the way. He advocates a continuous improvement process in which capability gains advance in parallel with safety and alignment work, so that development stays under control and unforeseen consequences are caught early.
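
One way to picture this incremental strategy is as a release gate that every new model version must pass before its deployment footprint is allowed to grow. The sketch below is a hypothetical outline under that reading, not OpenAI's actual process; the evaluation names, thresholds, and rollout fractions are invented placeholders.

```python
from dataclasses import dataclass

@dataclass
class EvalResult:
    name: str        # e.g. a hypothetical "alignment" or "misuse" evaluation
    score: float     # higher means safer in this toy convention
    threshold: float

    @property
    def passed(self) -> bool:
        return self.score >= self.threshold

def release_gate(version: str, evals: list[EvalResult], max_rollout: float) -> float:
    """Decide how widely a model version may be deployed.

    Returns a rollout fraction: 0.0 means hold the release and keep working on
    safety and alignment; otherwise expand gradually up to max_rollout, so risk
    never accumulates in one large step.
    """
    if not all(e.passed for e in evals):
        failed = [e.name for e in evals if not e.passed]
        print(f"{version}: hold release, failed evals: {failed}")
        return 0.0
    print(f"{version}: evals passed, expanding rollout to {max_rollout:.0%}")
    return max_rollout

if __name__ == "__main__":
    # Illustrative numbers only: each successive version must clear the gate
    # before it is deployed more widely than its predecessor.
    release_gate("model-v1", [EvalResult("alignment", 0.92, 0.90),
                              EvalResult("misuse", 0.85, 0.90)], max_rollout=0.05)
    release_gate("model-v2", [EvalResult("alignment", 0.95, 0.90),
                              EvalResult("misuse", 0.93, 0.90)], max_rollout=0.25)
```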

In conclusion, Schulman prefers a path where AI progress is managed incrementally, with safety and alignment improving alongside capabilities. He stresses the importance of monitoring development closely enough to detect concerning trends and respond to them in time. His vision is a future in which AI is deployed responsibly, extending human capabilities while guarding against misuse and catastrophic outcomes.