OpenAI o3, DeepSeek-V3, and the Brundage/Marcus AI bet

In a recent episode of “Mixture of Experts,” the panel discussed the state of deep learning, focusing on OpenAI’s o3 model and DeepSeek-V3, with varying opinions on whether deep learning is hitting a wall. The conversation also addressed AI governance challenges and a public bet between AI skeptic Gary Marcus and advocate Miles Brundage regarding future AI capabilities, emphasizing the need for realistic expectations and understanding of AI limitations.

The episode, hosted by Tim Hwang, covered the current state of deep learning and the release of two significant AI models, OpenAI’s o3 and DeepSeek-V3. Panelists Chris Hay, Kush Varshney, and Kate Soule opened by debating whether deep learning is hitting a wall. Chris was skeptical, claiming that models are actually getting worse; Kush acknowledged a wall but considered it surmountable; Kate took a more optimistic view, arguing that new applications of deep learning in 2025 would yield interesting benefits.

The conversation shifted to OpenAI’s o3 model, announced during the company’s “12 Days of OpenAI” marketing event. The o3 model reportedly posted strong results on established benchmarks, reigniting discussion about how much headroom remains for progress in AI. Chris shared his excitement about the model’s capabilities, particularly in coding, while expressing frustration over limited access in Europe. Kate elaborated on the innovations behind o3, emphasizing a shift toward spending more compute at inference time rather than training time, which could improve both performance and efficiency.
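One simple way to see the inference-time-compute idea is repeated sampling with majority voting: instead of taking a single answer, spend more compute drawing several samples and keep the most common one. The sketch below is a toy illustration, not o3's actual method; the `noisy_solver` stub is hypothetical and stands in for a model that answers correctly only part of the time.

```python
import random
from collections import Counter

random.seed(0)

def noisy_solver(true_answer, p_correct=0.6):
    """Hypothetical stand-in for one model sample: right 60% of the time.
    A real setup would call a model API here."""
    if random.random() < p_correct:
        return true_answer
    return true_answer + random.choice([-1, 1])  # a nearby wrong answer

def solve(true_answer, n_samples):
    """Spend n_samples worth of inference compute, then majority-vote."""
    votes = Counter(noisy_solver(true_answer) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

def accuracy(n_samples, trials=2000):
    """Empirical accuracy over many independent questions."""
    return sum(solve(42, n_samples) == 42 for _ in range(trials)) / trials

for n in (1, 5, 25):
    print(f"{n:>2} samples per question -> accuracy {accuracy(n):.2f}")
```

The per-sample model never changes; only the inference budget grows, yet measured accuracy climbs with the sample count. That is the essential trade the panel describes: buying quality with inference compute instead of more training.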

The panel then discussed the implications of the DeepSeek-V3 release, an open-source model from China that demonstrated impressive performance at a surprisingly low training cost. Chris highlighted the innovative techniques used in DeepSeek’s pre-training process, suggesting that the field’s focus might shift from pre-training toward fine-tuning and inference-time compute. Kate noted that the model’s mixture-of-experts architecture enables efficient operation by activating only a fraction of its parameters for each token at inference, which could drive further advances in AI efficiency.
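The mixture-of-experts mechanism Kate describes can be sketched in a few lines: a router scores a set of expert feed-forward networks per token, and only the top-k experts run. This is a minimal toy, with layer sizes and expert counts chosen for illustration only (DeepSeek-V3 is vastly larger and uses additional techniques such as shared experts and load balancing).

```python
import numpy as np

rng = np.random.default_rng(0)

D_MODEL, D_FF = 64, 256   # illustrative sizes, far smaller than DeepSeek-V3
N_EXPERTS, TOP_K = 8, 2   # route each token to 2 of 8 experts

# Each "expert" is a two-matrix feed-forward network.
experts = [(rng.standard_normal((D_MODEL, D_FF)) * 0.02,
            rng.standard_normal((D_FF, D_MODEL)) * 0.02)
           for _ in range(N_EXPERTS)]
router = rng.standard_normal((D_MODEL, N_EXPERTS)) * 0.02

def moe_layer(x):
    """Route one token vector to its top-k experts and mix their outputs."""
    logits = x @ router
    top = np.argsort(logits)[-TOP_K:]              # indices of chosen experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                       # softmax over the chosen k
    out = np.zeros_like(x)
    for w, i in zip(weights, top):
        w1, w2 = experts[i]
        out += w * (np.maximum(x @ w1, 0.0) @ w2)  # ReLU feed-forward
    return out

token = rng.standard_normal(D_MODEL)
y = moe_layer(token)

total = sum(w1.size + w2.size for w1, w2 in experts)
active = TOP_K * (D_MODEL * D_FF + D_FF * D_MODEL)
print(f"expert parameters touched per token: {active}/{total} "
      f"= {active / total:.0%}")
```

Here only 2 of 8 experts fire per token, so a quarter of the expert parameters do work at inference while the full parameter count is available for the router to choose from; that is the efficiency lever the panel points to.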

The discussion also touched on the challenges of AI governance, particularly in light of the rapid advancements in AI technology. Kush emphasized the need for global cooperation in AI governance, as laws in one country may not effectively regulate AI developments in another. The panel acknowledged that while larger models may be easier to govern, smaller models could pose significant risks, especially when used in autonomous applications.

The episode closed with a conversation about a public bet between AI skeptic Gary Marcus and AI advocate Miles Brundage regarding the future capabilities of AI. The panel debated the validity of the bet’s criteria, with Chris dismissing them as unrealistic and Kate raising the persistent problem of hallucinations in AI models. The discussion highlighted how expectations of AI capabilities keep shifting, the importance of framing these debates in terms the general public can engage with, and the need for a deeper understanding of AI’s limitations and potential.