In a recent episode of “Mixture of Experts,” host Tim Hwang and guests Marina Danilevsky, Chris Hay, and Nathalie Baracaldo discussed the evolving role of AI in research and the implications of recent announcements from OpenAI and Anthropic. The conversation began with a debate on whether AI should be credited as co-authors or assistants in research. While Marina advocated for transparency in crediting AI as assistants, Chris humorously suggested that if AI is credited, then calculators should be too. Nathalie emphasized the importance of provenance in research, arguing that understanding the sources of data is crucial to avoid biases and ensure comprehensive analysis.
The discussion then shifted to OpenAI’s recent product announcements, including the Deep Research feature and the release of o3-mini. Deep Research aims to help users compile research reports but has drawn criticism for its lack of polish and effectiveness. Chris noted that while the feature is a fun experiment, it may not impress those familiar with existing agent frameworks. The panelists agreed that competitive pressure from companies like DeepSeek is pushing OpenAI to release products more rapidly. Nathalie raised concerns that AI-generated research could create filter bubbles, emphasizing the need for diverse perspectives in research outputs.
The conversation also touched on the upcoming AI Action Summit hosted by the French government, which aims to address the social, cultural, and economic impacts of AI. Marina expressed skepticism about the effectiveness of such international gatherings in producing meaningful outcomes, while Nathalie remained optimistic about the potential for fruitful discussions and collaborations. The panelists acknowledged the importance of having diverse voices at these summits to ensure a comprehensive understanding of AI’s implications across different regions and cultures.
Anthropic’s announcement of constitutional classifiers was another focal point of the discussion. The panelists examined the concept of constitutional AI, which involves creating a set of guiding principles for AI behavior. Chris pointed out that while the idea of guard models is not new, Anthropic’s approach shows promise in addressing jailbreak vulnerabilities. Nathalie highlighted the importance of red teaming in testing these models, noting that the success of the classifiers depends on their ability to withstand universal jailbreak attacks.
The episode concluded with a discussion of Microsoft’s new Advanced Planning Unit (APU), which aims to bring social science perspectives into AI development. Marina expressed enthusiasm for the interdisciplinary approach, while Nathalie emphasized the necessity of human involvement in understanding the societal implications of AI. Chris provocatively suggested that AI could automate much of this work, but the panelists ultimately agreed on the importance of maintaining human oversight and interaction in the development and application of AI technologies.