The “Mixture of Experts” episode reviews major AI news from CES 2026, including Disney’s landmark licensing deal with OpenAI, Time Magazine’s focus on AI business leaders, Nvidia’s launch of the Nemotron 3 open-source model, and Anthropic’s innovative Claude Soul document for model alignment. The panel discusses how these developments signal a shift toward greater integration of AI in entertainment, business, and ethical frameworks, highlighting the growing influence of corporate interests and the evolving landscape of AI governance and creativity.
The episode of “Mixture of Experts” covers major AI highlights from CES 2026, focusing on the Disney-OpenAI licensing deal, Time Magazine’s Person of the Year, Nvidia’s Nemotron 3 launch, and Anthropic’s Claude Soul document. The panel, featuring Tim Hwang, Martin Keen, Marina Danilevsky, and Kush Varshney, begins by discussing the Disney-OpenAI partnership. Disney is entering a three-year licensing agreement with OpenAI that allows its characters and intellectual property to be used in generative AI models such as Sora, and it is also taking a billion-dollar equity stake in OpenAI. The panel notes this is a strategic move for Disney to control fan-generated content and keep it within Disney’s ecosystem rather than letting it proliferate on external platforms.
The conversation shifts to the broader implications of this deal for the entertainment industry and creators. Disney’s willingness to license its IP for generative AI marks a significant shift in how major content owners approach AI and fan creativity. The panel speculates that other IP holders may follow suit, potentially leading to a new wave of licensing deals and platform exclusivity. They also discuss the changing social contract around authorship and creativity, as AI-generated fan content becomes more mainstream and integrated into official channels.
Next, the panel discusses Time Magazine’s decision to name the “Architects of AI” as Person of the Year. They observe that the cover features CEOs and infrastructure providers rather than researchers, highlighting the dominance of business, hype, and financial interests in the current AI landscape. The discussion reflects on how 2025 was defined more by AI hype and business deals than by technical breakthroughs, with massive investments in AI infrastructure and data centers. The panel draws parallels to previous technological revolutions, noting that the focus has shifted from technical innovation to the business and cultural impact of AI.
The episode then covers Nvidia’s launch of Nemotron 3, its latest open-source AI model family. The panel debates why Nvidia, despite its dominance in AI hardware, has not led in model development. They note that Nvidia is moving up the stack by integrating hardware, software, and models, and that the quality of open-source models is converging across the industry. The discussion also touches on rising expectations for openness in model releases, including transparency about training data and reinforcement learning libraries, especially in light of upcoming regulations such as the EU AI Act.
Finally, the panel examines Anthropic’s Claude Soul document, a manifesto-like guide used during model fine-tuning to instill values and behavioral guidelines in the Claude AI model. Unlike typical safety documents, the Soul document is more philosophical and narrative-driven, aiming to embed a consistent “personality” and ethical framework into the model. The panel discusses the technical and philosophical implications of this approach, including its impact on model alignment, evaluation, and user experience. They conclude by reflecting on the future of prompting and model alignment, predicting that as AI systems become more complex, new methods for guiding model behavior will emerge, moving beyond the current reliance on prompts.