LlamaCon, Qwen3, DeepSeek-R2 rumors and JP Morgan’s open letter on AI

The episode reviews major AI developments over the past year, including Meta’s LlamaCon and the launch of the Llama API, as well as China’s advancements with Alibaba’s Qwen3 model and efforts to develop independent AI hardware. It also discusses industry concerns about AI safety and governance, prompted by J.P. Morgan’s open letter, emphasizing the need for balanced regulation and security measures to ensure responsible AI deployment.

The episode begins with a reflection on the biggest AI developments of the past year, noting how some innovations that once seemed groundbreaking, like Kolmogorov-Arnold Networks, turned out to be less impactful than anticipated. The hosts discuss how the cost of running advanced AI models has plummeted, making them more accessible and affordable and shifting the focus from raw capability to efficiency and usability. They emphasize that many early expectations about AI breakthroughs have been tempered by the rapid evolution of the technology and of market dynamics.

The main segment covers the recent LlamaCon event, Meta's first official conference dedicated to its open AI models, particularly the Llama series. Key announcements included the launch of the Llama API, a developer platform designed to unify access to Meta's models, blending open-source flexibility with closed-source reliability. The hosts analyze how this move makes Meta's ecosystem more developer-friendly, aiming to foster experimentation, fine-tuning, and ecosystem growth around the Llama models while balancing privacy and performance considerations.
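For a feel of what a unified developer platform like this looks like in practice, here is a minimal, hypothetical sketch using the OpenAI Python SDK pointed at an OpenAI-compatible chat-completions endpoint; the base URL and model identifier below are placeholders for illustration, not confirmed Llama API values.

```python
# Hypothetical sketch: calling a Llama model through an OpenAI-compatible
# chat-completions endpoint. The base_url and model name are assumptions
# for illustration, not confirmed Llama API details.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.llama.example/v1",  # placeholder endpoint (assumption)
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="llama-4-example",  # placeholder model id (assumption)
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize the key LlamaCon announcements."},
    ],
)

print(response.choices[0].message.content)
```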

Further discussion delves into the Chinese AI scene, focusing on Alibaba's release of Qwen3, a hybrid model that combines thinking and non-thinking modes: thinking mode works through a problem step by step before answering, while non-thinking mode returns quick, direct responses, letting a single model trade depth of reasoning for speed depending on the task. The conversation highlights China's strategic efforts to build independent supply chains for AI chips and models, the global competition in AI hardware, and the importance of innovation outside traditional Western dominance. The hosts note that Chinese labs are making significant strides with efficient, smaller models that outperform larger counterparts.
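As a concrete illustration of the hybrid design, the sketch below shows how such a thinking/non-thinking switch is typically exposed through Hugging Face Transformers; the `enable_thinking` flag and the `Qwen/Qwen3-8B` checkpoint name follow Qwen's published model cards, but the exact details should be treated as assumptions rather than a verified recipe.

```python
# Sketch of toggling Qwen3's hybrid reasoning mode via the chat template.
# Assumes the `enable_thinking` switch described in Qwen's model cards.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-8B"  # one of the published Qwen3 checkpoints (assumption)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "How many prime numbers are there below 30?"}]

# enable_thinking=True lets the model reason step by step before answering;
# False switches to the fast, non-thinking mode.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```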

The episode then shifts to AI safety and governance, prompted by an open letter from J.P. Morgan's Chief Information Security Officer calling for industry-wide efforts to improve SaaS security amid the proliferation of AI agents. The hosts discuss the challenges of deploying AI at scale, especially in regulated industries, emphasizing the need for robust governance, guardrails, and security-by-design principles. They debate whether more regulation or more AI-driven security tooling is the way forward, ultimately agreeing that both are needed to ensure safe and responsible AI deployment.
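To make "guardrails" and "security-by-design" a little more concrete, here is a minimal, generic sketch of one common pattern: checking every action an AI agent proposes against an explicit allow-list and argument policy before executing it. The function and tool names are invented for illustration and are not drawn from the episode or the J.P. Morgan letter.

```python
# Generic guardrail pattern: an agent's proposed tool call must pass an
# allow-list and an argument policy before it is executed. Illustrative only.
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class ToolCall:
    name: str
    args: dict[str, Any]


# Only explicitly approved tools may run, each with its own argument check.
ALLOWED_TOOLS: dict[str, Callable[[dict[str, Any]], bool]] = {
    "search_docs": lambda args: isinstance(args.get("query"), str),
    "send_email": lambda args: args.get("to", "").endswith("@example.com"),
}


def execute_with_guardrails(call: ToolCall, registry: dict[str, Callable[..., Any]]) -> Any:
    """Run a tool call only if it passes the allow-list and argument policy."""
    policy = ALLOWED_TOOLS.get(call.name)
    if policy is None:
        raise PermissionError(f"Tool '{call.name}' is not on the allow-list")
    if not policy(call.args):
        raise ValueError(f"Arguments for '{call.name}' violate the policy")
    return registry[call.name](**call.args)


# Example usage with stubbed tool implementations.
registry = {
    "search_docs": lambda query: f"results for {query!r}",
    "send_email": lambda to, body="": f"sent to {to}",
}

print(execute_with_guardrails(ToolCall("search_docs", {"query": "SaaS security"}), registry))
```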

In closing, the hosts celebrate the podcast's one-year anniversary, reflecting on how their initial predictions and discussions have evolved. They revisit early topics like the Rabbit R1, a device once touted as a promising hardware innovation that ultimately proved less impactful, and the speculation around the mysterious gpt2-chatbot, which was thought to be a precursor to more advanced models. They also acknowledge the rapid progress in multi-agent systems and the open-source community, emphasizing how far AI has come in a single year. The episode ends with a humorous look back at their first predictions, highlighting the fast-paced and unpredictable nature of AI development.