OpenAI x Broadcom — The OpenAI Podcast Ep. 8

In this episode of the OpenAI Podcast, OpenAI and Broadcom announce a groundbreaking partnership to develop custom AI chips and computing systems designed to scale AI infrastructure to unprecedented levels, targeting 10 gigawatts of data center capacity, with deployment beginning late next year. The collaboration emphasizes vertical integration, AI-driven chip design, and open standards to create the efficient, scalable compute essential for serving billions of users and advancing toward artificial general intelligence.

Andrew Mayne hosts the discussion, featuring Sam Altman and Greg Brockman from OpenAI alongside Hock Tan and Charlie Kawwas from Broadcom. The central announcement is a partnership to design a custom chip and an entire computing system tailored specifically for AI workloads. The collaboration aims to deploy computing infrastructure at an unprecedented scale, targeting 10 gigawatts of data center capacity with rollout beginning late next year, a massive leap in AI compute to meet the growing global demand for advanced intelligence services.

The conversation highlights the complexity and scale of this endeavor, emphasizing that it goes beyond just chip design to include the entire system architecture, from transistor-level customization to networking and data center deployment. Sam Altman explains that optimizing the full stack—from chip fabrication to the final AI model output—enables significant efficiency gains, resulting in faster, smarter, and more cost-effective AI models. This vertical integration is crucial to scaling AI capabilities to serve billions of users worldwide, as demand for AI-powered applications continues to grow exponentially.

Greg Brockman and Charlie Kawwas discuss the innovative approaches taken in the project, including using AI models themselves to optimize chip design, which accelerates development and uncovers efficiencies that human designers might take far longer to find. They also stress the importance of partnerships across the industry to build the necessary infrastructure. The team envisions a future in which AI agents run continuously for every individual, requiring far more compute than current hardware ecosystems can supply, hence the need for custom solutions.

The speakers draw historical parallels to major infrastructure projects like railroads and the internet, underscoring that building AI infrastructure is a long-term, global effort that will take decades to fully realize. They emphasize that AI compute is becoming critical infrastructure, akin to utilities that serve billions of people. The partnership aims to create open standards and scalable platforms that benefit the entire AI ecosystem, accelerating progress toward artificial general intelligence (AGI) and enabling breakthroughs across industries.

Finally, the discussion touches on the future roadmap, with plans to start shipping silicon by the end of next year and rapidly scaling deployment over the following three years. The collaboration is seen as a way to ensure compute abundance, making AI technology accessible and beneficial to all of humanity. Both OpenAI and Broadcom express excitement about the partnership’s potential to push the boundaries of semiconductor technology and AI capabilities, ultimately transforming how intelligence is delivered and utilized globally.