CoreWeave: AI Bubble Poster Child Or The Next Tech Giant? — With Michael Intrator and Brian Venturo

CoreWeave’s CEO Michael Intrator and CSO Brian Venturo discuss the company’s rapid growth as a leading AI infrastructure provider, highlighting their transition from crypto mining to renting GPU power for major tech firms and their focus on operational excellence and risk management. They address concerns about business sustainability, GPU depreciation, and infrastructure constraints, emphasizing their disciplined approach and confidence in navigating the evolving AI market.

The Big Technology Podcast episode features CoreWeave’s CEO Michael Intrator and Chief Strategy Officer Brian Venturo, discussing the company’s rapid rise amid the AI boom. CoreWeave, now valued at $42 billion after a recent IPO, has become a central player in building AI infrastructure, bringing eight new data centers online in a single quarter and managing around 250,000 Nvidia GPUs. The founders describe the experience as both exhausting and exhilarating, emphasizing the unprecedented speed and scale at which they are operating. They highlight the privilege and challenge of building foundational infrastructure for what they see as the defining technology of our time.

CoreWeave’s origins lie in providing infrastructure for crypto mining, particularly Ethereum, before pivoting to AI as demand for compute power exploded. The company’s business model centers on renting out high-performance GPU capacity to major tech firms, with Microsoft as a key customer, though no single client now represents more than 30% of their backlog. The founders stress that their competitive edge comes from proprietary software and operational expertise, which allow them to extract maximum value from commodity GPUs and deliver highly reliable, efficient AI compute services. This, they argue, is why leading AI labs and enterprises choose CoreWeave over traditional hyperscalers.

A significant portion of the conversation addresses the risks and realities of CoreWeave’s business model, particularly debt, customer concentration, and the sustainability of AI demand. The founders explain that most of their infrastructure investments are backed by long-term contracts with creditworthy clients, minimizing risk. They reject the narrative that CoreWeave is simply taking on risk that hyperscalers avoid, arguing instead that all major players are building as fast as possible and that CoreWeave’s approach is disciplined and risk-managed. They also note that if the AI market were to contract, it could present acquisition opportunities rather than existential threats.

The discussion also tackles technical concerns, such as the depreciation and useful life of GPUs. Contrary to claims that GPUs become obsolete within a few years, the founders point out that older generations like Nvidia’s K80s and A100s remain in use for up to a decade, with customers still signing multi-year contracts for them. They argue that the true measure of depreciation is what sophisticated buyers are willing to pay, and demand for older GPUs remains strong. The conversation also addresses concerns about circular financing with Nvidia, clarifying that Nvidia’s minority stake in CoreWeave is small relative to the company’s overall capital base and is not indicative of artificial demand.

Finally, the founders address the issue of power and infrastructure constraints in the AI buildout. While power availability is a growing concern, they argue that the current bottleneck is more about construction labor and supply chain limitations than grid capacity. They anticipate that power will become a more significant constraint in the coming years, but also expect ongoing software and hardware innovations to improve efficiency. Overall, the episode paints CoreWeave as a well-positioned, strategically managed company at the heart of the AI infrastructure wave, confident in its ability to navigate both the opportunities and risks of this rapidly evolving market.