Groq Cofounder Explains Whirlwind Deal With Nvidia

Groq co-founder Jonathan Ross persuaded Nvidia CEO Jensen Huang to pair Groq’s specialized Language Processing Units (LPUs), optimized for AI inference, with Nvidia’s versatile GPUs. The result was a $20 billion deal that absorbed Groq’s technology and talent while leaving Groq to operate independently. The partnership marks a shift in AI hardware toward specialized, heterogeneous computing and positions Nvidia to lead the market with cost-effective solutions for AI workloads.

Groq co-founder and CEO Jonathan Ross approached Nvidia CEO Jensen Huang last winter with a pitch to rethink AI data center hardware. Ross argued that training AI models and running inference—the application of AI—require different types of hardware. While Nvidia’s GPUs are versatile “big trucks” capable of handling both tasks, Groq’s specialized Language Processing Units (LPUs) are more like “smaller vans” optimized for fast inference. Ross proposed that the best solution was a combination of both technologies. After a detailed technical discussion, Huang ended the meeting without committing, but followed up three days later, signaling urgency to move forward.

Within three weeks, Nvidia announced a $20 billion deal to license Groq’s technology and hire most of its staff, effectively merging the companies without the formalities of a traditional merger. This strategic move allowed Nvidia to secure Groq’s innovative inference technology and talent while avoiding potential antitrust complications. Groq continues to operate independently as an LPU cloud provider, with Ross now serving as Nvidia’s chief software architect. Nvidia’s commitment to Groq was further emphasized at its annual developer conference, where the company highlighted Groq’s role in its AI inference strategy.

Financially, the deal was highly lucrative for Ross and key investors: Ross is expected to take home around $950 million in cash and stock, making him a billionaire. The transaction is also significant from a tax perspective, with the U.S. government projected to collect over $6 billion in tax revenue, although Nvidia benefits from substantial tax deductions. Nvidia’s embrace of inference chips marks a shift in the AI hardware market, emphasizing cost, latency, and throughput over the one-size-fits-all GPU approach. The integration of Groq’s LPUs with Nvidia’s latest GPUs was announced as a key product for 2026, signaling a new era of heterogeneous AI computing.

Groq’s journey has been challenging, with the company nearly failing multiple times since its founding in 2016. Despite generating only $3 million in revenue against $88 million in losses in 2023, and with revenue still modest even after raising $640 million in mid-2024, Ross remained optimistic about Groq’s potential. The company aimed to capture a significant share of the inference market, which is expected to drive the next phase of AI growth. Nvidia’s endorsement has validated this vision, shifting inference chips from a niche concept to a mainstream, Nvidia-backed technology.

Looking ahead, the success of the Nvidia-Groq partnership depends on how well their combined systems perform at scale. Huang expressed confidence that integrating Groq’s LPUs with Nvidia’s GPUs could unlock substantial new revenue streams and transform AI workloads. While it is still early days, the deal cements Nvidia’s leadership in AI hardware by embracing a more specialized, heterogeneous approach. This strategic move not only strengthens Nvidia’s market position but also signals a broader industry shift towards diversified AI computing solutions.