OpenAI is highly overvalued and DeepSeek just blew up their business model, says NYU's Gary Marcus

Gary Marcus, an NYU professor and AI expert, criticized OpenAI’s $157 billion valuation, citing its heavy losses and the emergence of DeepSeek, which offers similar services for free, as threats to its business model. He emphasized concerns about the reliability of large language models and suggested that the democratization of AI technology could increase competition and erode OpenAI’s market position.

In a recent discussion, Gary Marcus, a professor at New York University and an AI expert, expressed skepticism about OpenAI’s valuation and business model. He argued that OpenAI is highly overvalued at $157 billion, especially given its losses of around $5 billion per year. Marcus said the recent emergence of DeepSeek, which offers similar services for free, has undermined OpenAI’s business strategy and is shifting the competitive landscape. He also believes DeepSeek’s approach is more open than OpenAI’s, which could draw talent away from the latter.

Marcus pointed out that the reliability of large language models (LLMs), which are central to OpenAI’s offerings, remains a major concern. These models are prone to hallucinations and errors, which limits their utility. He speculated that although OpenAI currently leads the field, other companies or approaches could emerge that provide more effective solutions than LLMs. On its current trajectory, he argued, OpenAI’s valuation could decline sharply, akin to the downfall of WeWork.

When asked about the credibility of DeepSeek’s claims, Marcus was cautiously confident. He said the details DeepSeek has published seem plausible and that independent replication of its results has already begun. If the claims hold up, he believes they would represent a significant advance in AI optimization. Marcus reiterated that the commoditization of AI technology is underway, and noted that he had previously predicted LLMs would become widely accessible, eroding any competitive advantage.

The conversation also touched on the geopolitical implications of these developments. Marcus noted that recent advances show AI becoming accessible to a broader range of players, including smaller countries and companies. He argued that the traditional notion of a U.S. monopoly on AI resources is being challenged as barriers to entry fall, and that this democratization of AI technology could lead to a more competitive global landscape.

Finally, Marcus concluded that achieving true artificial general intelligence (AGI) will require breakthroughs beyond the current capabilities of LLMs. Merely refining LLMs, he stressed, will not yield a sustainable competitive advantage. Instead, he suggested fostering innovation through targeted sponsorship and support for new ideas, rather than relying on existing frameworks like the CHIPS Act or export controls, which may not keep pace with the rapid evolution of AI technology.