AI revenue isn't there and might never come, says NYU professor Gary Marcus

NYU professor Gary Marcus expressed skepticism about the revenue potential of AI, highlighting its current unreliability due to issues like hallucinations and errors, which hinder practical applications. He raised concerns about the direction of AI development, the financial sustainability of AI companies, and the ethical implications of increased surveillance practices.

In a recent discussion, NYU professor Gary Marcus expressed skepticism about the current and future revenue potential of artificial intelligence (AI). Despite optimistic claims from industry leaders such as Microsoft’s Satya Nadella that AI could boost GDP by 10%, Marcus argued, the AI technologies available today are not reliable enough to deliver such gains. He pointed out that issues like hallucinations and errors in AI outputs remain unresolved, which undermines the technology’s effectiveness in practical applications.

Marcus elaborated on the limitations of current AI models, such as Grok 3, which, despite being significantly larger and more advanced than its predecessor, still exhibits the same fundamental problems. He emphasized that while these models may seem impressive at first glance, deeper scrutiny often reveals subtle errors that can have serious consequences for businesses relying on them. This lack of reliability is a major barrier preventing companies from fully embracing AI technologies.

The conversation also touched on the reasoning capabilities of AI systems, with Marcus arguing that the term “reasoning” is often overstated. He explained that these models do not reason in the traditional sense but instead rely heavily on patterns in their training data; when pushed beyond familiar contexts, they tend to make mistakes, particularly in real-world applications. Marcus believes that genuine advances in reasoning, potentially drawing on classical AI techniques, are necessary before AI can be considered dependable.

Marcus raised concerns about the direction AI development is taking, suggesting that the endgame may lead to increased surveillance. He pointed out that companies like OpenAI are sitting on vast amounts of personal and sensitive data, which could push them toward surveillance practices. He argued that the initial expectations for AI to replace human workers and achieve general intelligence are not materializing, primarily due to the technology’s current limitations.

Finally, Marcus discussed the financial position of AI companies, particularly OpenAI, which is projected to generate significant revenue but also faces enormous operational costs. He noted that despite the high revenue figures, the economics of running AI models are challenging, with substantial losses expected. This financial strain raises questions about the sustainability of AI ventures and whether they can achieve the profitability that investors anticipate. Overall, Marcus’s insights paint a cautious picture of the AI landscape, emphasizing the need for reliability and ethical considerations in its development.