Mudahar argues that the AI bubble is rapidly worsening: the high cost of third-party AI agent usage and resource-intensive operations is making it hard for companies like Anthropic to turn a profit, exposing an unsustainable financial model. He advocates open-source and local AI as a more sustainable alternative, arguing that the current hype-driven pricing is likely to collapse and force a rethink of how AI services are monetized for long-term accessibility and sustainability.
In this video, Mudahar discusses the rapidly worsening state of the AI bubble, focusing on recent developments with the AI company Anthropic and its product Claude. He highlights a significant source code leak and the company’s crackdown on third-party API token usage, which has caused frustration among users. Anthropic had been offering relatively affordable subscription plans, but the rise of third-party “harnesses” like OpenClaw, which allow AI agents to perform complex tasks on users’ computers, has driven up usage costs dramatically. This unsustainable financial model is causing companies like Anthropic to struggle with profitability.
Mudahar explains how these AI harnesses work by combining large language models (LLMs) with agents that can control a computer’s functions, such as downloading files or searching the internet. He demonstrates running an agent locally on his powerful PC, showing how resource-intensive these operations are. The AI prompts involve processing thousands of tokens, which translates into high computational costs and heavy GPU usage. This local example serves to illustrate why cloud-based AI services face enormous expenses, especially as more users adopt agentic workflows that demand continuous and complex AI interactions.
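The harness pattern described above can be sketched as a simple loop: the model proposes a tool call, the harness executes it on the user's machine, and the result is appended to the conversation that gets re-sent on the next turn. This is a minimal illustration with a mock model standing in for the LLM; the function names and message format are assumptions for the sketch, not OpenClaw's actual internals.

```python
# Minimal sketch of an agent "harness" loop. `mock_llm` stands in for a
# real LLM API call; a real harness would send `messages` to a provider
# and parse a structured tool-call response.
import subprocess

def run_tool(name: str, args: dict) -> str:
    """Execute a tool the agent requested and return its output as text."""
    if name == "shell":
        # The agent can run commands on the user's machine (download files,
        # search the web, etc.) -- this is where heavy usage comes from.
        result = subprocess.run(args["cmd"], shell=True,
                                capture_output=True, text=True, timeout=30)
        return result.stdout + result.stderr
    return f"unknown tool: {name}"

def mock_llm(messages: list[dict]) -> dict:
    """Stand-in for an LLM call. Note that the entire history is re-sent
    every turn, so token usage grows with each step of the task -- the
    cost driver Mudahar describes."""
    if len(messages) == 1:
        return {"tool": "shell", "args": {"cmd": "echo hello from the agent"}}
    return {"done": True, "answer": "task complete"}

def agent_loop(task: str, llm=mock_llm, max_turns: int = 5) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_turns):
        reply = llm(messages)          # full, growing history sent every turn
        if reply.get("done"):
            return reply["answer"]
        output = run_tool(reply["tool"], reply["args"])
        messages.append({"role": "tool", "content": output})
    return "gave up"
```

Because every turn resends the accumulated history, a multi-step task can consume thousands of tokens per step, which is why agentic workloads are so much more expensive than one-off chat prompts.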
The video also touches on the broader implications of this technology trend. Many companies are attempting to replace human workers with AI agents running 24/7, hoping to cut costs. However, the high API usage fees and the imperfect nature of AI outputs—leading to costly mistakes—make this approach financially risky. Mudahar points out that while AI is here to stay, the current hype and pricing models are unsustainable, and the bubble is likely to burst, dragging down many companies and forcing a reevaluation of how AI services are offered and monetized.
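The economics of an always-on agent are easy to estimate. The token rates and per-million-token price below are illustrative placeholders, not Anthropic's actual pricing, but they show how quickly continuous agentic usage adds up:

```python
# Back-of-envelope cost model for a 24/7 agent replacing a human worker.
# All numbers are hypothetical for illustration.

def monthly_cost(tokens_per_turn: int, turns_per_hour: int,
                 price_per_mtok: float, hours_per_day: float = 24.0,
                 days: int = 30) -> float:
    """Estimated monthly API bill in dollars for one continuously running agent."""
    tokens_per_month = tokens_per_turn * turns_per_hour * hours_per_day * days
    return tokens_per_month / 1_000_000 * price_per_mtok

# e.g. an agent that resends a 20k-token context 30 times an hour,
# at a hypothetical $10 per million tokens:
cost = monthly_cost(tokens_per_turn=20_000, turns_per_hour=30, price_per_mtok=10.0)
print(f"${cost:,.0f}/month")  # → $4,320/month
```

At those assumed rates, a single agent costs thousands of dollars per month before accounting for the cost of fixing its mistakes, which is why flat-rate subscriptions underneath this kind of usage are hard to sustain.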
Mudahar advocates for the use of open-source AI models and local AI setups as a more sustainable alternative to relying on expensive cloud services. He demonstrates how running AI locally can provide privacy, security, and cost control, even though it requires significant hardware investment. He mentions emerging open-source projects like Qwen 3.5 and Google’s Gemma 4, which are designed to run efficiently on consumer devices, potentially reducing dependence on centralized AI providers. This shift could democratize AI access and alleviate some of the financial pressures on big tech companies.
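A common way to run such models locally is through a self-hosted inference server like Ollama, which exposes a simple REST API on the user's own machine. The sketch below assumes Ollama is installed and a model has already been pulled; the model name is an example, not a claim about what Mudahar ran.

```python
# Sketch of querying a locally hosted model via Ollama's REST API
# (default endpoint http://localhost:11434). Requires a running Ollama
# server with the named model pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> bytes:
    """Serialize a non-streaming generate request for the local server."""
    return json.dumps({"model": model, "prompt": prompt,
                       "stream": False}).encode()

def ask_local(model: str, prompt: str) -> str:
    """Send the prompt to the local server; no tokens leave the machine,
    and the only recurring cost is electricity and hardware wear."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_local("qwen2.5:7b", "Why might local inference save money?"))
```

The trade-off is exactly the one Mudahar describes: up-front hardware cost and lower model quality in exchange for privacy and freedom from per-token billing.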
In conclusion, Mudahar emphasizes that while AI technology is powerful and useful, the current economic and computational realities reveal a bubble that is bursting faster than expected. The high costs of running AI at scale, driven by agentic applications and heavy usage, are unsustainable for many companies. He encourages viewers to understand these challenges, explore local AI solutions, and be cautious about the hype surrounding AI’s future. Ultimately, he sees this correction as necessary for the technology to mature and become more accessible and sustainable in the long term.