In the video, Carl challenges the widespread belief that AI models will continuously improve with each new generation, highlighting technical limitations, software complexities, and instances where newer models like GPT-5 have not outperformed their predecessors. He warns that this misconception fuels unrealistic investor expectations and calls for a more critical and nuanced understanding of AI progress beyond industry hype.
In this video, Carl, a software professional with over 35 years of experience, discusses what he believes is the biggest lie told by the AI industry: the claim that AI models will only get better with each new generation. He highlights how this narrative is pervasive and often repeated uncritically, even by well-known figures like Hank Green. Carl argues that this lie underpins the massive investment bubble in AI, as investors are led to believe that AI progress is inevitable and exponential, which justifies the high valuations and promises of AI replacing large portions of the workforce.
Carl points to the release of GPT-5 as evidence against this claim. Despite the hype, GPT-5 has not been unequivocally better than GPT-4; on some benchmarks it performs worse, and OpenAI even had to restore access to GPT-4o after user backlash, which undermines the idea that each new model simply supersedes the last. Carl emphasizes that the AI industry needs exponential improvement to sustain investor confidence, but actual progress is uneven and sometimes regressive.
He explains that unlike hardware, which has consistently improved in speed and efficiency over decades, AI models are software, and software does not inherently improve with each iteration. Software development involves trade-offs, bugs, and complexities that can cause newer versions to be slower or less efficient. Carl uses examples from well-known software like Windows and Adobe products to illustrate that software updates often introduce new problems or fail to deliver clear improvements, challenging the assumption that AI software will naturally get better over time.
Carl also discusses technical challenges specific to AI, such as “negative transfer,” where training on one task degrades performance on another, and “catastrophic forgetting,” where fine-tuning on new data erases previously learned capabilities. These effects make it difficult to build models that are uniformly better across all tasks: companies can highlight gains on selected benchmarks, but those gains do not necessarily reflect overall intelligence or capability. OpenAI’s approach of routing requests to different models to mask regressions is described as a temporary fix rather than a solution.
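To make “catastrophic forgetting” concrete, here is a deliberately tiny numpy sketch, not anything from the video or from how production models are trained: a single linear model is fit on one task, then fine-tuned on a conflicting task, and its performance on the first task collapses. All names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# One shared linear model, two incompatible "tasks":
# task A wants y = +2x, task B wants y = -2x.
x = rng.normal(size=(200, 1))
y_a = 2.0 * x
y_b = -2.0 * x

def train(w, x, y, steps=500, lr=0.05):
    """Plain gradient descent on mean squared error."""
    for _ in range(steps):
        grad = 2.0 * x.T @ (x @ w - y) / len(x)
        w = w - lr * grad
    return w

def mse(w, x, y):
    return float(np.mean((x @ w - y) ** 2))

w = np.zeros((1, 1))

# Phase 1: the model learns task A almost perfectly.
w = train(w, x, y_a)
print(f"after task A: task A MSE = {mse(w, x, y_a):.4f}")

# Phase 2: fine-tuning on task B alone drags the weight toward -2,
# destroying task A performance -- catastrophic forgetting in miniature.
w = train(w, x, y_b)
print(f"after task B: task A MSE = {mse(w, x, y_a):.4f}, "
      f"task B MSE = {mse(w, x, y_b):.4f}")
```

The toy is degenerate by design (one weight cannot serve both tasks), but the same tension appears at scale: optimizing a large model for new capabilities can silently regress old ones, which is why “strictly better across the board” is hard to guarantee.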
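The routing idea Carl criticizes can likewise be illustrated with a minimal, hypothetical sketch. The model names and the heuristic below are invented for illustration and say nothing about how OpenAI’s actual router works:

```python
from dataclasses import dataclass

@dataclass
class Request:
    prompt: str

def looks_hard(req: Request) -> bool:
    # Crude stand-in heuristic: long prompts or explicit reasoning
    # cues get sent to the expensive model.
    cues = ("prove", "step by step", "derive")
    text = req.prompt.lower()
    return len(req.prompt) > 500 or any(c in text for c in cues)

def route(req: Request) -> str:
    """Pick a backend model per request; "fast-model" and
    "strong-model" are hypothetical identifiers, not real APIs."""
    return "strong-model" if looks_hard(req) else "fast-model"

print(route(Request("What is the capital of France?")))     # fast-model
print(route(Request("Prove that sqrt(2) is irrational.")))  # strong-model
```

The point of the sketch is that a router only chooses among existing models per request; it does not make any individual model better, which is why Carl calls it a mask for regressions rather than a fix.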
In conclusion, Carl urges viewers not to accept or repeat the lie that AI models will always improve with each generation. AI is software, subject to the same limitations and complexities as any other software, and treating its progress as guaranteed leads to unrealistic expectations and, in his view, a worse internet. He encourages critical thinking and skepticism toward industry claims, especially those that serve commercial interests, and calls for a more nuanced understanding of AI development.