Why I don’t think AGI is right around the corner

The speaker argues that despite the impressive capabilities of current AI, true Artificial General Intelligence (AGI) is not imminent because of fundamental limitations in large language models, chiefly their lack of continual learning and adaptability. He contends that significant breakthroughs, particularly in continual learning, are needed before AI can perform complex, human-like work autonomously; he estimates this milestone around 2032 and stresses the importance of balanced expectations for AI's future impact.

In this video, the speaker shares his perspective on why Artificial General Intelligence (AGI) is not imminent, despite the excitement and varying predictions about its arrival. He acknowledges the impressive capabilities of current large language models (LLMs) but points out their fundamental limitations, particularly their inability to learn and improve continuously like humans do. This lack of continual learning means that while LLMs can perform many tasks moderately well, they cannot adapt or build deep contextual understanding over time, which is crucial for truly transformative economic impact and human-like labor.

The speaker draws on his personal experience using LLMs for tasks such as rewriting transcripts, identifying clips, and co-writing essays. He notes that these models perform at only about a 5/10 level even on such straightforward tasks and do not improve with feedback the way humans do. Unlike human workers, who learn from mistakes and refine their skills through practice and feedback, LLMs rely on abilities fixed at training time, with only limited fine-tuning possible afterward. This fundamental difference limits their usefulness in complex, real-world workflows where adaptability and ongoing learning are essential.

He also discusses the challenges of developing AI agents capable of complex, multi-step tasks, such as operating a computer or doing taxes end-to-end. These tasks require long time horizons, multimodal data processing, and extensive training data, all of which are currently lacking. The speaker is skeptical of optimistic forecasts that such agents will be reliable within the next few years, citing slow progress in related areas and the difficulty of creating effective reinforcement learning environments for these tasks. He believes computer use is still in its early stages, comparable to the GPT-2 era of language models, and that significant breakthroughs are needed before AI can handle such responsibilities autonomously.

Looking ahead, the speaker is cautiously optimistic about the long-term future of AI. He predicts that once continual learning is effectively integrated into AI systems, there could be a rapid and broad deployment of intelligent agents capable of learning on the job and improving over time, potentially leading to a form of intelligence explosion. However, he estimates this milestone arriving around 2032, emphasizing that while the timeline is uncertain, reaching human-like adaptability and learning in AI will take several more years of research and development.

Finally, the speaker reflects on the broader implications of AI progress, noting that the current rapid scaling of compute and data cannot continue indefinitely. Future advances will rely more on algorithmic innovation, which is likely to slow down as the easiest improvements are exhausted. This suggests that while transformative AI might not arrive imminently, the coming decades could still see profound changes driven by AI. He encourages a balanced view that prepares for both the challenges and opportunities ahead, and invites viewers to follow his ongoing analysis and discussions on his blog and podcast.