AI is Slowing Down! What does this mean? — Gary Marcus and Narrowing Status Games — Follow the Money

In the video, David Shapiro discusses a potential slowdown in artificial intelligence (AI) progress, framing it as a possible deceleration in the rate of acceleration rather than a complete halt. Shapiro explores the implications of this slowdown for job security, societal stability, and the evolving landscape of AI development, and he emphasizes the need to broaden narratives and consider alternative viewpoints in the AI discourse to navigate these changes.

Shapiro emphasizes that while AI is not stalling, the rate of acceleration may be slowing. This slowdown could have positive implications for safety, pushing back the predicted singularity and allowing more time to understand the complexities of human intelligence. Shapiro suggests that a combination of neural connections, electromagnetic waves, and quantum effects may play a role in human consciousness and intelligence, pointing to a broader understanding of intelligence than previously assumed.

The video also delves into the impact of an AI slowdown on job security and stability. Shapiro notes that a slower rate of change in job markets could provide more time for adaptation and the creation of new jobs, potentially extending the existing status quo. However, he acknowledges that reactions to this news may vary: some individuals are ready for AI to replace their jobs, while others prefer the stability of the current employment landscape. Shapiro speculates that significant AI-driven changes may occur around 2027-2030, depending on factors such as the adoption of new technologies like GPT-5 and advances in robotics.

Shapiro reflects on his past prediction of achieving artificial general intelligence (AGI) by September 2024 and the factors behind his revised outlook on AI progress. He highlights the exponentially rising cost of training new AI models as a critical factor that could lead to diminishing returns in AI development sooner than anticipated. Shapiro observes that gains in AI intelligence are becoming harder to achieve due to economic constraints and increasing competition in the field, such as the emergence of models like Claude 3.5 that surpass existing ones.
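To make the diminishing-returns argument concrete, here is a minimal sketch of the kind of scaling intuition Shapiro is gesturing at: if a model's capability grows roughly with the logarithm of training compute while cost grows linearly with compute, each equal capability increment costs roughly an order of magnitude more than the last. All numbers and the `capability`/`cost_usd` functions below are hypothetical illustrations, not figures from the video.

```python
# Illustrative only: toy numbers showing why linear cost growth plus
# logarithmic capability growth produces diminishing returns.
import math


def capability(compute_flops: float) -> float:
    """Toy capability score that grows with the log of training compute."""
    return math.log10(compute_flops)


def cost_usd(compute_flops: float, dollars_per_flop: float = 1e-18) -> float:
    """Toy training cost, assumed linear in compute (hypothetical rate)."""
    return compute_flops * dollars_per_flop


if __name__ == "__main__":
    for flops in (1e23, 1e24, 1e25, 1e26):
        print(
            f"compute={flops:.0e} FLOPs  "
            f"capability={capability(flops):.1f}  "
            f"cost=${cost_usd(flops):,.0f}"
        )
    # Each +1 step in this toy capability score costs ~10x more than the
    # previous one, which is the shape of the "exponentially rising costs"
    # argument summarized above.
```

Under these assumptions the cost of each marginal improvement escalates from hundreds of thousands to hundreds of millions of dollars, which is one way to read Shapiro's claim that economics, not ideas, may set the pace.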

Additionally, Shapiro touches on the concept of echo chambers within the AI community, where differing perspectives and interpretations of AI progress lead to heated debates and conflicts among prominent figures like Gary Marcus, Joscha Bach, and Yann LeCun. He attributes these conflicts to a tightening status game in the AI field, in which individuals vie for social status based on their AI-related narratives. Shapiro suggests that this narrowing of the status game and competition for recognition could contribute to increased vitriol and disagreement among AI commentators.

In conclusion, Shapiro underscores the importance of broadening narratives and considering alternative viewpoints in the AI discourse to navigate the evolving landscape of AI development. He cautions against becoming entrenched in echo chambers and emphasizes the benefits of a more stable societal transition as AI progress potentially slows down. Shapiro remains optimistic about the potential of GPT-5 and robotics to surprise and impact various industries, predicting a future where AI augmentation may replace certain job functions but not disrupt the entire economy.