GPT-5 just caught them *all* (Grok 4.20 and Gemini 3.0)

The video covers the latest AI developments, including the rivalry between Elon Musk’s Grok and OpenAI’s GPT-5, which recently showcased impressive problem-solving by completing Pokémon Red, alongside updates on Google’s Gemini models and AI-focused investments by Leopold Aschenbrenner. It also highlights key industry shifts like Igor Babushkin’s move to AI safety research and OpenAI’s gold-medal performance at the International Olympiad in Informatics, emphasizing both rapid technological progress and growing attention to ethical AI development.

The video begins with the host returning from the Ai4 conference in Las Vegas, sharing updates on the latest developments in AI. A notable highlight is the ongoing rivalry between Elon Musk and Sam Altman over the prominence of OpenAI’s app on Apple’s App Store charts compared to xAI’s Grok. Musk promises the release of Grok 4.20 soon, aiming for it to take the top spot. There are also rumors about Google’s Gemini 3.0, but no concrete evidence supports an imminent release. Meanwhile, Google continues to ship new models, including the Imagen 4 image generator and the compact 270-million-parameter Gemma 3 270M.

A significant focus is on GPT-5, which recently completed Pokémon Red with remarkable efficiency, outperforming previous models such as o3. The host highlights the trend of using games like Pokémon Red as benchmarks for AI progress, noting that other models, including Claude and Gemini, have also tackled the game. This showcases the rapid advancement of AI capabilities and growing sophistication in problem-solving and strategic planning.

The video also covers Leopold Aschenbrenner, a young former OpenAI researcher who has launched an AI-focused hedge fund called Situational Awareness. Managing over $1.5 billion, the fund has achieved a 47% return in the first half of the year by investing in companies benefiting from AI advancements, such as semiconductor firms and startups like Anthropic. Aschenbrenner emphasizes the concept of recursive self-improvement in AI, where AI systems could eventually improve themselves faster than humans can, potentially leading to an intelligence explosion—a topic that generates both excitement and concern.

Another key update is the departure of Igor Babushkin, a founding member of xAI, who is shifting his focus to AI safety research. Babushkin reflects on the intense early days of xAI, working closely with Elon Musk to develop the company’s AI models. He expresses a strong commitment to ensuring AI is developed safely and ethically, motivated by concerns about superintelligence and its impact on future generations. Babushkin’s move underscores a broader trend of AI pioneers prioritizing safety and ethical considerations alongside technological progress.

Finally, the video highlights OpenAI’s recent gold-medal performance at the International Olympiad in Informatics (IOI), demonstrating the model’s advanced reasoning and problem-solving skills. Although this accomplishment is somewhat overshadowed by other AI milestones, it signifies substantial progress in AI’s ability to tackle complex computational challenges. The host concludes by expressing optimism about the future of AI, praising the dedication of researchers working on safety, and previewing upcoming interviews with AI insiders that will provide deeper insights into the field.