Google's AI Found An Exploit On Its Own

The video explains a critical use-after-free vulnerability in Google Chrome's ANGLE component, illustrating how improper memory management can lead to security risks, and highlights that this bug was discovered by Google's AI tool, Big Sleep, showcasing AI's emerging role in vulnerability detection. It also discusses the challenges of AI-assisted security research, such as false positives and large codebases, while encouraging viewers to build foundational low-level programming skills to better understand and address such vulnerabilities.

The video discusses a critical security vulnerability, CVE-2025-9478, found in Google Chrome's ANGLE component, the graphics layer Chrome uses to translate WebGL rendering calls for the GPU. The vulnerability is a use-after-free bug, a type of memory corruption where a program continues to use memory after it has been freed, leading to potential crashes or exploitation. The presenter explains the concept of use-after-free with a simple example involving two structures, "cat" and "dog," demonstrating how type confusion can occur when freed memory is reused improperly, potentially allowing attackers to leak memory or cause crashes.
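The video's exact code is not reproduced here, but the cat/dog type-confusion idea can be sketched in a few lines of C++. The struct names, fields, and layout below are illustrative assumptions, not the presenter's actual example or anything from ANGLE itself:

```cpp
// Minimal use-after-free / type-confusion sketch (illustrative only).
#include <cstdio>
#include <cstdint>

struct Cat {
    void (*speak)();   // function pointer: a dangerous field to reinterpret
    uint64_t lives;
};

struct Dog {
    uint64_t id;       // overlaps Cat::speak if the allocator reuses the chunk
    uint64_t good_boy;
};

static void meow() { puts("meow"); }

int main() {
    Cat* cat = new Cat{meow, 9};
    delete cat;                        // cat is now a dangling pointer

    // A same-sized allocation often gets the chunk that was just freed,
    // so dog may sit in the memory cat still points to.
    Dog* dog = new Dog{0x4141414141414141, 1};

    // Use-after-free: the program still treats the freed memory as a Cat.
    // If the chunk was reused, cat->speak now reads Dog::id, and calling it
    // jumps to whatever that value is -- here it would simply crash.
    cat->speak();                      // undefined behavior

    delete dog;
    return 0;
}
```

The point of the example is that the freed object's type information is gone: whatever is allocated into the recycled memory gets interpreted through the old pointer's type, which is exactly the confusion an attacker tries to engineer.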

The presenter walks through a basic code example illustrating how the vulnerability arises due to missing checks after deleting objects. By creating and deleting objects in a specific sequence, the program mistakenly treats a freed pointer as still valid, leading to a crash when it tries to access invalid memory. This example helps viewers understand the mechanics behind use-after-free bugs and why they are challenging to detect and exploit. The video emphasizes the importance of proper memory management, such as nullifying pointers after deletion, to prevent such vulnerabilities.
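As a rough illustration of the defensive pattern described above, the sketch below nulls a pointer after `delete` and checks it before use, so a stale reference fails safely instead of dereferencing freed memory. The names are hypothetical and not taken from Chrome or the video:

```cpp
// Sketch: nullify after delete, then guard before use.
#include <cstdio>

struct Resource {
    int value = 42;
};

int main() {
    Resource* res = new Resource();

    // ... the object is no longer needed ...
    delete res;
    res = nullptr;   // without this line, res is left dangling

    // The missing-check bug: later code assumes the pointer is still valid.
    // Because the pointer was nulled, the guard catches the stale reference.
    if (res != nullptr) {
        printf("value: %d\n", res->value);
    } else {
        puts("resource already released; skipping use");
    }
    return 0;
}
```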

A significant highlight of the video is that this particular Chrome vulnerability was discovered by Google's internal AI tool, Big Sleep, developed through a collaboration between Google DeepMind and Project Zero, Google's elite security research team. This marks a notable advancement in security research, showcasing how AI can assist in identifying complex bugs that are difficult for humans to find. The presenter notes that AI-driven vulnerability discovery is likely to become more prevalent, although the technology is still evolving and has limitations.

The video also touches on the challenges of using AI for security research, particularly the difficulty of fitting large codebases into a model's context window and the high rate of false positives. The presenter references other research, such as Sean Heelan's work on AI-assisted vulnerability discovery in the Linux kernel, to illustrate that while AI can generate many potential bug reports, only a small fraction are valid. This low signal-to-noise ratio means human researchers still need to carefully triage AI findings, making the process resource-intensive.

Finally, the presenter encourages viewers interested in low-level programming and security research to deepen their understanding of computer fundamentals by learning languages like C and assembly. He promotes his own educational platform, Low Level Academy, which offers courses designed to build foundational skills necessary for understanding and exploiting vulnerabilities like use-after-free. The video concludes by emphasizing the growing role of AI in security research and inviting viewers to engage with the content and explore further learning opportunities.