Ransomware whack-a-mole, AI agents as insider threats and how to hack a humanoid robot

The podcast episode discusses the ongoing challenges of ransomware, weak identity security, emerging insider threats from AI agents, and the hacking risks facing humanoid robots, emphasizing that attackers are becoming more sophisticated and leveraging AI themselves. The experts stress the need for multipronged defenses, stronger authentication methods, technical safeguards for AI, and robust security measures for robotics to address these evolving threats.

The podcast episode from IBM’s Security Intelligence covers four major cybersecurity topics: the persistent threat of ransomware, the dangers of weak identity security, the emerging risks of AI agents as insider threats, and the vulnerabilities of humanoid robots to voice-based hacking. The panel, consisting of host Matt Kosinski and experts Michelle Alvarez, J.R. Ralph, and Jeff Crume, discusses recent industry news and provides practical takeaways for organizations.

The discussion begins with ransomware, highlighting that despite law enforcement successes in 2025, the volume of ransomware attacks remains steady. The experts liken the situation to a game of “whack-a-mole” or battling a Hydra: taking down one group only leads others to emerge in its place. The panel emphasizes that ransomware groups have become decentralized, resilient, and evasive, making them harder to eradicate. They stress the need for a multipronged defense strategy, including law enforcement, user education, and leveraging AI for defense, since attackers are increasingly using AI to automate and personalize attacks.

Next, the panel examines a case where a single hacker, “Zestics,” breached 50 corporate cloud environments simply by exploiting stolen credentials found on the dark web. This underscores the ongoing issues of password reuse and the lack of multi-factor authentication (MFA). The experts agree that passwords are fundamentally flawed and advocate for the adoption of passkeys and identity-centric security models. They also note that legacy systems often hinder the transition to stronger authentication, and organizations must modernize to close these security gaps.
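The episode itself stays at the policy level, but the mechanics behind the most common MFA upgrade are straightforward. As a minimal sketch, here is the RFC 4226 HOTP algorithm and its RFC 6238 time-based variant, which is what most authenticator apps generate as a second factor (the base32 secret and 30-second interval are the conventional defaults, not anything specified in the episode):

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    # HMAC-SHA1 over the big-endian 8-byte counter
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: low nibble of the last byte picks a 4-byte window
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret_b32: str, interval: int = 30) -> str:
    """RFC 6238 time-based OTP: HOTP with the current time step as counter."""
    key = base64.b32decode(secret_b32)
    return hotp(key, int(time.time()) // interval)

# RFC 4226 test vector: the ASCII key "12345678901234567890" at counter 0
print(hotp(b"12345678901234567890", 0))  # → 755224
```

Even this stronger factor is phishable in real time, which is part of why the panel points past OTPs toward passkeys: WebAuthn credentials are bound to the origin, so a lookalike site cannot replay them.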

The conversation then shifts to the risks posed by AI agents as insider threats. Unlike traditional insiders, AI agents can be overprivileged, act autonomously, and be manipulated through prompt injection attacks. The panel points out that current insider threat programs are mostly focused on human behavior, but organizations now need technical safeguards to govern AI agents’ access and actions. They discuss emerging solutions, such as maintaining a “human anchor” for accountability and implementing technical controls to prevent abuse.
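The panel describes these safeguards only in principle; one minimal way to sketch them in code is a gate that mediates every tool call an agent makes, enforcing an allowlist (least privilege) and routing sensitive actions through an accountable human (the “human anchor”). All names here are hypothetical illustrations, not an API from the episode:

```python
from typing import Any, Callable, Iterable

class AgentToolGate:
    """Mediates an AI agent's tool calls: allowlist plus human approval.

    A hypothetical sketch -- real deployments would also log calls,
    scope credentials per tool, and rate-limit the agent.
    """

    def __init__(
        self,
        allowed_tools: Iterable[str],
        needs_approval: Iterable[str],
        approver: Callable[[str, tuple], bool],
    ) -> None:
        self.allowed_tools = set(allowed_tools)   # least privilege
        self.needs_approval = set(needs_approval) # human-anchor actions
        self.approver = approver                  # the accountable human

    def call(self, tool_name: str, func: Callable, *args: Any) -> Any:
        # Deny anything outside the agent's granted privileges
        if tool_name not in self.allowed_tools:
            raise PermissionError(f"agent may not call {tool_name!r}")
        # Sensitive tools require sign-off from the human anchor
        if tool_name in self.needs_approval and not self.approver(tool_name, args):
            raise PermissionError(f"human anchor denied {tool_name!r}")
        return func(*args)

# Example: the agent may read files and send email, but email needs approval
gate = AgentToolGate(
    allowed_tools={"read_file", "send_email"},
    needs_approval={"send_email"},
    approver=lambda tool, args: False,  # stand-in for a real review step
)
```

The key design point is that the policy lives outside the model: a prompt-injected agent can ask for anything, but the gate, not the prompt, decides what actually executes.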

Finally, the episode explores the hacking of humanoid robots via voice commands, demonstrated at GeekCon 2025. The attack chains traditional IT vulnerabilities with AI-specific weaknesses, and its consequences play out in the physical world, raising concerns about the convergence of cyber and physical security. The experts warn that as robots become more integrated into daily life, they could become “walking vulnerabilities” if not properly secured. They stress the importance of applying principles like least privilege and human oversight to robotics, and caution against rushing these technologies into widespread use without adequate safeguards.
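One root cause of the GeekCon-style attack is that the robot treats any audible voice as an authorized operator. A basic mitigation, sketched below under assumed names (the episode prescribes no specific design), is to require that every actuator command carry a message authentication code from a trusted controller, so raw transcribed speech alone can never drive the hardware:

```python
import hashlib
import hmac

# Hypothetical pre-shared key between the operator console and the robot;
# in practice this would come from a provisioning step, not a constant.
SECRET_KEY = b"operator-robot-shared-key"

def sign_command(command: str, key: bytes = SECRET_KEY) -> str:
    """Operator side: attach an HMAC-SHA256 tag to each command."""
    return hmac.new(key, command.encode(), hashlib.sha256).hexdigest()

def verify_and_execute(command: str, tag: str, key: bytes = SECRET_KEY) -> bool:
    """Robot side: act only on commands whose tag verifies.

    An unauthenticated voice command arrives with no valid tag,
    so it is rejected before reaching the actuator layer.
    """
    if not hmac.compare_digest(sign_command(command, key), tag):
        return False  # drop spoofed or replayed-with-tampering input
    # ... dispatch to motion control only after verification ...
    return True
```

This is deliberately minimal: a production design would also need replay protection (nonces or counters) and per-session keys, but it illustrates the panel's point that the fix is an IT-security discipline (authenticate, least privilege, human oversight) applied to a physical system.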