Android malware that acts like a person and AI agents that act like malware

The podcast episode explores how attackers are weaponizing AI agents and building malware that mimics human behavior, highlighting emerging threats such as AI-driven token theft, human-like banking Trojans, and sophisticated financial fraud schemes. The experts emphasize the critical need for robust AI governance, multi-factor authentication, and layered security measures, discuss the growing importance of bug bounty programs, and foresee a future where AI-powered automated security testing complements, but does not replace, human expertise in defending against evolving cyber threats.

This episode of Security Intelligence, hosted by Matt Kosinski with experts Chris Thomas and Sridhar Muppidi, delves into emerging cybersecurity challenges at the intersection of AI and malware. They discuss recent developments in which attackers weaponize AI agents, such as the CoPhish technique, which abuses Microsoft Copilot Studio to steal OAuth tokens, and agent session smuggling, in which a malicious AI agent covertly injects harmful instructions into another agent's session. The experts emphasize that attackers will leverage any available tool, including AI, to experiment until they find effective attack methods. They argue that AI agents must be tightly scoped and controlled so they cannot be manipulated through social engineering, and that agents need identity and authentication controls just as human users do.
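To make that scoping argument concrete, here is a minimal sketch of deny-by-default tool gating for an agent. Everything in it (AgentIdentity, TOOL_SCOPES, invoke_tool, the scope strings) is a hypothetical illustration, not an API from Copilot Studio or any product discussed in the episode:

```python
# Minimal sketch: least-privilege tool gating for an AI agent.
# All names here are illustrative, not from any real agent framework.
from dataclasses import dataclass, field


@dataclass
class AgentIdentity:
    """An agent gets its own identity and explicit grants,
    just as a human user gets an OAuth token with consented scopes."""
    agent_id: str
    granted_scopes: set = field(default_factory=set)


# Each tool declares the scope it requires.
TOOL_SCOPES = {
    "read_calendar": "calendar.read",
    "send_email": "mail.send",
    "transfer_funds": "payments.write",
}


def invoke_tool(agent: AgentIdentity, tool: str, *args):
    required = TOOL_SCOPES.get(tool)
    if required is None or required not in agent.granted_scopes:
        # Deny by default: a prompt-injected request for an
        # ungranted tool fails here instead of executing.
        raise PermissionError(f"{agent.agent_id} may not call {tool!r}")
    print(f"{agent.agent_id} invoking {tool}{args}")


agent = AgentIdentity("summarizer-01", granted_scopes={"calendar.read"})
invoke_tool(agent, "read_calendar")  # allowed: scope was granted
try:
    invoke_tool(agent, "transfer_funds", 500)  # blocked: never granted
except PermissionError as exc:
    print("Blocked:", exc)
```

The point mirrors the hosts' advice: an agent that was never granted a scope cannot be socially engineered into using it, because the control sits outside the model.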

The conversation then shifts to the AI governance gap, where 72% of businesses have integrated AI into operations but only 23.8% have robust governance frameworks. Sridhar explains that this gap is a recurring pattern seen with new technologies, driven by the pressure to innovate quickly while risk management lags behind. Chris adds that this governance lag creates vulnerabilities that attackers exploit, reinforcing the need for organizations to implement interim security measures like network design and authentication until governance catches up. Both experts agree that security must become a shared responsibility across organizations, moving from rigid checkpoints to flexible guardrails that enable innovation while maintaining safety.
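The checkpoint-versus-guardrail distinction the experts draw can be sketched in a few lines; the policy names and limits below are invented purely for illustration:

```python
# Hypothetical sketch: a rigid checkpoint blocks all work pending review,
# while a guardrail lets low-risk changes proceed and escalates outliers.
def checkpoint(change: dict, approved: bool) -> str:
    if not approved:
        return "blocked: awaiting governance review"  # innovation stalls
    return "deployed"


def guardrail(change: dict, limits: dict) -> str:
    exceeded = [k for k, cap in limits.items() if change.get(k, 0) > cap]
    if exceeded:
        return f"escalated for review: {exceeded}"  # only outliers stop
    return "deployed"


limits = {"data_sensitivity": 2, "external_integrations": 1}
print(guardrail({"data_sensitivity": 1, "external_integrations": 0}, limits))
# -> deployed (stays within the rails, no manual gate)
print(guardrail({"data_sensitivity": 3, "external_integrations": 0}, limits))
# -> escalated for review: ['data_sensitivity']
```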

Next, the panel discusses a newly discovered banking Trojan called Herodotus, which evades behavioral detection by mimicking human typing patterns through randomized delays between keystrokes. Chris and Sridhar express surprise that such a simple evasion technique has only recently appeared, noting that behavioral detection systems typically use multiple factors beyond keystroke timing. They predict an ongoing arms race where attackers will increasingly humanize malware to bypass sophisticated detection methods. The experts also stress the importance of multi-factor authentication (MFA) and behavioral analytics to strengthen defenses, acknowledging that while MFA is not foolproof, it raises the bar for attackers.
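As a rough illustration of why keystroke timing alone is a weak signal, here is a minimal Python sketch of a multi-factor behavioral score; the weights, threshold, and signal names are invented for this example, and real systems use far richer models:

```python
# Hypothetical multi-factor behavioral scoring sketch.
import statistics


def machine_like_timing(delays_ms: list) -> float:
    """Return 1.0 if inter-keystroke delays look automated (near-zero
    variance), else 0.0. Herodotus-style randomized delays defeat
    exactly this one signal -- and only this one."""
    if len(delays_ms) < 2:
        return 0.5  # not enough data to judge
    return 1.0 if statistics.stdev(delays_ms) < 5.0 else 0.0


def risk_score(delays_ms, new_device: bool, new_payee: bool,
               odd_hours: bool) -> float:
    # Blend independent signals; randomizing typing delays lowers only
    # the first term, so suspicious context still raises the score.
    return (0.30 * machine_like_timing(delays_ms)
            + 0.30 * new_device
            + 0.25 * new_payee
            + 0.15 * odd_hours)


# Humanized (randomized) delays, but a new device and a new payee:
score = risk_score([180.0, 95.0, 240.0, 130.0],
                   new_device=True, new_payee=True, odd_hours=False)
print(score)  # 0.55 -- high enough to trigger step-up authentication
```

This is the layering the experts describe: defeating one detector term still leaves the others, and MFA can be demanded whenever the blended score crosses a threshold.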

The episode also covers a large-scale smishing campaign uncovered by Fortra in which attackers steal brokerage account credentials, then use the compromised accounts to buy low-liquidity stocks, artificially inflating their prices before cashing out. Sridhar points out that this is essentially password theft repurposed for financial market manipulation, reflecting attackers' preference for high-return, low-effort strategies. Chris notes that while similar attacks have targeted crypto accounts, this tactic adds complexity by exploiting stock markets. Both experts reiterate the critical role of MFA and behavioral risk evaluation in mitigating such threats, emphasizing that layered security approaches are necessary.
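A toy order-book walk (with entirely invented prices and sizes) shows why thin liquidity matters to these attackers: the same buy order barely moves a deep book but jumps several price levels in a thin one:

```python
# Toy illustration of price impact in thin vs. deep order books.
# All numbers are invented for the example.
def fill_through_book(asks, cash):
    """Spend `cash` walking up the ask side's (price, shares) levels;
    return the last price traded, i.e. the new market price."""
    last_price = asks[0][0]
    for price, shares in asks:
        affordable = min(shares, int(cash // price))  # whole shares only
        if affordable == 0:
            break
        cash -= affordable * price
        last_price = price
    return last_price


thin_book = [(2.00, 500), (2.50, 400), (3.20, 300), (4.00, 200)]
deep_book = [(2.00, 50_000), (2.01, 60_000), (2.02, 80_000)]

# Pushing $2,500 of buys from a hijacked account through each book:
print(fill_through_book(thin_book, 2_500))  # 3.2  (price up 60%)
print(fill_through_book(deep_book, 2_500))  # 2.0  (barely moves)
```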

Finally, the discussion turns to the rising popularity and payouts of bug bounty programs, with HackerOne paying out a record $81 million over the past year. Chris explains that the highest rewards go to rare, complex vulnerabilities of the kind state-sponsored actors might exploit, while most bugs found by AI tools yield smaller payouts. Sridhar compares bug bounties to ethical hacking careers that offer a lucrative, legal alternative to criminal activity, and views them as cost-effective insurance against costly breaches. Both agree that bug bounties are an important but not standalone security measure. Looking ahead, they foresee AI-driven automated red teaming and blue teaming evolving to keep pace with attackers, but emphasize that human creativity and expertise will remain essential for uncovering complex vulnerabilities and chaining exploits.