AI-powered cyberattacks: Adaptive Security CEO Brian Long on the dangers of AI

In a CNBC segment, Brian Long, CEO of Adaptive Security, discussed the rising threat of AI-powered cyberattacks, particularly those built on deepfake technology that can convincingly replicate a person's voice and enable impersonation and fraud. With deepfake incidents surging, he stressed the urgent need for companies to put security measures in place and train employees to recognize these attacks, underscoring how vulnerable corporate environments have become.

In a recent segment on CNBC’s Squawk Box, Brian Long, CEO of Adaptive Security, discussed the growing threat of AI-powered cyberattacks, particularly through deepfake technology. Adaptive Security recently secured $43 million in funding, including investment from OpenAI, to enhance its capabilities in simulating AI-driven cyber threats and providing risk assessments for companies. Long demonstrated the alarming potential of deepfake technology by showcasing how easily someone’s voice can be replicated, raising concerns about impersonation and fraud in corporate environments.

Long emphasized that deepfake technology is advancing rapidly, with current voice-mimicking capabilities reaching roughly 85% accuracy. He predicted that within a year this could climb into the mid-90 percent range, making it increasingly difficult to distinguish real communications from fake ones. That poses significant risk, since attackers can harvest publicly available information to build convincing impersonations, potentially leading to financial fraud or data breaches.

The conversation highlighted the importance of security measures, such as agreeing on passcodes for sensitive communications, to mitigate the risk of deepfake attacks. Long advised individuals and companies to be cautious about sharing personal information and to consider deleting voicemail recordings so their voices cannot be captured and cloned for scams. He noted that attackers are more likely to target middle management than high-level executives, since they can more easily obtain the voice samples needed for the impersonation.
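For teams that want to operationalize the passcode idea, a minimal sketch of such a check might look like the following. The function names, example passcode, and storage details are illustrative assumptions for this article, not anything Long or Adaptive Security described.

```python
# Illustrative sketch: before acting on a sensitive voice or video request,
# the recipient asks for a pre-agreed passcode and verifies it against a
# stored hash, rather than trusting the voice on the line.
import hashlib
import hmac
import secrets


def hash_passcode(passcode: str, salt: bytes) -> bytes:
    """Derive a salted hash so the shared passcode is never stored in plain text."""
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 200_000)


def verify_request(claimed_passcode: str, stored_hash: bytes, salt: bytes) -> bool:
    """Constant-time comparison to confirm the caller knows the agreed passcode."""
    candidate = hash_passcode(claimed_passcode, salt)
    return hmac.compare_digest(candidate, stored_hash)


if __name__ == "__main__":
    # Setup step, done in person or over a previously trusted channel.
    salt = secrets.token_bytes(16)
    stored = hash_passcode("blue-harbor-42", salt)  # hypothetical shared passcode

    # Later: a caller claiming to be an executive requests a wire transfer.
    print(verify_request("blue-harbor-42", stored, salt))  # True  -> proceed
    print(verify_request("guessed-code", stored, salt))    # False -> refuse and escalate
```

The key design point is that verification relies on something the real person knows, agreed in advance over a trusted channel, rather than on how the caller sounds.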

Long also shared alarming statistics: deepfake attacks in the U.S. increased roughly 17-fold over the past year, with more than 100,000 incidents reported. He recounted a notable case in which a deepfake impersonating a foreign minister carried on a conversation with a U.S. senator, illustrating how serious the implications of this technology have become. As awareness of these threats grows, more executives are reporting firsthand experiences with such attacks, pointing to rising corporate exposure.

To combat these threats, Adaptive Security simulates potential AI-driven attacks to surface weaknesses within organizations, then trains employees to recognize and respond to them, aiming to reduce risk and strengthen overall security. Long concluded that as deepfake technology continues to evolve, companies must adopt stronger controls and remain vigilant against increasingly sophisticated cyber threats.