Russia and Iran use AI to target US election | BBC News

The BBC News video discusses the rising threat of disinformation in U.S. elections, particularly the use of generative AI by foreign actors such as Russia, Iran, and China, and highlights California's new law against election-related deepfakes. Experts emphasize the difficulty of distinguishing real from fake content, the need for critical media skills among citizens, and the potential of AI tools and chatbots to combat misinformation and engage with conspiracy theorists.

The report opens with recent legislation in California, signed by Governor Gavin Newsom, that makes it illegal to create and publish deepfakes related to elections. The law is seen as a potential model for other states; under it, social media companies will be required to identify and remove deceptive material. The video emphasizes how easily AI-generated content can mislead the public and manipulate emotions.

The video features insights from the Microsoft Threat Analysis Center, which monitors foreign interference in U.S. elections. Analysts have detected attempts by Russia, Iran, and China to influence the electoral process, marking the first time all three nations have been observed engaging in such activities simultaneously. The center’s work involves assessing and disrupting cyber-enabled influence threats, with a focus on how these foreign actors are adapting their strategies in response to the changing political landscape in the U.S.

Experts in the video express concern that, as AI technology advances, it is becoming increasingly difficult to distinguish fact from fiction. They discuss watermarking AI-generated content as a way to flag fake material, but acknowledge that this approach may only be a temporary fix. The conversation also touches on the need for citizens to develop critical media skills to navigate the complex information landscape and recognize manipulation.

The video introduces Dr. Christian Schreit, a researcher at the University of Oxford who is working on AI tools to detect deepfakes. He explains that while AI can help identify manipulated content, accurately distinguishing real images from fake ones remains challenging. The discussion highlights the importance of AI systems explaining their assessments, as well as the need for ongoing research to improve detection methods.

Finally, the video explores AI chatbots designed to engage with conspiracy theorists and debunk their beliefs. Dr. Thomas Costello of the University of Washington describes an experiment in which a chatbot reduced participants' belief in conspiracy theories by providing tailored factual information. The conversation raises questions about whether information alone can change deeply held beliefs, and about the role AI might play in addressing misinformation and disinformation in society.