The DW News report highlights how AI-generated videos are being used in Hungary’s election campaign, particularly by Prime Minister Viktor Orbán, to spread fear and misinformation, raising concerns about AI’s impact on democracy and public trust. Experts and EU officials warn that AI-driven propaganda is difficult to regulate and can deeply polarize voters, and they stress the urgent need for transparency and stronger safeguards.
The report examines the growing risks posed by artificial intelligence (AI) in political campaigns, focusing on Hungary’s current election. Incumbent Prime Minister Viktor Orbán, facing his toughest challenge in 16 years, is using AI-generated videos to stoke fears among voters, particularly around the claim that the European Union (EU) wants to drag Hungary into the war in Ukraine. One such video, depicting a weeping child and a father’s execution, was created with AI and aired without clear disclaimers, illustrating how AI can blur the line between reality and fiction in political messaging.
The report highlights how Orbán’s campaign frames the election as a choice between war and peace, leveraging AI-generated content to reinforce his anti-EU and anti-Ukraine rhetoric. Meanwhile, opposition leader Péter Magyar and his TISZA party are gaining ground, promising improvements in public services and a more constructive relationship with the EU and NATO, though they are careful not to appear as Brussels’ puppets. Polls indicate a strong desire for change among Hungarians, with nearly half wanting a new government, largely due to economic stagnation and neglected public services.
DW’s correspondent in Brussels notes that the EU is closely watching the Hungarian election but is trying to avoid direct confrontation with Orbán so as not to fuel his anti-EU narrative. While Orbán claims international support and touts Hungary’s relationships with Russia and China, these claims draw scrutiny, especially as domestic issues such as the cost of living and healthcare remain voters’ top concerns. Whether AI-generated propaganda actually sways voters is uncertain, but its potential to polarize the public and undermine trust in media is clear.
Samuel Woolley, chair of disinformation studies at the University of Pittsburgh, explains that AI has supercharged political misinformation, making it cheaper, faster, and more targeted than ever before. He warns that deepfakes and generative AI content can microtarget specific groups, amplifying propaganda’s emotional impact and potentially influencing election outcomes. Woolley cites examples from other European elections, where AI-generated videos have been used to mislead voters or manipulate public sentiment.
The report concludes by discussing the challenges regulators face in keeping up with AI-driven political content. While the EU has introduced measures like the AI Act and Digital Services Act, enforcement is difficult, and loopholes remain. Woolley emphasizes the need for greater transparency, algorithmic audits, and technological solutions such as watermarking to help identify AI-generated content. He also notes that as AI technology advances, it becomes increasingly difficult for ordinary viewers to spot fakes, underscoring the urgency for robust regulation and public awareness.