How AI is being used in war in 2026 | DW News

The video examines how AI is transforming modern warfare by enabling rapid target identification, scenario simulation, and real-time intelligence fusion, while also raising ethical concerns due to its use in surveillance and autonomous weapons. It highlights controversies involving major AI companies like Anthropic and OpenAI, emphasizing the urgent need for clear regulations and accountability as the line between government and corporate control blurs.

The video explores the increasing role of artificial intelligence (AI) in modern warfare, focusing on recent US military operations such as Operation Epic Fury in Iran and the capture of Nicolas Maduro in Venezuela. AI has become central to military strategy, functioning like an advanced version of Google Maps for war. It assists in identifying targets by analyzing vast amounts of satellite and drone footage, simulating millions of potential scenarios to predict enemy responses, and fusing intelligence from multiple sources to provide a comprehensive, real-time picture for decision-makers. This has dramatically accelerated military planning and operations.

Beyond the battlefield, AI is also being used for large-scale public surveillance. For example, US Immigration and Customs Enforcement (ICE) employs similar technologies to track immigrants. This overlap between government surveillance programs and private AI companies has sparked global controversy, especially regarding the ethical implications and potential for abuse. The Iran conflict has brought these issues to the forefront, raising questions about who controls and powers the AI systems used in warfare.

The video highlights the role of Anthropic, the US company behind the Claude AI system, which reportedly supported military actions in Iran and Venezuela. Anthropic has set strict boundaries: its AI cannot be used for mass domestic surveillance in the US or to operate fully autonomous weapons without human oversight. These guardrails led to a falling-out with the Trump administration, which subsequently banned all federal agencies from using Anthropic’s technology.

Following Anthropic’s blacklisting, OpenAI quickly signed a major deal with the Department of Defense. However, critics pointed out that OpenAI’s initial safeguards were vague and riddled with potential legal loopholes. This led to a public backlash, with a surge in ChatGPT uninstalls and a migration to Anthropic’s Claude. In response, OpenAI amended its agreement to explicitly ban mass domestic surveillance and unmonitored AI use in autonomous weapons, acknowledging the need for clearer ethical standards.

The video concludes by emphasizing the urgent need for transparent and enforceable regulations on AI in warfare, applying not just to governments but also to the corporations developing these technologies. While proponents argue that AI can reduce human error and collateral damage, the risks are significant, especially when AI systems make mistakes or escalate conflicts, as seen in simulations where AI models frequently resorted to nuclear threats. The blurring line between government and corporate control over AI in warfare underscores the necessity of global standards to ensure public safety and accountability.