Who's responsible for AI's military mistakes? | DW News

The DW News video discusses the growing use of AI in military operations, focusing on a dispute between Anthropic and the Pentagon over the Pentagon's use of the company's AI chatbot Claude in the Iran conflict, and the questions this raises about oversight and accountability for AI-driven decisions. Experts warn that while AI can process vast amounts of intelligence and suggest targets, the lack of transparency and of clear legal responsibility for mistakes or civilian casualties creates significant ethical and legal challenges.

A recent dispute in Washington highlights the growing role of artificial intelligence (AI) in modern warfare, particularly in operations related to the Iran conflict. The tech company Anthropic is taking legal action against the Pentagon after being labeled a national security risk; at the center of the controversy is its AI chatbot, Claude, which reports indicate the Pentagon has already used to analyze data during military operations against Iran. The case has sparked debate over how the US military should integrate powerful AI systems into its operations, with supporters emphasizing AI's ability to process vast amounts of intelligence and critics raising concerns about oversight and accountability.

Craig Jones, a military targeting expert at Newcastle University, explains that AI is currently used in three main ways in the conflict: intelligence analysis, target selection, and wargaming scenarios. In intelligence analysis, AI processes enormous amounts of data from sources such as satellite imagery, drone footage, and military databases to identify potential targets, including tracking individuals, mobile targets, and missile launchers by analyzing patterns of life and communications within Iran.

For target selection, AI systems recommend which military targets to prioritize based on the data they process. Jones notes, for example, that Israel has used AI to generate hundreds of potential targets per day in the Gaza conflict, including military installations and missile launch sites. While there is technically a human in the loop, meaning a person makes the final decision, AI's influence over what gets targeted is significant, raising concerns about the extent of genuine human oversight.

The ethical and legal implications of AI-driven warfare are complex and unresolved. Under current international and domestic law, the military commander is held responsible for decisions that result in mistakes or civilian casualties. However, the opacity of AI algorithms complicates the attribution of responsibility: even AI developers may not fully understand how their systems reach decisions. This ambiguity has prompted calls for caution and regulation, but major military powers such as the US, Israel, and China continue to advance AI integration in warfare despite these unresolved issues.

Regarding Iran's capabilities, Jones suggests that much of Iran's advanced military technology and leadership has been targeted and degraded by US and Israeli operations. He notes that there is little public information about Iran's use of AI in warfare, and that any such capabilities have likely been significantly diminished. The conversation underscores the rapid evolution of AI in military contexts and the urgent need for clear ethical and legal frameworks to address the challenges it presents.