Claude is being used in lethal military operations

The video reports that Anthropic’s AI model Claude has been used by the US military in lethal operations, despite a previous ban and the company’s stated ethical concerns about such use. It highlights the growing tension between tech companies seeking to set limits on military applications of AI and the government’s insistence on ultimate control, and calls for a balanced compromise that ensures both security and ethical oversight.

The video discusses the confirmed use of Anthropic’s AI model, Claude, in recent lethal military operations, specifically during the joint US-Israel strike on Iran (codenamed Roaring Lion by Israel and Operation Epic Fury by the US). Multiple reputable sources, including the Wall Street Journal, Axios, and The Guardian, have reported that US Central Command (CENTCOM) used Claude for intelligence assessments, target identification, and simulating battlefield scenarios. This occurred despite a previous directive under the Trump administration that supposedly banned Anthropic’s technology from federal government use. The video emphasizes that Claude is now deeply embedded in military infrastructure, making it difficult to remove quickly.

Anthropic’s public stance on military use of its AI has evolved. Initially, the company stated that it did not want its models used for autonomous weapons. More recently, Anthropic clarified that its main concern is the current unreliability of AI for such purposes, citing risks like friendly fire and civilian casualties. The company’s two “red lines” are: no use of its AI in autonomous weapons (due to those reliability concerns) and no mass domestic surveillance, which it argues would violate fundamental rights. The video highlights AI’s growing power to process and organize vast amounts of personal data, raising new legal and ethical questions about surveillance.

The video also covers the broader context of AI companies’ relationships with the US government. OpenAI, led by Sam Altman, has signed a contract with the Department of Defense for military use of its technology, reportedly with safeguards similar to those Anthropic requested. However, the government has resisted allowing private companies to dictate terms of use, arguing that democratically elected officials—not unelected tech executives—should have ultimate control over military applications. This has led to tensions, with Anthropic facing the threat of being designated a “supply chain risk,” a move that could severely limit its ability to work with government contractors.

Sam Altman’s response to the situation is highlighted as unusually supportive of Anthropic, despite the two companies being direct competitors. He has publicly argued against the supply chain risk designation for Anthropic, calling it an overreach that would harm both the industry and national interests. Altman maintains that while competition is important, the safe development and deployment of AI is a higher priority, and he advocates for fair treatment of all major AI labs. The video suggests that Altman’s stance is sincere rather than a PR move, as he has little to gain from defending a rival.

In conclusion, the video frames the conflict as a complex clash of values and interests between tech companies and the government. While Anthropic’s leadership is portrayed as principled and cautious about the risks of AI, the government insists on retaining ultimate authority over military technology. The video warns against simplistic, black-and-white interpretations, noting that both sides have legitimate concerns. The best outcome, according to the host, would be a compromise that allows Anthropic to continue contributing to national security without excessive government retaliation, keeping the development and use of powerful AI technologies both effective and ethically guided.