Anthropic CEO Dario Amodei said the company’s conflict with the U.S. government centers on ethical limits for AI use, particularly its refusal to support mass surveillance and fully autonomous weapons, despite Pentagon pressure for fewer restrictions. The dispute escalated after reports that Anthropic’s AI aided recent U.S. military actions in Iran, prompting the Pentagon to announce it would phase out the technology, even though it remains integral to current operations.
The Wall Street Journal has reported that recent U.S. military attacks on Iran were carried out with the assistance of artificial intelligence tools developed by Anthropic, the company behind the AI assistant Claude. This comes after the Pentagon announced it would end its partnership with Anthropic due to disagreements over how the AI could be used. The Pentagon has declined to comment on the specifics of the situation, but it is clear that Anthropic’s technology has played a significant role in intelligence and cyber operations.
Anthropic’s CEO, Dario Amodei, spoke in an exclusive interview just hours after the Pentagon’s deadline for reaching a new agreement with the company had passed. Amodei emphasized that the conflict is about “standing up for what’s right,” highlighting the company’s insistence on placing limits on the use of its AI. Specifically, Anthropic opposes mass surveillance of Americans and the deployment of fully autonomous weapons, while the Pentagon has pushed for unrestricted access to the technology.
The central issue in the dispute is who should have the final say over how powerful AI systems are used by the military. Amodei argued that, as the creators of the technology, Anthropic is best positioned to judge what its models can and cannot do reliably. He framed the disagreement as a matter of principle, suggesting that different companies should be able to offer products under their own ethical guidelines, even when working with the government.
The situation escalated when President Trump and Defense Secretary Pete Hegseth publicly called for all government agencies to stop using Anthropic’s AI, just hours before the U.S. strikes on Iran. Amodei interpreted these statements as retaliatory and punitive. He reiterated that Anthropic’s actions have always been motivated by patriotism and a desire to support U.S. national security, but that the company’s “red lines” are drawn to uphold American values and prevent misuse of its technology.
Although the Pentagon has stated it will phase out Anthropic’s technology over the next six months, the AI remains deeply embedded in classified operations for now. The Wall Street Journal’s reporting underscores how extensively Anthropic’s tools are integrated into U.S. military activities. The transition is expected to take time, and the Pentagon has not clarified whether the technology is still being used in operations related to Iran.