The Pentagon has designated AI company Anthropic a supply chain risk, barring the military and its contractors from doing business with the firm, after a leaked internal memo criticized the government and competitor OpenAI. Anthropic’s CEO, Dario Amodei, apologized for the memo’s tone and announced plans to challenge the decision in court, warning that removing the company’s technology could significantly disrupt military operations.
The Pentagon has escalated tensions with the artificial intelligence company Anthropic by officially designating it as a supply chain risk, effectively blacklisting the company from any commercial activity with the U.S. military or its contractors. Under the designation, no supplier, contractor, or partner working with the U.S. military can do business with Anthropic, a significant blow to the company’s operations and reputation. The move, announced by Secretary of Defense Pete Hegseth, is unprecedented: it marks the first time a U.S. company has received such a label.
In response, Anthropic’s CEO, Dario Amodei, issued a public apology for the tone of a leaked internal memo that recently surfaced. The memo, first reported by The Information, claimed that the real reason the Department of Defense and the administration were targeting Anthropic was that the company had not offered “dictator-style praise” to President Trump. The memo also included harsh criticism of Anthropic’s competitor, OpenAI. In the apology, posted on the company’s blog, Amodei said the memo did not reflect his careful or considered views; the statement appeared to be an attempt to deescalate the sensitive standoff with the Pentagon.
Amodei has also indicated that Anthropic plans to challenge the government’s decision in court. In an interview, he argued that the supply chain risk designation is punitive and inappropriate, particularly given Anthropic’s contributions to U.S. national security. He warned that removing Anthropic’s technology from military systems could set back operations by six to twelve months or longer, and emphasized that uniformed military officers have told him the company’s technology is essential to their work.
The military currently uses Anthropic’s AI product, Claude, in operations such as identifying targets and developing simulations, reportedly including in sensitive regions such as Iran. President Trump recently announced on social media that Claude would be phased out of military use over six months. However, experts and insiders suggest that disentangling Claude from military systems could take much longer, given how deeply it is integrated into current operations.
This situation highlights the growing tension between the U.S. government and leading AI companies over issues of security, loyalty, and technological dependence. The outcome of Anthropic’s legal challenge could set a significant precedent for how the government interacts with domestic technology firms in the future. The story continues to develop as both sides prepare for a potential court battle and the military begins the complex process of phasing out Anthropic’s technology.