The Pentagon has ordered the removal of Anthropic’s Claude AI technology from key national security systems over supply chain risk concerns, requiring its elimination from critical defense infrastructure within six months. The move highlights how deeply Claude is integrated into sensitive military operations and the significant challenge the Department of Defense now faces in finding suitable AI replacements.
CBS News obtained an internal Pentagon memo revealing that the Department of Defense (DoD) has ordered the removal of Anthropic’s AI technology, specifically its Claude AI assistant, from key national security systems. The memo, dated March 6, instructs senior leaders and commanders to eliminate Anthropic’s products from critical defense infrastructure within six months. This marks the first public insight into how deeply Anthropic’s AI has been integrated into some of the most sensitive military programs.
The list of affected programs is extensive: national security systems, strategic priorities, nuclear weapons, nuclear command, control, and communications, continuity of government, ballistic missile defense, and warfighting capabilities at high-risk cyber survivability levels. The breadth of this list demonstrates just how embedded Claude has become in U.S. military operations, particularly in areas vital to national defense and security.
Anthropic declined to comment on the memo but has responded by filing two lawsuits against the federal government. The company alleges that the government illegally retaliated by designating Anthropic a “supply chain risk,” a label never before applied to an American company. Anthropic has also sought a court injunction to halt the government’s actions, though one has not yet been granted. Significant financial stakes are involved for both the company and the government.
A source familiar with Claude’s military usage said the AI’s primary function within the military has been to process and analyze large volumes of data. Claude assists by sifting through intelligence reports, synthesizing patterns, summarizing findings, and surfacing relevant information far faster than human analysts could. Despite a government-wide ban on Anthropic products, sources confirmed that the Pentagon has continued to use Claude during the ongoing conflict in Iran, underscoring the AI’s operational importance.
The Pentagon now faces the complex challenge of disentangling Claude from its infrastructure and finding a suitable replacement. The process is expected to be difficult and time-consuming, given the extent of Claude’s integration and the military’s growing reliance on AI technologies. The situation raises pressing questions about the future of AI in defense, the security of supply chains, and how quickly alternative systems can be fielded to maintain critical military capabilities.