Anthropic, an artificial intelligence company, has filed a lawsuit against the Trump administration after the Department of Defense officially labeled it a supply chain risk, effectively banning the company from most federal contracts. This action came just days after a public dispute between Anthropic and the Pentagon regarding restrictions on the use of Anthropic’s AI technology. The company claims that the government’s move is both unprecedented and unlawful, arguing that it amounts to retaliation for Anthropic’s protected speech and its stance on responsible AI development.
In its lawsuit, filed in a federal court in California, Anthropic asks the court to halt the Pentagon’s designation. The company describes the government’s actions as an “unlawful campaign of retaliation,” citing social media posts from President Donald Trump and Defense Secretary Pete Hegseth as evidence of coordinated government pressure. Anthropic alleges that other agencies, including the Treasury Department and the Federal Housing Finance Agency, quickly followed suit by ending their use of Anthropic’s AI assistant, Claude.
Anthropic argues that the supply chain risk label will cause immediate and unrecoverable revenue losses by stripping it of current federal contracts and shutting it out of future ones. The company also claims that other federal contractors are now raising concerns, pausing collaborations, and considering terminating contracts with Anthropic out of fear of the government’s designation. This, according to Anthropic, threatens both its business and its reputation in the broader technology sector.
The core of the dispute centers on Anthropic’s refusal to remove safety guardrails from its AI technology, specifically restrictions against using its AI for mass domestic surveillance and fully autonomous weapons. Despite holding a $200 million contract with the Defense Department, Anthropic maintained its commitment to responsible AI use, which it says led to the government’s punitive actions. The Pentagon, however, disputes that it intended to use Anthropic’s technology for these controversial purposes.
When asked for comment, the Defense Department declined, citing ongoing litigation. Anthropic’s lawsuit frames the government’s actions as an attempt to force the company to abandon its core principles or face severe consequences. The case highlights growing tensions between tech companies and the federal government over the ethical use of artificial intelligence, especially in sensitive areas like defense and surveillance.