Anthropic, an artificial intelligence company, has filed a lawsuit against the Trump administration after the U.S. Department of Defense officially labeled it a supply chain risk. This designation effectively bans Anthropic from most federal contracts. The move follows a public dispute between Anthropic and the Pentagon regarding restrictions on the use of Anthropic’s AI technology, particularly concerning its application in sensitive areas such as surveillance and autonomous weapons.
In its lawsuit, filed in a federal court in California, Anthropic argues that the government’s actions are both unprecedented and unlawful. The company claims it is being punished for its protected speech and for adhering to its principles of responsible AI development. Anthropic is seeking a court order blocking the Pentagon’s designation, which it describes as part of an unlawful campaign of retaliation by the Trump administration.
The lawsuit references social media posts from President Donald Trump and Defense Secretary Pete Hegseth, suggesting these posts influenced other government agencies to follow suit. As a result, departments like the Treasury and the Federal Housing Finance Agency have reportedly ended their use of Anthropic’s AI assistant, Claude. Anthropic contends that this coordinated response has led to immediate and potentially irrecoverable revenue losses.
Anthropic further argues that the government’s actions are causing other federal contractors to reconsider their relationships with the company. Some are raising concerns, pausing collaborations, or even considering terminating contracts due to fears about the supply chain risk designation. The company maintains that it is being forced to choose between compromising its core values on AI safety or facing significant harm from the federal government.
The conflict escalated after Anthropic refused to remove certain safety guardrails from its AI, specifically those preventing its use for mass domestic surveillance and in fully autonomous weapons, uses the Pentagon says it never intended to pursue. Despite holding a $200 million contract with the Defense Department, Anthropic’s stance on responsible AI has put it at odds with federal officials. The Defense Department declined to comment on the ongoing litigation.