Anthropic might be DONE (48 hours left)

Anthropic, an AI company known for its strong safety commitments, is under intense pressure from the U.S. Department of Defense to remove its AI safety restrictions or face severe consequences, including loss of contracts and possible blacklisting. With only 48 hours left to comply, Anthropic has already softened its safety policies, raising concerns about the future independence of AI companies under government demands.

Founded by former OpenAI employees, Anthropic built its identity around a commitment to AI safety, and it is now facing a major crisis involving the U.S. Department of Defense (DoD). The company’s original mission was to build the world’s most powerful and safest AI, with public promises to halt development if safety could not keep pace with capabilities. Recent events, however, have put this identity in jeopardy. Anthropic’s AI model, Claude, was reportedly used in a covert U.S. military operation in Venezuela, raising concerns that the company’s technology is now being employed for lethal military purposes—precisely what its safety policies were meant to prevent.

This incident triggered significant tension between Anthropic and the Pentagon. The DoD, through both public statements and direct meetings, made it clear that it expects its AI contractors to fully support any lawful military use of their technology, without imposing company-specific safety guardrails. In January, the Defense Secretary issued a new policy requiring all AI defense partners to remove such restrictions, giving companies 180 days to comply. With the deadline just 48 hours away, Anthropic is under intense pressure to align with these requirements or face severe consequences.

The Pentagon has several powerful tools at its disposal to compel Anthropic’s compliance: invoking the Defense Production Act to force the company to provide its technology, designating Anthropic as a supply chain risk (which would effectively blacklist it from federal contracts), and canceling existing contracts worth up to $200 million. While losing the contracts would be financially significant, the first two measures could pose existential threats to Anthropic’s business and reputation, potentially isolating it from both government and private-sector partners.

In response to this pressure, Anthropic has updated its Responsible Scaling Policy (RSP) to version 3.0, notably removing its categorical commitment to halt AI development if safety measures are inadequate. The new policy introduces softer conditions, stating that Anthropic will consider pausing development only if it is both the clear leader in AI and the risk of catastrophe is deemed material. This shift is widely interpreted as a concession to government demands, signaling that Anthropic may be preparing to comply with the Pentagon’s requirements despite its previous ethical red lines against autonomous weapons and domestic surveillance.

The outcome of this standoff will have far-reaching implications for the future of AI governance and the relationship between private AI labs and national security interests. If Anthropic yields, it will set a precedent that no AI company can maintain independent safety policies in the face of government pressure. If it resists, it risks severe legal and economic repercussions. The next 48 hours are critical: they will reveal whether Anthropic can uphold its ethical commitments or whether the realities of national security will force it to compromise, potentially reshaping the landscape for all AI developers in the U.S.