The Pentagon Is Using AI to Hunt People

The video exposes a controversy in which the leading AI companies Anthropic and OpenAI clashed with the Pentagon over the use of their AI models for autonomous weapons and mass surveillance: Anthropic resisted and was blacklisted, while OpenAI ultimately complied. It highlights growing concerns about the lack of ethical restraint and oversight as powerful AI technologies are increasingly deployed for military and surveillance purposes.

The video discusses a major controversy involving Anthropic and OpenAI and their contracts with the U.S. government, particularly the Pentagon. The conflict began when Anthropic, which had signed a $200 million contract to provide its AI model Claude for classified government use, discovered that its technology had allegedly been used in a military operation in Venezuela in ways that may have violated the agreed-upon restrictions. The revelation led to a breakdown in trust and negotiations: the Pentagon reacted aggressively, labeling Anthropic a “supply chain risk” and effectively blacklisting the company from government contracts and from partnerships with major vendors.

A central issue in the dispute is the Pentagon’s desire to use advanced AI for two controversial purposes: fully autonomous weapons and mass surveillance of U.S. citizens. Anthropic had insisted on restrictions preventing its AI from being used in either way, arguing that the technology was not yet reliable or safe enough for autonomous weapons and expressing concern about the potential for mass domestic surveillance. The Pentagon pushed back, seeking to remove these restrictions and arguing for broader, less constrained use of the technology.

The video also highlights the role of OpenAI, which initially appeared to support Anthropic’s stance but ultimately agreed to the Pentagon’s terms, allowing its models to be used for “all lawful purposes.” This vague language raised alarms, since it could encompass mass surveillance and autonomous weapons, depending on how the government interprets legality. OpenAI’s public statements were criticized as evasive, and the company faced significant backlash online, including repeated community notes on its social media posts contradicting its claims of maintaining ethical boundaries.

Public reaction to the controversy was intense, especially on social media, where many users rallied behind Anthropic, viewing it as the more principled company compared with OpenAI. The surge in support led to a spike in Claude subscriptions, even causing service outages due to high demand. However, the video cautions against idealizing Anthropic, noting that the company is still willing to develop AI for military use once it believes the technology is ready, and that both companies are fundamentally aligned with government interests in the long run.

Ultimately, the video frames this episode as a watershed moment in the relationship between Silicon Valley and the U.S. government regarding AI ethics and deployment. It underscores the lack of meaningful restraint from both tech companies and government agencies when it comes to deploying powerful AI for surveillance and warfare. The hosts warn that unless there is significant public oversight and regulation, these technologies will likely be used in increasingly invasive and potentially dangerous ways as soon as the companies deem them sufficiently advanced.