Big Tech Is RACING To Build AI Weapons

The video explores the growing race among major tech companies to develop AI-powered military technologies, highlighting ethical concerns and the conflict between Anthropic and the Pentagon over the use of AI for autonomous weapons and surveillance. It warns that government pressure is eroding tech companies’ autonomy and that the unchecked proliferation of AI weapons could lead to dangerous global consequences.

The video discusses the escalating race among major technology companies to develop AI for military applications, particularly autonomous weapons such as drone swarms. The hosts express concern about the potential for AI-controlled weaponry and mass surveillance, highlighting the ethical and societal risks involved. They note that while many tech companies are eager to collaborate with the Pentagon, Anthropic, the company behind the Claude AI model, has attempted to set limits on how its technology can be used. It specifically opposes the use of its models for mass surveillance of Americans and for fully autonomous weapons that can kill without human intervention.

Anthropic’s stance has led to a significant conflict with the Pentagon, which insists that AI companies allow their tools to be used for “all lawful purposes,” including weapons development and battlefield operations. The Pentagon has threatened severe repercussions against Anthropic, including cutting all business ties and designating the company as a “supply chain risk,” a label typically reserved for foreign adversaries. This would effectively isolate Anthropic from the entire U.S. defense sector and any companies wishing to do business with the military, putting immense pressure on the company to comply.

The video situates this dispute within a broader shift in American capitalism, in which the interests of the security state increasingly override the autonomy of private enterprise. The hosts compare this to the Russian model, where the state exerts direct control over key industries, and suggest that the U.S. government is moving toward a more authoritarian approach in its dealings with tech companies. They argue that tech leaders who believe they can maintain moral constraints while working with the military are naive: the state ultimately holds the power and will enforce its priorities.

A clip of former Google CEO Eric Schmidt is shown to illustrate the prevailing attitude among tech elites: while acknowledging the dangers of advanced AI, Schmidt argues that industry leaders can be trusted to “pull the plug” if things go wrong. The hosts criticize this view as dangerously self-important and unrealistic, pointing out that once technology is handed over to the state, private individuals lose control over its use. They draw parallels to the development of nuclear weapons, where scientists had no real say in how their creations were ultimately deployed.

Finally, the video highlights the hypocrisy of figures like Elon Musk, who has long warned about the dangers of AI yet is now enthusiastically competing to provide autonomous drone technology to the Pentagon through his companies SpaceX and xAI. The hosts warn that the proliferation of cheap, autonomous weapons could trigger a global arms race, putting such technology within reach of dictators, terrorists, and warlords. They conclude that the current trajectory is deeply troubling, as both the tech industry and the government appear willing to prioritize power and profit over ethical considerations and global safety.