AI is Now Being Used in War

The video examines how advanced AI is rapidly being integrated into military operations, raising concerns about mass surveillance, autonomous weapons, and the erosion of privacy, especially after OpenAI agreed to government demands that Anthropic had refused. It urges viewers to protect their data and calls for stronger safeguards and public vigilance to prevent the misuse of AI in warfare and surveillance.

The video explores the rapid integration of artificial intelligence (AI) into modern warfare, focusing on recent high-profile cases involving the U.S. military. The Pentagon reportedly used Anthropic’s AI model, Claude, in operations such as the attempted capture of Venezuelan President Nicolás Maduro and military actions involving Israel and Iran. These AI systems, developed specifically for military use, are far more advanced than consumer-facing models: they run on dedicated hardware and process vast amounts of classified data to identify and prioritize targets in real time. The video highlights how AI has dramatically increased the pace and scale of military operations, citing the targeting of 1,000 sites in Iran within 24 hours as an example.

Tensions arose between Anthropic and the U.S. government when the Pentagon demanded that Anthropic allow its AI to be used for all lawful purposes, including autonomous weapon control and mass surveillance of U.S. citizens. Anthropic refused to comply, insisting on safeguards against mass surveillance and against fully autonomous lethal decisions made without human oversight. In response, the government cut the company off from all its contracts and labeled it a supply chain risk, a move that sparked controversy and bipartisan criticism. The government’s demands raised concerns about the potential for AI-powered mass surveillance built on data purchased from brokers, including Americans’ geolocation, web browsing, and financial information.

In a surprising turn, just hours after Anthropic’s fallout with the Pentagon, Sam Altman of OpenAI announced that his company would accept the deal Anthropic had rejected. The move was widely criticized as a betrayal of OpenAI’s original mission to benefit humanity, especially since the company had recently restructured from a nonprofit into a for-profit company and removed “safety” from its mission statement. The rushed nature of the deal and the lack of clarity about its safeguards triggered a public backlash, with millions reportedly boycotting OpenAI’s products and a significant “Quit GPT” movement gaining traction both online and in real-world protests.

The video also discusses the broader implications of AI in warfare and surveillance, noting that NATO and other governments are adopting similar technologies. The potential for AI to enable real-time, population-wide surveillance is likened to dystopian fiction, especially as companies like Meta (formerly Facebook) develop products capable of identifying individuals in public using AI-powered glasses. The speaker warns that the combination of government access to private data and advanced AI analytics could erode privacy and civil liberties on an unprecedented scale.

Finally, the video urges viewers to take practical steps to protect their data, such as removing themselves from data broker lists, and calls for stronger legal protections against mass surveillance. The speaker emphasizes the need for public awareness and action to prevent the misuse of AI, arguing that while AI can be beneficial in individual applications, its unchecked use in warfare and surveillance poses serious risks to society. The episode concludes with a reflection on the direction of technology and a call for collective vigilance to ensure AI serves the public good rather than enabling harmful abuses.