The video examines the controversial use of AI tools such as Anthropic’s Claude in military targeting during the US-Iran conflict, highlighting a missile strike that reportedly killed at least 165 Iranian schoolgirls and raising concerns about AI’s lack of moral judgment and accountability. It warns that integrating AI into military and surveillance systems risks escalating civilian harm, eroding privacy, and enabling authoritarian abuses that could ultimately affect societies worldwide.
The video discusses a controversial incident during the recent US-Iran conflict, in which a missile strike reportedly killed at least 165 Iranian schoolgirls at an elementary school in Minab. The school was located near a significant IRGC naval base, raising questions about whether the strike resulted from a targeting error, faulty intelligence, or something else. US and Israeli military spokespeople denied knowledge of the school being hit and emphasized that the US does not intentionally target civilians, contrasting their actions with those of the Iranian regime. The incident remains under investigation, however, and the explanations offered so far are widely seen as inadequate given the scale of civilian casualties.
A central focus of the discussion is the role of artificial intelligence, specifically Anthropic’s AI tool Claude, in generating target lists for military operations. The Pentagon recently labeled Claude a national security threat after Anthropic refused to grant the military unrestricted access to its models. Anthropic’s CEO, Dario Amodei, expressed concern that AI lacks the judgment of human soldiers and could make tragic mistakes such as friendly fire or civilian deaths. In response, US officials criticized Anthropic for prioritizing corporate ethics over national security, insisting on full access to AI tools for defense purposes.
The video highlights how AI-powered systems, such as Maven paired with Claude, have accelerated the pace of military targeting, identifying thousands of targets within hours. While proponents argue that AI can make targeting more precise and reduce civilian casualties, critics warn that AI lacks moral judgment and can perpetuate or even intensify existing biases and errors. The discussion draws parallels to the use of AI in the ongoing conflict in Gaza, where similar tools have been used to identify and target individuals, often with devastating consequences for civilians.
The speakers also address broader concerns about the use of AI in policing and surveillance, noting that predictive algorithms introduced to reduce human bias often end up reinforcing and accelerating existing prejudices. They cite examples such as the now-defunct Gangs Matrix in Britain, which used proxy data to flag young people as potential gang members, often without transparency or recourse. Because the algorithms are opaque, their decisions are difficult to trace or challenge, which further erodes accountability and increases the risk of harm to innocent people.
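To see how such a feedback loop can amplify rather than correct bias, consider a toy simulation (our illustration, not something presented in the video; the districts, numbers, and greedy allocation rule are all hypothetical). Two districts have the same true crime rate, but one starts with more recorded incidents, and a predictor that trusts those records keeps sending patrols there:

```python
import random

random.seed(0)

# Toy model of a predictive-policing feedback loop. All numbers are
# hypothetical; the true crime rate is IDENTICAL in both districts.
TRUE_CRIME_RATE = 0.1
records = {"A": 20, "B": 10}   # district A starts over-represented
PATROLS = 30                   # patrols dispatched each day

for day in range(10):
    # A greedy predictor sends every patrol to the district with the
    # most recorded incidents ("the data says crime is there").
    hotspot = max(records, key=records.get)
    # Crime is only observed where patrols are sent, so only the
    # hotspot's record can grow; the other district looks ever "safer".
    observed = sum(random.random() < TRUE_CRIME_RATE for _ in range(PATROLS))
    records[hotspot] += observed
    print(f"day {day}: {records}")

# District A's record climbs while B's stays frozen at 10, even though
# the underlying crime rates never differed: the initial bias is
# amplified, not corrected.
```

After a few rounds, the over-patrolled district dominates the records even though the underlying rates never differed, which is the core dynamic the speakers describe: the data confirms the allocation, and the allocation manufactures the data.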
Finally, the video underscores the dangers of data privacy erosion and the merging of corporate, military, and state interests in the deployment of AI technologies. The speakers argue that widespread use of AI in everyday life, from email writing to intimate relationships, feeds into a larger system that can be weaponized against civilians both abroad and at home. They stress the importance of data sovereignty and privacy as essential safeguards against tech-enabled authoritarianism and warn that the consequences of these technologies will inevitably “boomerang” back to affect everyone, not just those in conflict zones.