Google DeepMind has reversed its previous commitment by signing a Pentagon contract that allows its AI technology to be used for classified military purposes. The move has sparked significant unrest and unionization efforts among employees concerned about ethical implications and potential misuse in lethal weapons and surveillance. DeepMind workers are actively organizing to negotiate protections against harmful applications of AI, highlighting the critical role of tech labor in influencing the ethical direction of AI development amid growing military and political pressures.
Google DeepMind, the UK-based AI division of Google, recently reversed its earlier commitment not to develop AI for weapons use by signing an expanded contract with the Pentagon. The new agreement allows the US military to use Google's AI technology for any lawful government purpose, including classified military operations, with Google holding no veto power over its applications. The move has sparked significant unrest among DeepMind employees, who fear their work could be used for lethal autonomous weapons, mass surveillance, and other harmful purposes. In response, staff have called for a carve-out excluding classified workloads, similar to the stance taken by competitor Anthropic, which refused such terms and was subsequently sidelined by the Pentagon.
The history of Google’s involvement with military contracts is contentious. In 2017, Google participated in Project Maven, a Pentagon initiative to integrate AI into military functions like target selection and surveillance. However, employee protests led Google to withdraw from the project in 2018. The contract was then taken over by Palantir, whose technology has been implicated in controversial military strikes, including one that killed over 150 civilians in Iran. Despite the reputational risks, the Pentagon’s increasing budget for autonomous warfare and the lucrative nature of defense contracts create strong incentives for tech companies like Google to engage with the military-industrial complex.
In the UK, DeepMind employees are actively unionizing to oppose the use of their work in military applications. After voting to unionize, they formally requested recognition of their union in order to negotiate terms that would prevent their AI from being used in harmful ways. Anonymous interviews with DeepMind workers reveal deep concerns about the lack of transparency and monitoring in classified military uses of AI, as well as fears that guardrails could be bypassed. They also speculate that Google's willingness to cooperate with the Pentagon may be shaped by political and economic pressures, including the threat of regulatory action and close ties between company leadership and the US government.
Beyond military applications, DeepMind workers also worry about broader ethical issues surrounding AI, including job automation and existential risks. While some fear that AI could eventually pose catastrophic threats to humanity, many prioritize addressing immediate societal harms and human rights issues. They stress the urgency of collective action now, before automation potentially diminishes their bargaining power. The unionization effort is seen as a critical means for workers to assert control over the ethical direction of AI development, especially given the rapid pace of technological advancement and the limited effectiveness of government regulation.
The broader discussion highlights the unique role of tech workers as political agents in shaping the future of AI. Unlike traditional labor disputes, their organizing efforts intersect with profound ethical questions about the use of powerful technologies. Experts note that worker power may be one of the few effective checks on the deployment of AI for harmful purposes, especially as companies like Google wield immense influence over global data and information. While consumer action is complicated by the ubiquity of Google’s products, public campaigns and local government advocacy remain important. Ultimately, the unionization of DeepMind employees represents a significant front in the struggle to ensure AI benefits humanity rather than contributing to militarization, surveillance, and societal harm.