Ex-Google CEO Calls for Mutually Assured AI Destruction

Former Google CEO Eric Schmidt and his co-authors propose a controversial strategy that likens AI development to “mutually assured destruction”: nations should prepare to sabotage rival AI projects in order to deter the pursuit of superintelligent AI. Esther Dyson, in contrast, advocates requiring AI companies to carry insurance for the damages their products may cause, which would push both insurers and developers toward risk assessment and mitigation. Together, the two proposals highlight the need for a balanced approach to AI safety amid growing concerns.

In a recent discussion, former Google CEO Eric Schmidt and his co-authors proposed a controversial strategy for the development of artificial intelligence (AI). Their strategy paper argues that we are approaching a scenario akin to “mutually assured destruction” (MAD) in nuclear warfare, but in the context of AI: the threat of catastrophic retaliation could deter nations from pursuing superintelligent AI, a dynamic they call “mutually assured AI malfunction” (MAIM). The concept posits that if one nation attempted to monopolize AI development, others would respond with debilitating countermeasures, creating a stable deterrence regime.
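To make the deterrence logic concrete, here is a minimal game-theoretic sketch. The payoff numbers are invented for illustration and do not come from the paper; they simply encode the assumption that a unilateral bid for superintelligence invites sabotage and therefore pays off worse than mutual restraint.

```python
# Toy 2-player deterrence model illustrating the MAIM logic.
# All payoff numbers are illustrative assumptions, not taken from the paper.
# Each state chooses to "restrain" or to "race" for superintelligent AI;
# a racing state expects to be sabotaged ("maimed") by its rival.

PAYOFFS = {
    # (state_a_choice, state_b_choice): (payoff_a, payoff_b)
    ("restrain", "restrain"): (0, 0),    # stable status quo
    ("race", "restrain"):     (-5, -1),  # A races and is sabotaged; B pays a small retaliation cost
    ("restrain", "race"):     (-1, -5),  # symmetric case
    ("race", "race"):         (-5, -5),  # both race, both are maimed
}

def best_response(opponent_choice: str, i_am_a: bool) -> str:
    """Return the choice that maximizes a state's payoff, given the rival's choice."""
    def payoff(my_choice: str) -> int:
        key = (my_choice, opponent_choice) if i_am_a else (opponent_choice, my_choice)
        return PAYOFFS[key][0 if i_am_a else 1]
    return max(("restrain", "race"), key=payoff)

# With credible retaliation, restraint is each side's best response to restraint,
# so (restrain, restrain) is a stable equilibrium: the claimed deterrence regime.
assert best_response("restrain", i_am_a=True) == "restrain"
assert best_response("restrain", i_am_a=False) == "restrain"
print("mutual restraint is self-enforcing under these assumed payoffs")
```

The sketch only shows that the equilibrium holds *if* retaliation is credible and costly enough, which is precisely the condition the authors' recommendations try to engineer.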

The authors express concern over the potential dangers of AI, particularly its misuse by malicious actors for military or terrorist purposes and the risk that AI systems become uncontrollable. They argue that AI safety has been neglected, especially after changes in U.S. policy rolled back safety regulations. In light of this, the paper recommends that states build up their capability to sabotage destabilizing AI projects in other countries, including through cyberattacks, espionage, and other disruptive tactics aimed at undermining rival AI developments.

The recommendations in the strategy paper are stark and raise ethical questions about the future of international cooperation in AI research. The authors suggest that states make their attack plans known so that potential aggressors are deterred, and they propose building data centers in remote locations to minimize casualties in the event of an attack. This grim outlook marks a shift away from collaboration toward a more adversarial stance in AI development, which many find troubling.

In contrast to this pessimistic view, Esther Dyson offers a more optimistic and practical proposal: require AI companies to carry insurance against the damages their products may cause. Premiums tied to risk would give both insurers and AI developers a financial incentive to assess and mitigate the risks of their systems. However, insurance requirements would raise the price of AI products, which may dampen stakeholders' enthusiasm for the idea.
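A back-of-the-envelope premium calculation shows why such a mandate would push developers toward risk mitigation. The probabilities, damage estimates, and loading factor below are invented for illustration; real actuarial pricing of AI liability would be far more involved.

```python
# Toy actuarial sketch: premium = expected loss * (1 + loading).
# All numbers are invented for illustration, not drawn from the discussion.

def annual_premium(p_incident: float, expected_damage: float, loading: float = 0.3) -> float:
    """Price a policy at expected loss plus an insurer's loading for overhead and risk."""
    return p_incident * expected_damage * (1 + loading)

# An unaudited AI system with a higher estimated incident probability...
risky = annual_premium(p_incident=0.02, expected_damage=50_000_000)
# ...versus the same system after safety testing lowers that estimate.
mitigated = annual_premium(p_incident=0.005, expected_damage=50_000_000)

print(f"premium without mitigation: ${risky:,.0f}")      # $1,300,000
print(f"premium with mitigation:    ${mitigated:,.0f}")  # $325,000
# The premium gap is the financial incentive to invest in safety work.
```

The design point is that the insurer, not a regulator, ends up quantifying each system's risk, and the developer pays less whenever it can demonstrate that risk has been reduced.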

The video concludes with a reflection on the broader implications of AI development and safety. While concerns about superintelligent AI are valid, there is a risk of becoming desensitized to these warnings as they become more commonplace. The speaker humorously suggests that perhaps the fears surrounding superintelligence are exaggerated, likening the situation to a joke about outrunning a bear. Ultimately, the discussion emphasizes the need for a balanced approach to AI safety, combining caution with innovative solutions to ensure a safer future.