Ex-Google CEO "SHOCKED" by new AI capabilities | Eric Schmidt

In a discussion on the Prof G Show, former Google CEO Eric Schmidt expressed his shock at the rapid advancements in AI, particularly from China, highlighting the emergence of powerful open-source models like DeepSeek R1 that challenge previous assumptions about China's AI capabilities. He raised concerns about the implications of these developments, including the potential militarization of AI and the risks of misinformation, and advocated for international regulations to manage AI's impact on society and prevent misuse.

In a recent discussion on the Prof G Show with Scott Galloway, former Google CEO Eric Schmidt expressed his shock at the rapid advancements in AI capabilities, particularly from China. He highlighted the emergence of powerful open-source AI models, such as DeepSeek R1, which he noted are comparable to OpenAI's latest models. Schmidt pointed out that these developments challenge the assumption that China is several years behind in AI technology, suggesting that the gap has narrowed significantly, potentially to within a year.

Schmidt explained that the DeepSeek R1 model utilizes a technique called "test-time compute," which devotes more computational resources to the inference phase of AI processing. As he described it, this lets the model generate multiple answers and select the most common one, akin to asking several people for directions and choosing the most popular response. This approach has led to improved accuracy in problem-solving, particularly in complex tasks like mathematics, raising concerns about the competitive landscape of AI development.
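The majority-vote idea Schmidt describes (often called "self-consistency" in the research literature) can be sketched in a few lines. This is an illustrative toy, not DeepSeek's actual implementation: the `sample_answer` function here just cycles through a hard-coded list of simulated completions, whereas a real system would query a language model at a nonzero sampling temperature.

```python
from collections import Counter

def sample_answer(question: str, i: int) -> str:
    """Stand-in for one stochastic model completion.

    A real system would call an LLM here with sampling enabled, so each
    call can return a different answer. We simulate that with a fixed
    list of noisy guesses at the same math question.
    """
    simulated = ["42", "41", "42", "42", "40", "42", "42"]
    return simulated[i % len(simulated)]

def majority_vote(question: str, n_samples: int = 7) -> str:
    """Sample several answers and return the most common one."""
    answers = [sample_answer(question, i) for i in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

print(majority_vote("What is 6 * 7?"))  # prints "42": 5 of 7 samples agree
```

Spending inference-time compute on many samples and voting tends to help most on problems with a single checkable answer, like the math tasks Schmidt mentions, because wrong completions scatter across many different answers while correct ones cluster.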

The discussion also touched on the implications of open-source AI models, particularly the risk of China replicating advanced technologies developed in the U.S. Schmidt emphasized that the rapid release of competitive models from China complicates the case for open-sourcing AI, since openly available techniques and weights allow competitors to advance their capabilities more quickly. He noted that the gap between the release of OpenAI's models and the comparable Chinese counterparts was alarmingly short, indicating a significant leap in their technological prowess.

Schmidt raised concerns about the potential militarization of AI, highlighting the increasing overlap between AI technology and military applications. He called for international treaties to regulate the use of AI in warfare, advocating for agreements that would ensure human oversight in decision-making processes involving autonomous weapons. He drew parallels to the historical context of nuclear weapons treaties, emphasizing the need for proactive measures to prevent an arms race in AI weaponization.

Finally, Schmidt discussed the growing capabilities of AI in simulating human behavior and creating realistic digital personas. He warned of the potential for misinformation and manipulation through AI-generated content, which could be used to influence public opinion or create false narratives. As AI technology continues to evolve, he stressed the importance of establishing ethical guidelines and regulations to manage its impact on society and prevent misuse in both civilian and military contexts.