As AI nears agency, this "godfather of AI" warns of the risks and shares a bold plan. #TEDTalks

The speaker warns of the escalating risks posed by rapidly advancing AI technologies, highlighting the lack of regulation and the potential for misuse, including the creation of dangerous weapons. They propose developing “scientist AI,” a non-agentic system focused on making reliable predictions to ensure AI safety, and urge the AI community and policymakers to prioritize and accelerate efforts to control AI’s impact responsibly.

In this TED Talk, the speaker highlights the stark contrast between the regulation of everyday items, like sandwiches, and the rapidly advancing field of artificial intelligence (AI). Despite hundreds of billions of dollars being invested annually in AI development, there remains a significant lack of oversight and control. Major companies aim to create machines that surpass human intelligence and replace human labor, yet society has not established reliable methods to ensure these machines will not act against human interests.

The speaker emphasizes growing concerns from national security agencies worldwide about the potential misuse of AI technology. One alarming risk is that the advanced scientific knowledge embedded in AI systems could be exploited to create dangerous weapons, possibly by terrorist groups. This threat level has recently escalated; for instance, when OpenAI's o1 system was assessed last September, its risk rating was raised from low to medium, just one step below the level considered unacceptable.

The current trajectory of AI development is likened to driving blindly into a fog, with the potential for humanity to lose control over these powerful technologies. However, the speaker offers a note of cautious optimism: there is still some time to address these challenges. The urgency to act is clear, as the consequences of inaction could be severe and irreversible.

To tackle these risks, the speaker and their team are developing a technical solution called “scientist AI.” This concept is inspired by the ideal of a selfless scientist whose sole purpose is to understand the world without agency or personal motives. Unlike autonomous AI agents that take actions, scientist AI focuses on making accurate and trustworthy predictions about the consequences of potential actions, serving as a safeguard against harmful behaviors by untrusted AI systems.
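The guardrail idea described above can be illustrated with a minimal sketch: a non-agentic predictor estimates the probability that a proposed action causes harm, and the action is vetoed when that probability exceeds a threshold. All names here (`SafetyGuard`, `risk_of_harm`, the keyword heuristic, and the threshold value) are illustrative assumptions, not details from the talk; a real scientist AI would derive its harm estimates from a learned world model rather than keyword matching.

```python
# Hypothetical sketch of a "scientist AI" guardrail.
# The predictor does not act; it only scores proposed actions,
# and the wrapper blocks any action whose predicted harm is too high.

from dataclasses import dataclass


@dataclass
class SafetyGuard:
    """Vetoes an untrusted agent's proposed action when the predicted
    probability of harmful consequences exceeds a fixed threshold."""
    threshold: float = 0.01

    def risk_of_harm(self, action: str) -> float:
        # Stand-in for the scientist AI's prediction. In this toy version,
        # a few keywords mark an action as high risk; a real system would
        # estimate this probability from a model of consequences.
        risky_keywords = ("synthesize", "weapon", "exploit")
        return 0.9 if any(k in action.lower() for k in risky_keywords) else 0.001

    def allow(self, action: str) -> bool:
        # The only decision the guard makes: permit or block.
        return self.risk_of_harm(action) <= self.threshold


guard = SafetyGuard()
print(guard.allow("summarize this paper"))   # True: predicted risk is low
print(guard.allow("synthesize a pathogen"))  # False: predicted risk exceeds threshold
```

The key design point is that the guard itself has no goals and takes no actions in the world; it only outputs predictions, which is what makes the non-agentic framing a candidate safeguard around agentic systems.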

The talk concludes with a call to action for the AI research community and policymakers to prioritize scientific projects aimed at AI safety. The speaker stresses the need for rapid development and deployment of such solutions to ensure that AI technologies evolve in a controlled and beneficial manner. This proactive approach is essential to prevent the risks associated with advanced AI from becoming a reality.