Ilya Didn’t Leave OpenAI for Nothing

Ilya Sutskever left OpenAI to found Safe Superintelligence (SSI), aiming to revolutionize AI by developing more brain-like, biologically plausible learning algorithms that overcome the limitations of today's backpropagation methods. SSI is believed to be exploring alternatives such as predictive coding, which could enable faster, more robust, and more adaptive AI, potentially marking a major leap forward in artificial intelligence.

About two years ago, Ilya Sutskever left OpenAI and founded Safe Superintelligence (SSI), which has since reached a valuation of at least $32 billion. Despite the secrecy surrounding SSI's technology, recent developments and academic papers suggest that Sutskever is tackling a fundamental problem in AI: how neural networks are trained. Current models are built from neuron-inspired units, yet the algorithm that trains them is not biologically plausible, particularly in how parameters are learned and updated. The central challenge is to find a learning algorithm that is both more effective and more biologically grounded.

Today’s AI relies on backpropagation with gradient descent, where a loss function measures the model’s error, and backpropagation adjusts the network’s parameters to minimize this loss. However, this approach is overly simplistic compared to how the brain likely operates. The brain doesn’t optimize for a single loss function, and its learning is far more complex and dynamic. Experts like Sutskever and Adam Marblestone suggest that the brain uses multiple, evolving loss functions and a more sophisticated, omnidirectional inference process, allowing for rapid and robust learning.
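To make the baseline concrete, here is a minimal sketch of gradient-descent training, assuming nothing about SSI's actual methods. It uses a one-layer linear model, where backpropagation reduces to a single application of the chain rule: one scalar loss is computed, and its gradient drives every parameter update.

```python
import numpy as np

# Minimal sketch of loss-driven training with gradient descent.
# All names and values here are illustrative toy choices.

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # toy inputs
true_w = np.array([1.0, -2.0, 0.5])    # the "correct" parameters
y = X @ true_w                         # toy targets

w = np.zeros(3)                        # parameters to learn
lr = 0.1                               # learning rate

for step in range(200):
    pred = X @ w                       # forward pass
    error = pred - y
    loss = np.mean(error ** 2)         # one global scalar loss
    grad = 2 * X.T @ error / len(X)    # gradient of the loss w.r.t. w
    w -= lr * grad                     # gradient-descent update

print(w)                               # converges close to true_w
```

Every weight in the model is nudged by the gradient of the same global loss; in a deep network, backpropagation simply propagates this one error signal layer by layer.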

Backpropagation has several key limitations. First, it is highly inefficient—humans can generalize from just a few examples, while AI models require massive amounts of data. Second, it separates learning and information processing into distinct phases, unlike the brain, which learns and processes information simultaneously. Third, backpropagation relies on a global error signal, whereas the brain operates with local autonomy and coordination through neuromodulators like dopamine. These differences make continuous, dynamic learning difficult for current AI systems.
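The third limitation, the global error signal, can be contrasted with a local rule in a few lines. The Hebbian update below is a standard textbook example of a local rule, used here purely for illustration and not attributed to SSI: each weight changes based only on the activity of the two neurons it connects, with no backpropagated error.

```python
import numpy as np

# Illustrative contrast: a local (Hebbian-style) weight update uses only
# quantities available at the synapse itself, unlike backpropagation,
# which requires an error signal computed from the whole network's output.

rng = np.random.default_rng(1)
pre = rng.normal(size=5)               # presynaptic activity
W = rng.normal(size=(3, 5))            # weights
post = W @ pre                         # postsynaptic activity

lr = 0.01
# "Neurons that fire together, wire together": the update for W[i, j]
# depends only on post[i] and pre[j], both locally available.
dW_local = lr * np.outer(post, pre)
W += dW_local
```

In the brain, such local rules are thought to be coordinated by diffuse neuromodulators like dopamine rather than by a single network-wide gradient.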

A promising alternative theory gaining traction is predictive coding, which posits that the brain’s main objective is to predict sensory input, learning from the difference between prediction and reality. Instead of immediately updating network connections, predictive coding allows the system to first adjust its neural activity to reconcile discrepancies, only then updating the wiring. This approach avoids catastrophic interference—where learning new information erases old knowledge—and enables faster adaptation to new situations, as demonstrated in recent neuroscience experiments.
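The two-phase idea can be sketched in code. Everything below is an illustrative toy (one latent layer, squared prediction error, hand-picked step sizes), not SSI's algorithm: the inference phase first relaxes the neural activity to reduce the prediction error, and only afterwards is the wiring updated using the settled, locally available error.

```python
import numpy as np

# Toy predictive-coding sketch: a hidden state h predicts the input x
# through generative weights W (prediction = W @ h).

rng = np.random.default_rng(0)
x = rng.normal(size=4)                 # sensory input
W = rng.normal(size=(4, 2)) * 0.5     # generative weights
h = np.zeros(2)                        # latent neural activity

lr_h, lr_w = 0.1, 0.05

# Phase 1 (inference): adjust the *activity* to reconcile the discrepancy
# between prediction and input, leaving the wiring untouched.
for _ in range(50):
    err = x - W @ h                    # local prediction error
    h += lr_h * (W.T @ err)            # activity update driven by the error

# Phase 2 (learning): only now update the *wiring*, using the settled
# state. The rule is local: each weight sees only its own error and activity.
err = x - W @ h
W += lr_w * np.outer(err, h)
```

Because the weights only move after the activity has settled, new inputs are first absorbed by transient activity changes; this is the mechanism credited with reducing catastrophic interference and speeding up adaptation.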

Despite its promise, predictive coding and similar approaches face practical challenges. They are currently more computationally expensive than backpropagation, partly because today’s hardware is not optimized for the required simulations. However, researchers are exploring new hardware solutions and integration with existing AI pipelines. SSI, with its research focus and significant funding, is uniquely positioned to push these ideas forward. While it remains to be seen whether Sutskever’s company will succeed, the pursuit of more brain-like learning algorithms could represent the next major leap in artificial intelligence.