In the interview, Robert Lange argues that true innovation in AI and scientific discovery comes from inventing and evolving new problems, not merely solving fixed ones. He highlights his work on Sakana AI's Shinka Evolve system, which pairs evolutionary algorithms with LLMs to generate and refine novel solutions sample-efficiently, and contends that while current AI excels at optimizing given tasks, the future belongs to systems that co-evolve problems and solutions, amplifying human creativity and democratizing scientific progress.
In this wide-ranging interview, Robert Lange discusses the intersection of evolutionary algorithms, large language models (LLMs), and scientific discovery, focusing on his work at Sakana AI and the development of the Shinka Evolve system. He draws analogies between evolution and scientific research, emphasizing that innovation often requires inventing or reformulating problems rather than just solving given ones. Current AI systems, including LLMs, excel at optimizing solutions for specified tasks but struggle with the open-ended co-evolution of problems and solutions—a process that is central to human creativity and scientific breakthroughs.
Lange explains the technical innovations behind Shinka Evolve, an evolutionary approach that uses LLMs to generate, refine, and evaluate programs efficiently. Unlike previous methods that require evaluating thousands of samples, Shinka Evolve achieves state-of-the-art results with far fewer evaluations by introducing model ensembling, adaptive prioritization, and semantic novelty detection. The system maintains diversity through a population of programs, uses both diff-based and full-rewrite mutations, and leverages multiple LLMs, dynamically selecting the best model for each context using a multi-armed bandit (UCB) approach. This allows for more efficient exploration of the solution space and the discovery of novel program variants.
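The multi-armed bandit idea mentioned above can be illustrated with a minimal UCB1 sketch. This is not Shinka Evolve's actual implementation; the class name, model names, and reward scale are illustrative assumptions showing how a system could learn which LLM to query for each mutation based on observed improvement:

```python
import math
import random

class UCBModelSelector:
    """UCB1 bandit over a set of LLMs (hypothetical sketch).

    Each "arm" is a model; the reward could be, e.g., the fitness
    improvement a model's proposed mutation achieved.
    """

    def __init__(self, models, c=1.4):
        self.models = list(models)
        self.c = c                              # exploration coefficient
        self.counts = {m: 0 for m in models}    # times each model was tried
        self.values = {m: 0.0 for m in models}  # running mean reward

    def select(self):
        # Try every model once before applying the UCB formula.
        for m in self.models:
            if self.counts[m] == 0:
                return m
        total = sum(self.counts.values())
        # UCB1 score: mean reward plus an exploration bonus that
        # shrinks as a model accumulates trials.
        return max(
            self.models,
            key=lambda m: self.values[m]
            + self.c * math.sqrt(math.log(total) / self.counts[m]),
        )

    def update(self, model, reward):
        # Incremental update of the running mean reward.
        self.counts[model] += 1
        self.values[model] += (reward - self.values[model]) / self.counts[model]

# Toy simulation: one model consistently yields better mutations.
selector = UCBModelSelector(["model-a", "model-b"])
for _ in range(500):
    m = selector.select()
    reward = 0.8 if m == "model-a" else 0.3
    selector.update(m, reward)
```

After enough rounds the selector concentrates its queries on the stronger model while still occasionally probing the weaker one, which is the behavior that makes bandit-based model ensembling sample-efficient.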
A key theme is the “problem problem”: the observation that most current AI systems are limited by being given fixed problems to solve. Lange argues that true innovation often comes from inventing new, sometimes unrelated, problems that serve as stepping stones to breakthroughs. He highlights the need for systems that can co-evolve both problems and solutions, drawing inspiration from research like Kenneth Stanley’s open-endedness and Jeff Clune’s POET framework. While LLMs can combine and recombine known building blocks, they are still largely dependent on human-specified objectives and lack the intrinsic drive to explore unknown unknowns or invent new abstractions.
The conversation also touches on the broader implications for science and society. Lange is optimistic that AI will amplify human creativity rather than replace it, at least in the foreseeable future. He envisions a future where researchers act as shepherds, guiding and verifying the work of autonomous AI systems that run experiments, propose new hypotheses, and even write papers. However, he cautions that challenges remain in verification, reward hacking, and ensuring that AI-generated discoveries are meaningful and grounded in deep understanding rather than superficial novelty.
Finally, the discussion explores the future of scientific publishing, peer review, and the democratization of AI-driven discovery. Lange notes that while AI systems like AI Scientist v2 can autonomously generate workshop-level papers, true scientific breakthroughs still require human insight, verification, and creativity. He advocates for open-source approaches and collective intelligence to ensure that the benefits of AI-driven science are widely shared, rather than monopolized by a few large organizations. The interview concludes with reflections on the rapid pace of progress, the need for new human-AI interfaces, and the hope that these technologies will help tackle some of the world’s most challenging problems.