In a recent OpenAI live stream, Sam Altman and chief scientist Jakub Pachocki outlined a timeline predicting an intern-level AI research assistant by September 2026 and a fully automated AI researcher by March 2028, a significant step toward rapid AI-driven scientific advancement. They also discussed progress in AI autonomy, interpretability through “chain of thought faithfulness,” corporate restructuring, and future infrastructure plans, emphasizing responsible development and ongoing innovation.
In the live stream, Sam Altman and Jakub Pachocki laid out a surprisingly precise timeline for the development of advanced AI capabilities. They predict that by September 2026 an intern-level AI research assistant will be able to meaningfully contribute to AI research, and that by March 2028 a fully automated AI researcher will exist. This would be a critical milestone: once AI research itself is automated, progress could compound rapidly, limited mainly by the amount of compute available. The timeline aligns closely with earlier intelligence-explosion forecasts, which suggest a fast move toward superintelligence once self-improving AI systems are operational.
The discussion also covered how long AI systems can work autonomously on a single task. Current systems handle tasks lasting seconds to hours; the goal is to extend that horizon to days, weeks, months, and eventually years. The focus is not just duration but efficiency: how effectively a model uses compute and tokens over the course of a task. Such long-horizon autonomy would let AI conduct research in fields like biomedicine, materials science, and drug discovery, potentially transforming these industries by continuously generating new knowledge with minimal human intervention.
A fascinating part of the live stream was the explanation of “chain of thought faithfulness,” a concept aimed at improving AI interpretability and safety. OpenAI is exploring techniques that let models reason without human supervision during the thinking process itself, with the internal reasoning reviewed afterward. The goal is for a model’s chain of thought to genuinely reflect its own reasoning, rather than being shaped to match human expectations, which makes alignment with human values easier to verify. Giving AI this kind of “privacy” to think freely is a novel idea that could lead to safer, more transparent AI systems.
OpenAI also shared details about their corporate restructuring and infrastructure plans. The organization now consists of the OpenAI Foundation, a nonprofit that governs the OpenAI public benefit corporation (PBC). The Foundation owns 26% of the PBC and plans to make a $25 billion commitment to health and AI resilience initiatives. OpenAI is investing heavily in building massive AI infrastructure, with plans to create factories that produce AI hardware at an unprecedented scale. Robotics will also play a role, initially to help build data centers, highlighting the ambitious scope of their future projects.
During the Q&A, Sam Altman addressed concerns about AI product addictiveness, emphasizing OpenAI’s commitment to evolving its products responsibly and avoiding the mistakes of earlier social media platforms. He also confirmed there are no plans to sunset GPT-4, acknowledging its popularity while aiming to build better models over time. Jakub Pachocki described AGI as a gradual transition rather than a single event, reinforcing the March 2028 timeline for a true automated AI researcher. Finally, they hinted at upcoming model improvements and new AI experiences, including a Windows version of ChatGPT Atlas, underscoring OpenAI’s ongoing innovation and commitment to accessible AI tools.