Sam Altman, Jakub Pachocki, and Wojciech Zaremba of OpenAI discussed the organization's mission to develop safe, broadly accessible artificial general intelligence (AGI) that benefits all of humanity, highlighting advances in research, product development, and infrastructure, along with a new organizational structure focused on safety and societal impact. They laid out ambitious goals for AI capabilities, transparency, user empowerment, and collaboration, while addressing challenges in alignment, privacy, and the ethical deployment of AI.
In this wide-ranging discussion, Sam Altman, chief scientist Jakub Pachocki, and co-founder Wojciech Zaremba share major updates on OpenAI's mission, research, product direction, infrastructure plans, and new organizational structure. They reiterate OpenAI's core mission: ensuring that artificial general intelligence (AGI) benefits all of humanity. Rather than viewing AGI as a distant or mystical oracle, they now frame it as a set of powerful tools that empower people to create the future. Their vision is a personal AGI available to everyone, which people can draw on for work, personal life, and scientific discovery, ultimately improving both society and individual lives.
Jakub covers the research side, highlighting OpenAI's focus on deep learning and on scaling training systems. He discusses the rapid progress toward superintelligence, meaning AI systems smarter than humans along many critical axes, and the potential for AI to accelerate scientific discovery and technological development. OpenAI has set ambitious internal goals, including an AI research intern by September 2026 and a fully autonomous AI researcher by March 2028. Safety and alignment remain paramount, with a multi-layered approach spanning value alignment, goal alignment, reliability, adversarial robustness, and systemic safety. One promising research direction is chain-of-thought faithfulness, which aims to preserve transparency into the model's internal reasoning so that it remains interpretable and easier to align.
On the product front, OpenAI is evolving from offering AI super-assistants like ChatGPT to building a broad platform on which developers and companies can create diverse AI-powered services. They showcased how GPT-5 is already being used across professions, from quantum physics to fishing, illustrating its versatility. OpenAI aims to give users, and adult users in particular, greater freedom and customization while balancing safety and privacy concerns. They acknowledge past mistakes in content moderation and model routing and commit to improving user control and transparency. Privacy is singled out as a critical issue, given how intimately people interact with AI, and one that requires strong technical and policy protections.
Regarding infrastructure, OpenAI is transparent about the scale of its commitments: over 30 gigawatts of compute capacity, representing approximately $1.4 trillion in financial obligations over the coming years. They are partnering with major industry players such as AMD, Nvidia, Microsoft, and Google to build data centers and chip fabrication facilities. Looking ahead, OpenAI aspires to create an "infrastructure factory" capable of producing one gigawatt of compute per week at significantly reduced cost, though this remains an aspirational goal requiring both innovation and capital. They shared a video of their data center construction in Texas to illustrate the scale and complexity of the effort.
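For a rough sense of scale (a back-of-envelope approximation derived from the two figures above, not a number quoted in the discussion), the announced commitments imply a unit cost of roughly

\[
\frac{\$1.4\ \text{trillion}}{30\ \text{GW}} \approx \$47\ \text{billion per gigawatt of compute.}
\]

The aspirational "infrastructure factory" is, in effect, about driving that per-gigawatt cost down while raising the build rate to a gigawatt per week.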
Finally, OpenAI announced a new organizational structure featuring a nonprofit foundation that governs a public benefit corporation (PBC). The nonprofit will focus on funding initiatives like using AI to cure diseases and building an AI resilience ecosystem to address risks such as biosecurity threats and societal disruptions. The discussion concluded with an extensive audience Q&A covering topics like model capabilities, safety policies, user freedom, AI’s impact on jobs, and the future of AI-human relationships. Sam, Jakub, and Wojciech emphasized their commitment to transparency, continuous improvement, and collaboration with other AI labs to ensure the safe and beneficial development of AGI.