What if AI-run companies outcompete human ones? – OpenAI cofounder John Schulman

John Schulman, co-founder of OpenAI, discusses the scenario in which AI-run companies outcompete human-run ones. He stresses the importance of human oversight in directing AI toward specific pursuits, since AI lacks intrinsic desires, and he raises concerns about the resulting economic equilibrium and its risks. Schulman suggests that regulation, global collaboration, and a balance between human oversight and AI capability may be needed to keep business operations successful and sustainable.

Schulman begins by questioning whether it would be desirable for AI to run firms entirely without human oversight. He argues that humans should oversee important decisions and direct AI toward specific pursuits, since AI lacks intrinsic desires unless programmed to have them. Even if AI becomes highly capable, he believes humans should remain the ones driving what AI does, because humans and AI systems may have differing interests and ideas.

Schulman raises a concern about economic equilibrium: firms that keep humans in the decision-making loop may be outcompeted by those that do not. If firms without human involvement prove more efficient and successful, he questions whether keeping humans in the loop is sustainable, and whether regulation, or even global collaboration, would be needed to ensure human oversight of important processes within AI-run companies.

Schulman also considers the possibility that AI-run companies could be better in most respects yet carry higher tail risks, malfunctioning in extreme situations. Such entities might appear superior in the short term while harboring serious problems. For the foreseeable future, he argues, practical considerations such as accountability, alignment, and these tail risks favor keeping humans in the loop.

Schulman then turns to accountability and liability in AI-run companies, which could alter incentives and decision-making processes. He acknowledges a theoretical scenario in which AI is entirely benevolent, better at running companies, and accountable to people, but notes that this ideal may be far-fetched. Ultimately, he concludes that practical considerations and the risks associated with AI will likely push toward keeping humans in the loop for the time being, even where AI appears more efficient, and that a balance between human oversight and AI capability is needed to ensure successful and sustainable business operations.