How should AI behave, and who should decide? – OpenAI co-founder John Schulman

John Schulman, co-founder of OpenAI, stresses the importance of defining how AI models should behave, ensuring they act as an extension of people’s will. OpenAI’s Model Spec document outlines guidelines for AI behavior in API and chat applications, aiming to balance stakeholders’ interests while prioritizing helpfulness and avoiding harm.

In the discussion, Schulman considers how AI models should behave and who should have the authority to make those decisions. He emphasizes that AI models should primarily act as an extension of people's will, avoiding paternalism and refraining from imposing opinions on users. OpenAI has released a document called the Model Spec that outlines how models should behave in its API and chat applications, weighing the interests of the different stakeholders involved: end users, developers, the platform (OpenAI), and humanity at large.

These stakeholders may have conflicting demands, and OpenAI faces the challenge of resolving such conflicts when they arise. The priority is for AI models to follow instructions and be helpful to users and developers, but situations where following instructions could harm others or infringe on their well-being require careful consideration. OpenAI may need to block certain types of usage that pose a risk to others. The goal is to balance the preferences of the various stakeholders while ensuring the overall impact is positive rather than harmful to individuals or society.

Schulman acknowledges that preferences and values can be subtle and complex, and often hard to articulate as a straightforward instruction manual. He notes that preference models can capture these nuances in user preferences. The Model Spec includes numerous examples and explanations to guide AI behavior and decision-making based on such nuanced preferences, with the aim of making the guidance actionable and practical to implement.
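The preference models Schulman refers to are typically trained on pairwise comparisons of responses. As a rough illustration only (not OpenAI's actual implementation), a minimal sketch of the standard Bradley-Terry formulation: a reward model assigns each candidate response a scalar score, and the probability that one response is preferred over another is a logistic function of the score difference. The function name and the example scores below are hypothetical.

```python
import math

def pairwise_preference_prob(score_a: float, score_b: float) -> float:
    """Bradley-Terry probability that response A is preferred over B,
    given scalar reward-model scores for the two responses."""
    return 1.0 / (1.0 + math.exp(score_b - score_a))

# In practice a learned reward model produces the scores; these are made up.
p = pairwise_preference_prob(2.1, 0.4)
print(f"P(A preferred over B) = {p:.3f}")
```

Training the reward model to maximize the likelihood of observed human comparisons under this formula is what lets it encode preferences too subtle to spell out as explicit rules.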

The document focuses on edge cases and non-obvious situations where AI behavior may have significant implications. By explicitly stating these edge cases and reasoning through them, OpenAI aims to ensure that its models behave sensibly and ethically across a wide range of scenarios. Schulman stresses that the guidelines in the Model Spec should be actionable and reflect real-world complexity, rather than merely stating general principles, so that they can effectively guide AI behavior while accounting for the interests of all stakeholders involved.