OpenAI is creating a new version of ChatGPT tailored for teens, featuring an age prediction system and parental controls to ensure safe and supervised use for users under 18. These measures aim to address concerns about emotional dependence and potential risks by enabling parents to monitor usage and intervene if their child shows signs of struggling with AI interactions.
The age prediction system is intended to identify users under 18. Since the minimum age to use ChatGPT is 13, the system aims to ensure that teen users have parental approval before accessing the platform. Parents will also be able to link their accounts to their children’s, allowing them to receive notifications if their child appears to be experiencing difficulties while using ChatGPT.
The teen version includes parental controls, such as the ability to erase a child’s chat history so that certain information is not retained by the AI. The age prediction system is built into the algorithm itself, making it difficult for teens to bypass by simply changing their profile information. For example, typing in all lowercase letters without punctuation may signal to the system that the user belongs to Generation Z or younger.
Parents are encouraged to watch for signs that their child may be struggling with AI interactions. These include increased social withdrawal, spending less time with friends, declining grades, and appearing withdrawn or preoccupied with devices and AI. Such behavioral changes could indicate that the child is having difficulty managing their relationship with AI tools like ChatGPT.
While these new features and guardrails represent a positive step toward protecting teens, experts caution that it is too early to judge their full effectiveness. The technology is advancing rapidly, and research is still catching up on the best ways to keep young users safe. Nonetheless, the introduction of these measures is widely seen as a step in the right direction for addressing potential risks.
Research has shown that younger teens, particularly those around 13 to 14 years old, are more likely to trust AI compared to older teens aged 15 to 16. This increased trust can lead to emotional dependence on AI, which is one of the main concerns regarding problematic AI use among adolescents. The new ChatGPT version aims to mitigate these risks by providing better oversight and support for teen users and their families.