OpenAI has announced a significant policy change for its popular AI chatbot, ChatGPT. According to CEO Sam Altman, the platform will soon permit users to engage in conversations classified as "erotica" once a new age verification system is implemented. This change is scheduled to take effect in December and is part of a broader initiative to provide more flexibility for adult users.
Key Takeaways
- OpenAI will allow "erotica" and other mature content on ChatGPT for users who complete an age verification process.
- The new age-gating system is planned for a December rollout.
- This policy shift follows user feedback about recent model updates being too restrictive and less personable.
- The company has also introduced new tools to detect user mental distress and has formed a council on well-being and AI.

A New Approach to Adult Content
In a public statement, OpenAI CEO Sam Altman confirmed the company's plans to adjust its content policies. The move is framed around a principle of treating "adult users like adults." Once the new age-gating features are fully operational, verified adult users will have access to a wider range of conversational topics, including those of a mature nature.
"As we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults," Altman stated. This announcement formalizes previous indications from the company that it would explore allowing developers to create "mature" applications on its platform after establishing appropriate controls.
Industry Context
OpenAI is not the first major AI company to venture into more mature user interactions. Elon Musk's xAI, for example, has already introduced flirty AI companions within its Grok application, which are represented by 3D anime-style models. This trend suggests a growing industry recognition of user demand for more personalized and less restricted AI experiences.
Balancing User Experience and Safety
The decision to relax content restrictions comes after recent user feedback regarding OpenAI's language models. When GPT-5 was briefly made the default model, many users expressed dissatisfaction, claiming it was less personable and engaging than its predecessor, GPT-4o. In response, OpenAI reinstated GPT-4o as an available option.
Altman addressed this feedback directly, explaining the company's initial caution. He noted that OpenAI had made ChatGPT "pretty restrictive to make sure we were being careful with mental health issues." However, he acknowledged that this approach made the chatbot "less useful/enjoyable to many users who had no mental health problems."
"Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases," Altman explained, linking the policy change to the development of new safety features.
New Measures for User Well-being
To support this policy shift, OpenAI has developed and launched new internal tools designed to better identify when a user may be experiencing mental distress. These systems are intended to provide a safety net, allowing the company to relax general restrictions while still intervening in potentially harmful situations.
In addition to these technical safeguards, OpenAI has established a new advisory council on "well-being and AI." The council is tasked with providing guidance on how the company should handle complex and sensitive user scenarios.
Council Composition and Criticism
The newly formed council consists of eight researchers and experts who specialize in the impact of technology and AI on mental health. However, the group has drawn some criticism for its composition. According to reports, the council does not include any suicide prevention experts, a group that has previously urged OpenAI to implement stronger safeguards for users expressing suicidal thoughts.
The Path Forward for ChatGPT
The upcoming changes represent a significant evolution in OpenAI's philosophy on content moderation. By introducing age verification, the company aims to create a tiered system that offers more freedom to adults while maintaining a safer environment for younger users. The success of this initiative will likely depend on the effectiveness of its age-gating technology and the ability of its new well-being tools to manage sensitive interactions responsibly.
This move also signals a potential adjustment in the next version of ChatGPT. Altman hinted that a future release will aim to restore some of the more personable qualities that users preferred in the GPT-4o model, suggesting that user feedback is playing a crucial role in the platform's development.