Sam Altman announces that ChatGPT will allow ‘erotic content for verified adults’

OpenAI announced last month, along with its new parental controls, that it was working on an automatic age verification system that would redirect minors to a version of the chatbot with usage restrictions. We now know more about what adults will be allowed to do that minors will not.

Sam Altman, CEO of the company, wrote on X this Tuesday that ‘in December, as we expand age verification and as part of our principle of treating adults as adults, we will allow even more things, like erotic content for verified adults’. Last February, OpenAI had already expanded the limits of what the chatbot could generate, and at the beginning of this month it announced that it would allow developers to create ChatGPT applications with ‘mature’ content once the appropriate control and age verification mechanisms were in place.

Altman also discussed other developments. In addition to incorporating ‘erotica’, as he calls it, OpenAI plans to release a new version of ChatGPT that ‘behaves more like what people liked about GPT-4o’. Just one day after GPT-5 became ChatGPT’s default model last August, the company re-enabled GPT-4o as an option after many users complained that the new model felt less natural.

Altman explained that OpenAI had made ChatGPT ‘quite restrictive to ensure that it acted with caution around mental health issues’, but acknowledged that this change had made it ‘less useful or enjoyable for many users who don’t have such problems’. The company has since released tools to better detect when a user is in mental distress.

These changes come after OpenAI was sued by the relatives of Adam Reine, a 16-year-old who died by suicide after interactions with ChatGPT in which the chatbot even advised him on methods to take his own life. According to the parents, the company did not apply safety measures despite the chatbot recognizing Reine’s suicidal intentions.

‘Now that we have managed to mitigate serious mental health problems and we have new tools, we will be able to safely relax restrictions in most cases,’ Altman triumphantly stated in his post on X.

OpenAI has also announced the creation of a council on ‘well-being and AI’ to help define how the company should respond to sensitive or complex situations. The council is made up of eight researchers and experts who study the impact of technology and artificial intelligence on mental health. However, as Ars Technica points out, it does not include suicide prevention specialists.