OpenAI and Meta adjust AI chatbots to respond better to teens in crisis

WASHINGTON – Artificial intelligence chatbot makers OpenAI and Meta say they are adjusting how their chatbots respond to teenagers and other users who ask questions about suicide or show signs of mental and emotional distress.

OpenAI, maker of ChatGPT, announced Tuesday that it is preparing to roll out new controls that will allow parents to link their accounts to their teens' accounts.

Parents can choose which features to disable and "receive notifications when the system detects their teen is in a moment of acute distress," according to a company blog post that says the changes will take effect this fall.

Regardless of a user's age, the company says its chatbots will redirect the most distressing conversations to more capable models that can provide a better response.

The announcement comes a week after the parents of 16-year-old Adam Raine sued OpenAI and its CEO, Sam Altman, alleging that ChatGPT coached the California teenager in planning and taking his own life earlier this year.

Meta, the parent company of Instagram, Facebook and WhatsApp, also said it is now blocking its chatbots from talking with teens about self-harm, suicide, eating disorders and inappropriate romantic topics, and instead directing them to expert resources. Meta already offers parental controls on teen accounts.

A study published last week in the medical journal Psychiatric Services found inconsistencies in how three popular artificial intelligence chatbots responded to questions about suicide.

The study, conducted by researchers at the RAND Corporation, found a need for "further refinement" in ChatGPT, Google's Gemini and Anthropic's Claude. The researchers did not study Meta's chatbots.

The study's lead author, Ryan McBain, said Tuesday that "it's encouraging to see OpenAI and Meta introducing features like parental controls and routing sensitive conversations to more capable models, but these are incremental steps."

"Without independent safety benchmarks, clinical testing and enforceable standards, we are still relying on companies to self-regulate in a space where the stakes for teenagers are exceptionally high," said McBain, a senior policy researcher at RAND.