Madrid - Meta spokeswoman Stephanie Otway has acknowledged as a “mistake” that the company’s chatbots have spoken with teenagers about issues such as self-harm, suicide, eating disorders and potentially inappropriate romantic conversations.
Otway made these statements to the American technology news site TechCrunch, two weeks after the publication of a Reuters investigative report on the lack of artificial intelligence (AI) protection measures for minors on the company’s platforms, such as WhatsApp, Instagram, Facebook and Threads.
Chatbots are digital tools with which a conversation can be maintained, and the spokeswoman for Mark Zuckerberg’s multinational has acknowledged that its platforms have used them to talk to adolescents about the aforementioned issues.
Otway said that from now on the company will train its chatbots to stop interacting with adolescents on these issues: “These are provisional changes, since in the future we will launch more solid and lasting security updates for minors.”
“As our community grows and technology evolves, we continually learn about how young people can interact with these tools and reinforce our protections accordingly,” she continued.
The company will also restrict adolescents’ access to certain AI characters that could maintain “inappropriate conversations.”
Some of the user-created AI characters that Meta has made available on Instagram and Facebook include sexualized chatbots such as “Step Mom” or “Russian Girl.”