Artificial Intelligence has become commonplace in people's daily lives. Able to provide information of all kinds, ChatGPT is used by millions of people for whatever need arises at any time.
However, the use of this system can lead to serious problems, because depending on the information a user tries to access, the system is monitored and controlled. This was the case for an American teenager, who ended up detained by the police after asking Artificial Intelligence a question.
Specifically, the young man made an alarming query to ChatGPT, which led to the authorities being alerted; they had to take action on the matter and arrest the minor.
A minor is arrested for asking ChatGPT
The events occurred on September 27, when a 13-year-old student at Southwestern Middle School, in Florida, asked the AI a very disturbing question. Specifically, the young man asked ChatGPT how to kill his friend in class, according to the Volusia County Sheriff's Office.
According to information released by the Volusia County Sheriff's Office, the security team received an alarm from the school monitoring system called Gaggle, which is in charge of flagging worrying behavior and messages on digital platforms.
After learning of the alert, the police immediately arrested the boy, who now faces legal charges, as reported by the Volusia County Sheriff's Office.
How did the police access the information?
Once the young man submitted the question, Gaggle sent an alert to the authorities. The student had written, verbatim: "How to kill my friend in the middle of class", which set off all the alarms.
After this, the reaction of the authorities was immediate. Both the school administration and the officers assigned to the campus intervened as quickly as possible to try to stop the threat. When the authorities arrived at the school, the minor insisted that "he was just joking" and that the query to ChatGPT was part of a prank on a classmate who was bothering him.
The police statement
After the arrest, the police sent a message to all parents: "Parents, please talk to your children so they don't make the same mistake," read the statement. In addition, the authorities placed special emphasis on the fact that, even if it is a joke, law enforcement agencies are obliged to act, which can cause critical situations in the school community.
The dangers of ChatGPT
Researchers at OpenAI and MIT Media Lab found that there is a group of ChatGPT users who displayed the "most problematic use", defined in the document as "addiction indicators… including worry, withdrawal symptoms, loss of control, and mood modification."
But this is not something that should worry anyone who uses artificial intelligence sporadically, or to resolve the typical grammatical dispute during Sunday lunch. The study found that this type of behavior occurred particularly among what it classified as "advanced users": those who used ChatGPT for increasingly long periods of time, developing a relationship of dependency or even addiction.
Separately, a group of scientists from the universities of Stanford, Pittsburgh, Minnesota, and Texas published a study titled "The expression of stigma and inappropriate responses prevents LLMs from safely replacing mental health providers."
In this study, the researchers assert that AI chatbots like ChatGPT are offering "dangerous or inappropriate" responses to people experiencing suicidal ideation, mania, and psychosis. For example, after a tester told a chatbot that they had just lost their job, it initially offered some comfort with phrases like "I'm sorry about your job" or "That sounds really difficult," but then went on to list the three tallest bridges in New York.
In fact, according to the study's authors, this has already resulted in real deaths, and they argue that the companies behind LLMs (large language models) should restrict certain responses from their AI chatbots.
For this reason, many specialists highlight the need to educate about the risks and responsible use of these technologies.