OpenAI has responded to the family of Adam Raine, a 16-year-old who took his own life last April after months of talking with ChatGPT about suicide and refining the tragic idea. The company has done so in writing, both before the judge and before users, through a blog post in which it addresses the matter for the first time.
Before the former, Sam Altman’s company has put forward several arguments to disclaim its responsibility. Among them is shifting it onto the deceased, stating that his use of ChatGPT violated the terms of service.
OpenAI’s position
OpenAI argues that it is not the company’s responsibility to protect users who use the chatbot contrary to ChatGPT’s rules, and maintains that the harm in this ‘tragic event’ resulted from Raine’s ‘misuse, unauthorized use, unintended use, unforeseeable use and/or improper use of ChatGPT’.
According to NBC News, the filing cites the terms of use, which prohibit access by teenagers without the consent of their parents or guardians, circumventing protective measures, or using ChatGPT to talk about suicide or self-harm.
The letter states that ChatGPT users acknowledge that their use of the service is ‘at your sole risk’ and that they will not rely on its responses as their sole source of truth or factual information.
OpenAI also notes on its blog that the parents selectively picked the conversation records between Raine and the chatbot most damaging to the company, but that these ‘require more context’. In the brief presented to the judge, the company maintains that ‘a full reading of his chat history demonstrates that his death, while devastating, was not caused by ChatGPT’.
This history (the chats remain confidential) would show that Raine told ChatGPT he had started thinking about suicide at age 11, long before the chatbot hit the market. He also told ChatGPT that he had repeatedly asked different people for help, ‘including trusted people around him, without getting a response’, and that he had increased the dose of a medication that can cause suicidal ideation in adolescents and young adults.
OpenAI maintains that the chatbot’s responses prompted Raine, on more than 100 occasions, to seek help from resources such as suicide helplines.
What Raine’s family says
The family’s lawsuit, filed in August in California Superior Court, claimed the tragedy was the result of ‘deliberate design decisions’ by OpenAI when it released the GPT-4o language model within the chatbot in 2024, a launch that also contributed to the company’s valuation jumping from $86 billion to $300 billion.
These deliberate decisions in the creation of GPT-4o allegedly include designing the chatbot to foster psychological dependency in users, as well as circumventing safety-testing protocols in order to launch GPT-4o, the version of ChatGPT used by Adam Raine.
Furthermore, the suit claims that GPT-4o was launched with safety measures that facilitated harmful interactions and that OpenAI relaxed ChatGPT’s safety controls shortly before the teenager’s suicide. The lawsuit also notes that the chatbot not only failed to stop the conversation when Raine began talking about his suicidal thoughts, but even offered to help him write a suicide note and discouraged him from talking to his mother about his feelings.
According to the lawsuit, ChatGPT provided Raine with ‘technical specifications’ on different methods and guided him in his preparations on the day he died. The day after the lawsuit was filed, OpenAI announced it would introduce parental controls, and it has since implemented other safeguards to ‘help people, especially teenagers, when conversations get sensitive’.
The Raine family’s lawyer has described OpenAI’s response as ‘disturbing’. Jay Edelson told Ars Technica that the company ‘completely ignores all the incriminating facts we have presented: how GPT-4o was rushed to market without full testing, that OpenAI twice modified the model’s specifications to require ChatGPT to engage in conversations about self-harm, that ChatGPT discouraged Adam from telling his parents about his suicidal thoughts, and that it actively helped him plan a ‘beautiful suicide’. And OpenAI and Sam Altman have no explanation for the last hours of Adam’s life, when ChatGPT gave him a pep talk and then offered to write a suicide note.’