OpenAI defended itself in a lawsuit that accused ChatGPT of encouraging a 16-year-old high school student in California to take his own life, saying the chatbot repeatedly told him to seek help.
In a court filing on Tuesday, the company described the death of Adam Raine as a tragedy but said it “was not caused by ChatGPT”, arguing that Adam’s conversations with the chatbot showed he had long struggled with suicidal thoughts before he ever used the tool.
According to Bloomberg, citing a filing in the San Francisco Superior Court, Adam told the chatbot he had been dealing with “multiple significant risk factors for self-harm” for several years, including recurring suicidal thoughts and ideation.
OpenAI’s lawyers said that ChatGPT directed him to reach out to crisis hotlines or trusted individuals “more than 100 times.”
OpenAI also said in the filing that Adam told the chatbot he had already reached out to people close to him in the weeks before his death, but that his cries for help were ignored.
In August, the Raine family filed the lawsuit against OpenAI and OpenAI CEO Sam Altman for wrongful death, product liability and negligence. They claimed the chatbot guided Adam on tying a noose and even offered help drafting a suicide note.
The Raine family’s filing states that Adam’s mother found him with the “exact noose and partial suspension setup” that the lawsuit claims “ChatGPT had designed for him.”
The lawsuit led OpenAI to introduce new controls that let parents limit how teenagers use the chatbot and receive alerts if it detects signs of distress.
In a statement, Raine family lawyer Jay Edelson called OpenAI’s filing “disturbing” and said the company was trying to “find fault in everyone else”, even suggesting that the teenager violated its terms and conditions by engaging with the chatbot in precisely the way it was designed to respond.
The case is Raine v OpenAI Inc, CGC25628528, California Superior Court, San Francisco County. /TISG
