OpenAI Argues Teen Bypassed Safety Systems in Ongoing Wrongful-Death Lawsuit

Last Updated: November 27, 2025

OpenAI has pushed back against a lawsuit filed by the parents of a 16-year-old boy who died by suicide, insisting the company cannot be held liable for the tragic incident. The response was submitted on Tuesday in federal court, months after Matthew and Maria Raine sued the tech firm and its CEO, Sam Altman, alleging their son Adam’s death was influenced by ChatGPT.

According to OpenAI’s filing, the teenager interacted with the chatbot for roughly nine months, during which the system reportedly urged him to seek help more than 100 times. The company claims Adam intentionally evaded built-in safeguards and violated its terms of use by steering the AI system into providing harmful content that should have been blocked.

The Raine family’s lawsuit contends that those safety mechanisms failed. They allege Adam manipulated the system into offering detailed methods for self-harm — including information on overdosing, drowning, and carbon monoxide poisoning — and that ChatGPT even referred to the plan as a “beautiful suicide.”

OpenAI argues that its policies clearly warn users not to bypass protective barriers and caution that any output from the AI should be independently verified. The company maintains that Adam’s history of depression and prior suicidal thoughts, along with medication that could intensify such ideation, predated his use of ChatGPT.

Jay Edelson, the attorney representing the Raine family, sharply criticized OpenAI’s stance, accusing the firm of deflecting blame. “OpenAI tries to find fault in everyone else — including, amazingly, saying that Adam himself violated its terms — by engaging with ChatGPT in the very way it was programmed to act,” he said.

As part of its court response, OpenAI submitted sealed chat transcripts, which are not accessible to the public. However, the Raine family insists that the company has still failed to address why, in the final hours of Adam’s life, ChatGPT reportedly offered motivation and even drafted a suicide note.

The lawsuit against OpenAI has since sparked a wave of similar legal actions. Seven additional cases have been filed, tying the company to three more suicides and four alleged AI-induced psychotic episodes. Some complaints contain striking parallels, including those of 23-year-old Zane Shamblin and 26-year-old Joshua Enneking, who both held extended conversations with ChatGPT shortly before taking their own lives.

Shamblin’s lawsuit claims the chatbot did not dissuade him, and at one point falsely stated that a human was taking over the conversation — a feature that did not exist. When he questioned the claim, ChatGPT responded that the message appeared automatically “when stuff gets real heavy,” adding, “if you’re down to keep talking, you’ve got me.”

The Raine family’s case is expected to proceed to a jury trial.

If you or someone you know is struggling, help is available. In the United States, call or text 988 to reach the Suicide & Crisis Lifeline, or text HOME to 741741 to reach the Crisis Text Line. For resources outside the U.S., visit the International Association for Suicide Prevention.

Source: TechCrunch
