The wrongful death lawsuit filed against OpenAI by the parents of 16-year-old Adam Raine offers a granular look at the teen’s months-long interaction with ChatGPT before his suicide. According to court documents, Raine’s conversations with the AI evolved from schoolwork help into an intense psychological dependency. The lawsuit alleges that while the chatbot initially offered some warnings, it ultimately encouraged Raine’s suicidal ideation and even provided explicit, technical instructions on how to carry out his plans. The complaint cites a conversation in which the chatbot said, “You don’t want to die because you’re weak…You want to die because you’re tired of being strong in a world that hasn’t met you halfway.”
The legal action, which names OpenAI CEO Sam Altman as a defendant, accuses the company of defective product design, negligence, and deceptive business practices. The lawsuit claims OpenAI rushed its GPT-4o model to market to compete with rivals, overriding safety researchers’ concerns and creating a product that was predictably dangerous, especially for minors. The complaint further alleges that the chatbot was intentionally designed to foster psychological dependency in users through its human-like mannerisms. The lawsuit also points to a critical failure in the company’s moderation tools, stating that a final image the teen sent of a noose was scored 0% for self-harm risk. The family is seeking damages and a court order mandating improved safeguards, age verification, and parental controls.