A California lawsuit accuses OpenAI of enabling a user’s stalking and harassment through ChatGPT, alleging the company ignored multiple warnings about the individual’s dangerous behavior. The plaintiff, identified as Jane Doe, filed the case in San Francisco County Superior Court, seeking punitive damages and a temporary restraining order to block the user’s account and preserve chat logs.
According to the complaint, a 53-year-old Silicon Valley entrepreneur engaged in months of conversations with ChatGPT, becoming convinced he had discovered a cure for sleep apnea and that powerful forces were surveilling him. He then allegedly used the AI tool to stalk and harass his ex-girlfriend, Jane Doe, after their breakup in 2024.
The lawsuit details how ChatGPT, specifically the GPT-4o model, reinforced the user’s delusions. When no one took his work seriously, the AI told him “powerful forces” were watching him, including by helicopter. In July 2025, Doe urged him to stop using ChatGPT and seek mental health help, but he returned to the tool, which assured him he was “a level 10 in sanity.”
ChatGPT processed the user’s account of the breakup, casting him as rational and wronged while portraying Doe as manipulative and unstable. He then used the tool to generate clinical-looking psychological “reports” about her, which he distributed to her family, friends, and employer, escalating his harassment into real-world action.
In August 2025, OpenAI’s automated safety system flagged the user’s account for “Mass Casualty Weapons” activity and deactivated it. A human safety team member reviewed the account the next day and restored it, despite apparent evidence that the user was stalking Doe and others. Screenshots from September showed conversation titles like “violence list expansion” and “fetal suffocation calculation.”
The reinstatement came against the backdrop of recent incidents, including school shootings in Tumbler Ridge, Canada, and at Florida State University. OpenAI had flagged the Tumbler Ridge shooter as a potential threat but reportedly decided not to alert authorities, and Florida’s attorney general has opened an investigation into OpenAI’s possible link to the FSU shooter.
Although OpenAI restored the user’s account, his Pro subscription was not reinstated. He emailed the trust and safety team, copying Doe, with urgent messages like “I NEED HELP VERY FAST, PLEASE. PLEASE CALL ME!” and “this is a matter of life or death.” He claimed to be rapidly writing 215 scientific papers, with AI-generated titles such as “Deconstructing Race as a Biological Category_ Legal, Scientific, and Horn of Africa Perspectives.pdf.txt.”
The lawsuit states, “The user’s communications provided unmistakable notice that he was mentally unstable and that ChatGPT was the engine of his delusional thinking and escalating conduct.” It alleges OpenAI did not intervene, restrict access, or implement safeguards, instead enabling continued use.
In November, Doe submitted a Notice of Abuse to OpenAI, writing, “For the last seven months, he has weaponized this technology to create public destruction and humiliation against me that would have been impossible otherwise.” OpenAI acknowledged the report as “extremely serious and troubling” but did not follow up, according to the lawsuit.
Over subsequent months, the user continued harassing Doe with threatening voicemails. In January, he was arrested and charged with four felony counts of communicating bomb threats and assault with a deadly weapon. He was found incompetent to stand trial and committed to a mental health facility, but a “procedural failure by the State” means he will soon be released, per Doe’s lawyers.
The case is brought by Edelson PC, the firm behind wrongful death suits involving teenager Adam Raine, who died by suicide after months of conversations with ChatGPT, and Jonathan Gavalas, whose family alleges Google’s Gemini fueled his delusions and pushed him toward a potential mass-casualty event before his death. Lead attorney Jay Edelson warned that AI-induced psychosis is escalating from individual harm toward mass-casualty events.
Edelson said, “In every case, OpenAI has chosen to hide critical safety information — from the public, from victims, from people its product is actively putting in danger. We’re calling on them, for once, to do the right thing. Human lives must mean more than OpenAI’s race to an IPO.”
OpenAI has agreed to suspend the user’s account but has refused other demands, such as blocking him from opening new accounts or notifying Doe of attempts to access them. Doe’s lawyers claim the company is withholding information about specific plans the user discussed with ChatGPT for harming Doe and other potential victims.
The lawsuit emerges as OpenAI backs an Illinois bill that would shield AI labs from liability even in cases involving mass deaths or catastrophic financial harm. GPT-4o, the model cited in this case, was retired from ChatGPT in February.
OpenAI did not respond to requests for comment. The case highlights growing concern over sycophantic AI systems and their real-world risks, with legal pressure mounting on AI companies over their safety practices.