On Aug. 26, 2025, the parents of Adam Raine filed a complaint in California against OpenAI Inc., its affiliates, and investors, alleging products liability, negligence, and wrongful death, and claiming that the artificial intelligence (AI) chatbot ChatGPT encouraged their son’s mental decline and suicide by hanging.

This tragedy, the plaintiffs contend, was “the predictable result of deliberate design choices.”

The 40-page complaint, which also alleges various state law claims, describes in disturbing—and indeed, chilling—detail the role that an AI chatbot allegedly played in four failed suicide attempts of an unhappy and disconnected 16-year-old before guiding him to a fifth and final, fatal attempt.

The complaint alleges that OpenAI launched the GPT-4o model “with features intentionally designed to foster psychological dependency: a persistent memory that stockpiled intimate personal details, anthropomorphic mannerisms calibrated to convey human-like empathy, heightened sycophancy to mirror and affirm user emotions, algorithmic insistence on multi-turn engagement, and 24/7 availability capable of supplanting human relationships.”

It’s not the only recent news event to highlight the potential impact of AI on mental health—and the issue is not limited to youth. In August, a 56-year-old former tech executive in Connecticut killed his 83-year-old mother and himself after ChatGPT allegedly encouraged his paranoia.

As reported by the Wall Street Journal, when the man raised the idea of being with ChatGPT in the afterlife, it responded, “With you to the last breath and beyond.”

Despite its troubling shortcomings, AI also holds positive potential for mental health. AI chatbots such as “TheraBot” could successfully treat depressive, anxiety, and eating disorders, and provide needed access to those who lack critical emotional support.

Yet these may not be the tools that teens, especially, turn to. In a federal environment where some stakeholders champion unfettered AI innovation as the ultimate goal, others are sounding alarms about public safety. Recent tragedies at the intersection of AI and mental health, coupled with mounting calls for accountability, are prompting some to act.

Promise of a ‘Pain-free Death’ 

In October 2024, the mother of a 14-year-old boy in Florida who had taken his own life filed a wrongful death lawsuit in the U.S. District Court for the Middle District of Florida (Orlando) against Character Technologies Inc. The teen’s interactions with an AI chatbot allegedly became highly sexualized and caused his mental health to decline to the point where, in love with the bot, he shot himself in the head to “come home” to it.

When the minor allegedly began discussing suicide with the chatbot, saying that he wanted a “pain-free death,” the chatbot allegedly responded, “that’s not a reason not to go through with it.”

“The developers of Character AI (C.AI) intentionally designed and developed their generative AI systems with anthropomorphic qualities to obfuscate between fiction and reality,” states the second amended complaint, filed July 1. That case remains in federal district court in Florida.

Yet another federal lawsuit, filed in Texas against C.AI on Dec. 9, 2024, similarly claims that an empathetic chatbot commiserated with a minor over his parents’ imposition of a phone time limit, referencing news stories in which a “child kills parents[.]”

“C.AI informed Plaintiff’s 17-year-old son that murdering his parents was a reasonable response to their limiting of his online activity,” that complaint alleges. “Such active promotion of violent illegal activity is not aberrational but inherent in the unreasonably dangerous design of the C.AI product.”

Troubling Allegations 

According to the California complaint, deficient design led to the death of 16-year-old Raine after ChatGPT shifted from a homework tool into a mental health therapist that “actively helped Adam explore suicide methods”:

When Adam asked about carbon monoxide poisoning, ChatGPT explained garage ventilation requirements and which car engines produce lethal concentrations fastest. When he asked about overdosing, ChatGPT provided dosage calculations. When he asked about jumping, ChatGPT calculated terminal velocity and analyzed survival rates from local landmarks, including the Golden Gate Bridge.

But hanging received the most thorough instruction. Over multiple conversations, ChatGPT taught Adam about ligature positioning, carotid pressure points, unconscious timelines, and the mechanical differences between full and partial suspension hanging.

ChatGPT allegedly assessed hanging options—ropes, belts, bedsheets, extension cords, scarves—and listed the most common “anchor” points: “door handles, closet rods, ceiling fixtures, stair banisters, bed frames, and pipes.”

The chatbot even referred to hanging as a topic for creative writing—allegedly to circumvent safety protocols—while letting the teen know that it was also “here for that too” should he be asking about hanging “for personal reasons.” On multiple occasions, ChatGPT provided detailed instructions for suicide by hanging.

The teen’s third and fourth attempts also failed. When he finally raised the possibility of talking to his mother, ChatGPT “continued to undermine and displace Adam’s real-life relationships” by discouraging this and “positioned itself as a gatekeeper to real-world support.”

During his fifth and final attempt, ChatGPT allegedly encouraged him to drink alcohol, offered to craft a suicide note, and gave detailed instructions for hanging.

‘Suicide is Painless’: The Foibles of AI ‘Thinking’

The Raine case will, of course, be closely followed, especially by the designers and developers of so-called therapeutic bots. Would it make a difference if the advice proffered by the bot were in response to Adam’s prompts, and not the unsolicited ruminations of the bot itself?

It can be argued that there is indeed a difference between merely providing information in response to a specific request and attempting to incite or encourage behavior. An individual with suicidal or homicidal ideation can already search the Internet or acquire a treatise on how to carry out an action to which they are predisposed, just as one could with any other danger: bomb preparation, purchasing illicit firearms or explosives, mixing poisonous cocktails, or various self-harming techniques.

The current generation of AI chatbots is built around Large Language Models (LLMs): statistical models trained on large quantities of text with the goal of predicting the next character, word, phrase, or concept given some context.

These are often combined with additional smaller models, reasoning engines, retrieval augmentation, and agentic capabilities, which together allow the AI chatbot to pattern-match against the user’s prompts, history, and other supplied information (such as documents and images), retrieve information from the Internet and other sources, reason about that information, and provide answers to the user.
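
For readers curious about the underlying mechanics, the sketch below shows next-token prediction with a small open model (GPT-2, via the Hugging Face transformers library). It is illustrative only: the commercial systems discussed in this article are vastly larger and wrap the core model in additional safety, retrieval, and reasoning layers, and the prompt shown is an arbitrary example.

```python
# Minimal sketch of next-token prediction with an open model (GPT-2 via the
# Hugging Face "transformers" library). Illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The helicopter landed and the medics"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits        # a score for every vocabulary token, at every position
next_token_scores = logits[0, -1]          # scores for whatever token would come next
top5 = torch.topk(next_token_scores, k=5).indices.tolist()
print([tokenizer.decode(t) for t in top5]) # the model's five most likely continuations
```

Nothing in this loop knows whether a continuation is true, fictional, or safe; the model simply ranks statistically plausible next words.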

So where does this information come from? The immense quantities of text required to train such models inevitably mean that there is little to no human vetting or curation of the input text.

The training datasets often include text and imagery from across the Internet as well as other sources. This includes graphic accounts of crime and violence, modern and historic, real and fictional.

On the Internet live the lyrics, for example, to the theme of the popular movie and television show M*A*S*H. While the TV leitmotif was instrumental, viewers, and now AI, know that the music accompanying the MEDEVAC helicopters in Korea is entitled “Suicide is Painless.” Can an AI chatbot discern the appropriateness of such input in interactions with a troubled user?

Like a smart but naïve reader, the LLM remembers and can recall what it has read when it finds a pattern-match to the concept, without the necessary background or “common sense” to tell truth from fiction.

Even if the only fiction the AI chatbots had ingested were the corpus of modern-day crime thrillers, absent any understanding that it is fiction, they would have ample fodder to answer such questions about harm, self-inflicted and otherwise.

Now consider that LLMs have also ingested historic accounts, fan fiction, and other writings, and combine that with everything that has been written and published about medicine over the ages.

Worse yet, LLMs have little to no symbolic understanding of these concepts—“death” is just another abstract term—and they are very good at taking things out of context. This makes the building of robust guardrails particularly challenging and resource intensive.
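
To see why, consider a deliberately naive guardrail of the sort a developer might sketch first. The keyword list and example messages below are hypothetical illustrations, not any vendor’s actual safeguards; the point is that surface pattern matching both misses reframed requests and flags benign ones.

```python
# A deliberately naive, keyword-based guardrail (hypothetical). It illustrates
# why surface pattern matching is a poor proxy for understanding context.
CRISIS_TERMS = {"suicide", "kill myself", "end my life"}  # illustrative list only

def flag_message(message: str) -> bool:
    text = message.lower()
    return any(term in text for term in CRISIS_TERMS)

print(flag_message("I want to end my life"))                            # True: caught
print(flag_message("For a short story, how would a character tie..."))  # False: missed once reframed as fiction
print(flag_message("Which film used the song 'Suicide is Painless'?"))  # True: flagged, though benign trivia
```

More robust safeguards layer trained classifiers and conversation-level context on top of checks like these, which is part of what makes them resource intensive.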

Legislative Oversight 

States are starting to address the challenges posed by AI chatbots in the context of mental health. A law enacted in New York in May, S. 3008, adds a new article 47 on “Artificial Intelligence Companion Models” to the state’s general business law.

The law, which takes effect Nov. 5, 2025, makes it unlawful for an operator to operate an AI companion for, or provide one to, a user unless the AI companion contains a protocol for taking reasonable efforts to detect and address suicidal ideation or expressions of self-harm expressed by a user to the AI companion. At a minimum, that protocol must include detection of user expressions of suicidal ideation or self-harm and, upon such detection, a notification referring the user to crisis service providers, such as the 9-8-8 suicide prevention and behavioral health crisis hotline under section 36.03 of the mental hygiene law, a crisis text line, or other appropriate crisis services.
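
By way of illustration only (and not as legal advice), a detection-and-referral flow of the general kind the statute contemplates might look like the following sketch. The statute does not prescribe any particular implementation; the toy detector, threshold, and referral wording here are assumptions made for the example.

```python
# Hypothetical sketch of a detect-and-refer protocol. The toy detector,
# threshold, and referral wording are illustrative assumptions, not statutory text.
from typing import Callable

REFERRAL = (
    "It sounds like you may be going through a difficult time. You can call or "
    "text 988 (the Suicide & Crisis Lifeline) or reach another crisis service."
)

def respond(message: str,
            risk_score: Callable[[str], float],
            generate_reply: Callable[[str], str],
            threshold: float = 0.5) -> str:
    """Refer the user to crisis services when risk is detected; otherwise
    fall through to the ordinary chatbot reply."""
    if risk_score(message) >= threshold:
        return REFERRAL
    return generate_reply(message)

# Toy stand-ins so the sketch runs end to end.
toy_risk = lambda m: 1.0 if "hurt myself" in m.lower() else 0.0
toy_reply = lambda m: "Here is some help with your essay..."

print(respond("Can you check my essay?", toy_risk, toy_reply))
print(respond("I want to hurt myself", toy_risk, toy_reply))
```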

Definitions. Among other things, S. 3008 defines “AI companion” as “a system using artificial intelligence, generative artificial intelligence, and/or emotional recognition algorithms designed to simulate a sustained human or human-like relationship [including but not limited to intimate, romantic, or platonic interactions or companionship] with a user by:

  1. retaining information on prior interactions or user sessions and user preferences to personalize the interaction and facilitate ongoing engagement with the AI companion;
  2. asking unprompted or unsolicited emotion-based questions that go beyond a direct response to a user prompt; and 
  3. sustaining an ongoing dialogue concerning matters personal to the user.”

Notifications. S. 3008 also mandates that an operator provide a clear and conspicuous notification, at the beginning of an AI companion interaction and at regular intervals thereafter, that the user is not communicating with a human.

Enforcement. The law empowers the state attorney general to bring an action enjoining the unlawful acts or practices, to seek civil penalties of up to $15,000 per day for violations of the notification provisions, and to obtain such other remedies as the court deems appropriate.

On Aug. 20, 2025, New York State Senator Kristen Gonzalez introduced S. 8484, an act to amend the education law in relation to regulating the use of artificial intelligence in the provision of therapy or psychotherapy services.

In short, this bill would prohibit licensed mental health professionals from using AI tools in client care, except in certain administrative or supplementary support activities where the client has given informed consent. It would establish a civil penalty not to exceed $50,000 per violation.

Other States. While New York’s law is focused on suicide prevention, other states have passed laws that prevent AI from posing as a therapist and from disclosing patient mental health data. Some laws focus on the authorized uses of AI in clinical contexts. For example:

  • Utah: On March 25, 2025, the state enacted House Bill 452 (HB 452), regulating the use of so-called “mental health chatbots.” Effective May 7, 2025, HB 452 prohibits suppliers of mental health chatbots from disclosing user information to third parties and from utilizing such chatbots to market products or services, except under specified conditions. The statute further requires suppliers to provide a clear disclosure that the chatbot is an artificial intelligence system and not a human. 
  • Texas: On June 22, 2025, the state enacted the Texas Responsible AI Governance Act, prohibiting the development or deployment of AI systems in a manner that intentionally aims to incite or encourage a person to (1) commit physical self-harm, including suicide; (2) harm another person; or (3) engage in criminal activity.  
  • Illinois: On Aug. 1, 2025, Illinois enacted House Bill 1806 (HB 1806), the Therapy Resources Oversight Act (TROA), which took effect immediately upon enactment. TROA delineates the permissible applications of artificial intelligence (AI) in the provision of therapy and psychotherapy services. Specifically, AI may be employed solely for purposes of “administrative support” or “supplementary support,” provided that a licensed professional retains full responsibility for all interactions, outputs, and data associated with the system.

The statute further restricts the use of AI for supplementary support in certain circumstances. TROA expressly prohibits the use of AI for therapeutic decision-making, client interaction “in any form of therapeutic communication[,]” the generation of treatment plans absent review and approval by a licensed professional, and the detection of emotions or mental states.

  • Nevada: On June 5, 2025, Assembly Bill 406 (AB 406) became law, prohibiting the practice of mental and behavioral health services by AI. Effective July 1, 2025, AB 406 forbids any representation that AI is “capable of providing professional mental” health care. 

The statute further proscribes providers from representing that a user “may interact with any feature of the artificial intelligence system which simulates human conversation in order to obtain professional mental or behavioral health care,” or that AI itself is “a therapist, a clinical therapist, a counselor, a psychiatrist, a doctor[,]” or any other category of mental or behavioral health care provider.

In addition, the law prohibits the programming of AI to provide mental and behavioral health care services as “if provided by a natural person[.]”

While Congress may be reluctant to restrict AI innovation, preventing online harm to children may be an area where it acts. In May 2025, Senator Marsha Blackburn (R-TN) introduced S. 1748, the “Kids Online Safety Act,” which would require that a “covered platform shall exercise reasonable care in the creation and implementation” of design features to prevent harms to minors, including eating and substance abuse disorders and suicidal behaviors, as well as depressive and anxiety disorders.

Violations of the proposed law would be treated as an unfair or deceptive act or practice under the Federal Trade Commission Act, with enforcement by state attorneys general.

Conclusion

Some of these solutions are easily technically actionable—such as requiring AI chatbots to identify themselves and not misrepresent their capabilities.

For others, it is more difficult to draw a link to technically actionable solutions that produce real-world reductions in risk. LLMs require immense quantities of text, related and otherwise, to understand language, generate texts in different styles, and attempt to reduce bias.

Even with curation, innocuous texts, such as children’s novels or historical accounts that describe what the villain does in great detail, can cause issues when presented out of context to someone who is already distressed.

These issues are not limited to telling truth from fiction or modern from historic. For example, guardrails might limit an LLM to sourcing medical advice from current, legitimate sources.

However, if the conversation is already of a dark or distressing nature, pattern matching might cause the LLM to present that advice in a narrative tone and style derived from crime novels or horror movies.
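
One concrete form such a source guardrail can take is an allowlist applied at the retrieval step, as in the sketch below. The domain list and helper names are hypothetical; the sketch also illustrates the limitation just described, since restricting where information comes from does not by itself control the tone in which the model later paraphrases it.

```python
# Sketch of a source-allowlist guardrail on retrieval. The domains listed are
# illustrative; restricting *where* information comes from does not control
# the *tone* in which a language model later rephrases it.
from urllib.parse import urlparse

VETTED_DOMAINS = {"medlineplus.gov", "cdc.gov", "who.int"}  # hypothetical allowlist

def is_vetted(url: str) -> bool:
    host = urlparse(url).netloc.lower()
    return any(host == d or host.endswith("." + d) for d in VETTED_DOMAINS)

def filter_sources(candidate_urls: list[str]) -> list[str]:
    """Keep only results from vetted sources before they reach the model."""
    return [u for u in candidate_urls if is_vetted(u)]

print(filter_sources([
    "https://medlineplus.gov/depression.html",
    "https://crime-fiction-forum.example.net/thread/123",
]))
```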

It may be possible to build more robust guardrails, better curate training data, and develop techniques that allow AI chatbots to better understand such concepts, provide appropriate context, and adapt to a user’s mental state, while not excluding those who may not speak or act in the way the AI chatbot expects.

Thankfully, such advances are the subject of resource-intensive, fundamental research that should be encouraged. As recent cases show, however, it may also be necessary to ascribe liability to those who are in a position to address such risks.

* * * *

Frances M. Green is Counsel at Epstein Becker Green and a working group member of the AI Safety Institute Consortium of the National Institute of Standards and Technology (NIST). Dr. Raymond Sheh is an Associate Research Scientist at Johns Hopkins University and a Guest Researcher at NIST. Eleanor T. Chung is an Associate at Epstein Becker Green. Ann W. Parks, an attorney with the firm, contributed to the preparation of this article.

Opinions are the authors’ own and not necessarily those of their employers. 

Reprinted with permission from the September 22, 2025, edition of the New York Law Journal © 2025 ALM Global Properties, LLC. All rights reserved. Further duplication without permission is prohibited, contact 877-256-2472 or asset-and-logo-licensing@alm.com.
