In the wake of an August lawsuit filed in California state court alleging that an artificial intelligence (AI) chatbot encouraged a 16-year-old boy to commit suicide, a similar suit filed in September now claims that an AI chatbot is responsible for the death of a 13-year-old girl.

It’s the latest development illustrating a growing tension between AI’s promise to improve access to mental health support and the alleged perils of unhealthy reliance on AI chatbots by vulnerable individuals. This tension is evident in recent reports that some users, particularly minors, are becoming addicted to AI chatbots, causing them to sever ties with supportive adults, lose touch with reality and, in the worst cases, engage in self-harm or harm to others.

While not yet reflected in diagnostic manuals, experts are recognizing the phenomenon of “AI psychosis”—distorted thoughts or delusional beliefs triggered by interactions with AI chatbots. According to Psychology Today, the term describes cases in which AI models have amplified, validated, or even co-created psychotic symptoms with individuals. Evidence indicates that AI psychosis can develop in people with or without a preexisting mental health issue, although the former is more common.

A recent article in Modern Healthcare reported that the increased scrutiny of AI chatbots is not preventing digital health companies from investing in AI development to meet the rising demand for mental health tools. Yet the issue of AI and mental health encompasses not only minors, developers, and investors but also health care providers, therapists, and employers in all industries, including health care. On October 1, 2025, a coalition of leaders from academia, health care, tech, and employee benefits announced the formation of an AI in Mental Health Safety & Ethics Council, a cross-disciplinary team advancing the development of universal standards for the safe, ethical, and effective use of AI in mental health care. Existing lawsuits from parents are demonstrating various avenues for liability in a broad range of contexts, and the seriousness of those lawsuits may prompt Congress to act. In this post, we explore some of the many unfolding developments.

The Lawsuits: Three Examples

Cynthia Montoya and William Peralta’s lawsuit, filed in the U.S. District Court for the District of Colorado on September 15, alleges that defendants including Character Technologies, Inc. marketed a product that ultimately caused their daughter to commit suicide by hanging within months of her opening a C.AI account. They allege claims including strict product liability (defective design); strict liability (failure to warn); negligence per se (child sexual abuse, sexual solicitation, and obscenity); negligence (defective design); negligence (failure to warn); wrongful death and survivorship; unjust enrichment; and violations of the Colorado Consumer Protection Act.

Matthew and Maria Raine’s lawsuit, filed in California Superior Court, County of San Francisco, on August 26, alleges that defendants including OpenAI, Inc. created a product, ChatGPT, that helped their 16-year-old son commit suicide by hanging. The Raines allege claims including strict liability (design defect and failure to warn); negligence (design defect and failure to warn); violations of California’s Business and Professions Code (including the Unfair Competition Law) and of the California Penal Code provision criminalizing aiding, advising, or encouraging another to commit suicide; and wrongful death and survivorship.

Megan Garcia filed suit in U.S. District Court for the Middle District of Florida (Orlando) in October 2024 against Character Technologies Inc. and others, claiming that her son’s interactions with an AI chatbot caused his mental health to decline to the point where the teen committed suicide to “come home” to the bot. An amended complaint filed in July 2025 alleges strict product liability (defective design); strict liability (failure to warn); aiding and abetting; negligence per se (sexual abuse and sexual solicitation); negligence (defective design); negligence (failure to warn); wrongful death and survivorship; unjust enrichment; and violations of Florida’s Deceptive and Unfair Trade Practices Act.

Congressional Scrutiny

The Montoya/Peralta lawsuit was filed the same week as a September 16, 2025, hearing of the U.S. Senate Judiciary Committee on “Examining the Harm of AI Chatbots.” The witness panel included Matthew Raine and Megan Garcia as well as “Jane Doe,” a mother from Texas who filed suit in December 2024 alleging that a chatbot her son used suggested that “killing us, his parents, would be an understandable response to our efforts [to limit] his screen time.”

Senator Josh Hawley (R-MO), who chairs the Senate Judiciary Committee’s Subcommittee on Crime and Counterterrorism and who conducted the hearing, took the issue seriously:

The testimony that you are going to hear today is not pleasant. But it is the truth and it’s time that the country heard the truth. About what these companies are doing, about what these chatbots are engaged in, about the harms that are being inflicted upon our children, and for one reason only. I can state it in one word, profit.

Representatives from certain companies that develop AI chatbots reportedly declined the invitation to appear at the congressional hearing or to send a response.

Potential FDA and FTC Oversight

On September 11, 2025, the Food and Drug Administration (FDA) announced that a November 6 meeting of its Digital Health Advisory Committee would focus on “Generative AI-enabled Digital Mental Health Medical Devices.” FDA is establishing a docket for public comment on this meeting; comments received on or before October 17, 2025, will be provided to the committee.

Although FDA has reviewed and authorized certain digital therapeutics, generative AI products currently on the market have generally not been subject to FDA premarket review, to quality system regulations governing product design and production, or to postmarket surveillance requirements. Were FDA to change the playing field for these products, it could have a major impact on access to them in the U.S. market, producing substantial headwinds (e.g., barriers to market entry) or tailwinds (e.g., enhanced consumer trust and competitive benefits for FDA-cleared products), depending on your point of view.

All stakeholders (practitioners, software developers and innovators, investors, and the public at large) should be paying close attention to FDA developments and considering how to advocate effectively for their points of view. Innovators also should be thinking about how to future-proof themselves against major disruptions from likely regulatory changes by, for example, building datasets that substantiate product value to individuals, implementing procedures and processes to mitigate risks introduced through product design, and adopting strategies to identify and address emergent safety concerns. If products’ regulatory status is called into doubt or clearly changes in the future, these steps can help innovators be prepared to engage with FDA about their products if contacted.

The Federal Trade Commission (FTC) announced its own inquiry on September 11, issuing orders to seven companies providing consumer-facing AI chatbots to provide information on how those companies measure, test, and monitor potentially negative impacts of this technology on children and teens. The inquiry “seeks to understand what steps, if any, companies have taken to evaluate the safety of their chatbots when acting as companions, to limit the products’ use by and potential negative effects on children and teens, and to apprise users and parents of the risks associated with the products.”

The timing here is not coincidental. FDA and FTC routinely coordinate on enforcement of laws concerning consumer (nonprescription) products and will likely be considering how best to implement any regulatory changes.

Federal Legislative Efforts

Federal legislators recently introduced bills to prevent harm to minors’ mental health from AI chatbots; these proposals provide for enforcement by the FTC and state attorneys general. Key federal bills include:

  • S. 2714: The CHAT Act would require AI chatbots to implement age verification measures and to establish certain protections for minor users. Among other things, the legislation would require verifiable parental consent before a minor may access and use a companion AI chatbot; immediate notice to the parent of any interaction involving suicidal ideation; and blocked access to any companion AI chatbot that engages in sexually explicit communication. The chatbot would also have to notify users every 60 minutes that they are not engaging with a human. A covered entity, defined as “any person that owns, operates, or otherwise makes available a companion AI chatbot to individuals in the United States,” would be required to monitor companion AI chatbot interactions for suicidal ideation. Violations of S. 2714 would be enforced by the FTC or through civil actions brought by state attorneys general.
  • H.R. 5360: This legislation would direct the FTC to develop and make publicly available educational resources for parents, educators, and minors on the safe and responsible use of AI chatbots by minors.

State Legislative Efforts

States including Utah, California, Illinois, and New York have already undertaken legislative efforts relating to AI and mental health, seeking to impose obligations on developers and to clarify permissible applications of AI in mental health therapy (see a summary by EBG colleagues here). New York’s S. 3008, “Artificial Intelligence Companion Models,” takes effect November 5. It defines “AI companion” as an AI “designed to simulate a sustained human or human-like relationship with a user” that facilitates “ongoing engagement” and asks “unprompted or unsolicited emotion-based questions” about “matters personal to the user.” The bill also defines “human relationships” as those involving “intimate, romantic or platonic interactions or companionship.” The AI companion must have a protocol for detecting “user expressions of suicidal ideation or self harm” and must notify the user of a suicide prevention and behavioral health crisis hotline. It must also notify the user, at the beginning of any interaction and at least every three hours throughout the interaction, that the user is not communicating with a human.

On September 22, 2025, the California legislature presented SB 243, Companion Chatbots, to the governor for signature; the bill would amend the Business and Professions Code. If signed, the law will take effect July 1, 2027. It closely tracks New York’s law: it requires the chatbot to notify the user every three hours that it is not human, and it requires protocols to detect suicidal ideation. Notably, the law provides a private right of action for injunctive relief, damages of up to $1,000 per violation, and attorney’s fees and costs.

Illinois HB 1806, the Therapy Resources Oversight Act, took effect on August 1, 2025. It is designed to ensure that therapy or psychotherapy services are delivered by qualified, licensed or certified professionals and to protect consumers from unlicensed or unqualified providers, including unregulated AI systems. A licensed professional may use AI to provide “supplementary support in therapy or psychotherapy services where the licensed professional maintains full responsibility for all interactions, outputs and data systems.” The law prohibits any individual, corporation, or entity from providing, advertising, or offering therapy or psychotherapy services, including through Internet-based AI, unless the services are conducted by a licensed professional. A proposed law in New York, S. 8484, would also prohibit licensed mental health professionals from using AI tools in client care, except in administrative or supplemental support activities where the client has given informed consent.

Other comprehensive state laws relating to AI and consumer protection, such as the impending law in Colorado, may also be implicated in the context of AI chatbots and mental health.

Takeaways for the Health Care Industry (Including Health Care Employers)

The issues surrounding AI mental health chatbots, potential liability, and the increasing likelihood of regulatory action continue to develop quickly, against a federal backdrop that favors fostering AI innovation. Developers and investors should already be following the cases and laws in this area. Health care providers and social workers should familiarize themselves with the specific laws that could affect them as practitioners, with the chatbot apps they recommend or use, and with related data protection issues. We add here that more employers are offering mental health chatbots to employees, which could raise liability concerns:

  • Risk of misdiagnosis or inappropriate treatment. If the bot’s algorithms are flawed or its responses inadequate, and an employee suffers harm, the employer could face claims of negligence for selecting or deploying an inadequate therapeutic tool. Courts may find that employers assumed a duty of care by offering what employees reasonably perceived as mental health treatment.
  • Privacy and data security. Employees may disclose sensitive information about mental health conditions, trauma, substance use, or other protected health information. If this data is breached or used inappropriately, employers could face exposure under the Health Insurance Portability and Accountability Act, state privacy statutes, or disability discrimination laws such as the Americans with Disabilities Act.
  • Practice of medicine. Employers must consider whether they are practicing medicine without proper licensing or credentials, which could trigger regulatory action or professional liability claims—especially if the bots cross the line from general wellness into clinical mental health treatment.
  • Voluntary consent. Employees may feel coerced into using these bots, particularly if participation is tied to health insurance benefits or workplace wellness incentives.

The issues concerning the safety and security of wellness bots and various therapeutic AI modalities continue to evolve. The EBG team will continue to monitor these developments and provide updates over time. Should you have questions, please reach out to the authors.

Epstein Becker Green Staff Attorney Ann W. Parks contributed to the preparation of this post.
