In the wake of a lawsuit filed in federal district court in California in August—alleging that an artificial intelligence (AI) chatbot encouraged a 16-year-old boy to commit suicide—a similar suit filed in September now claims that an AI chatbot is responsible for the death of a 13-year-old girl.
It’s the latest development illustrating a growing tension between AI’s promise to improve access to mental health support and the alleged perils of unhealthy reliance on AI chatbots by vulnerable individuals. This tension is evident in recent reports that some users, particularly minors, are becoming addicted to AI chatbots, leading them to sever ties with supportive adults, lose touch with reality and, in the worst cases, engage in self-harm or harm to others.
While not yet reflected in diagnostic manuals, experts are recognizing the phenomenon of “AI psychosis”—distorted thoughts or delusional beliefs triggered by interactions with AI chatbots. According to Psychology Today, the term describes cases in which AI models have amplified, validated, or even co-created psychotic symptoms with individuals. Evidence indicates that AI psychosis can develop in people with or without a preexisting mental health issue, although it is more common among those with one.