In the wake of a lawsuit filed in federal district court in California in August—alleging that an artificial intelligence (AI) chatbot encouraged a 16-year-old boy to commit suicide—a similar suit filed in September now claims that an AI chatbot is responsible for the death of a 13-year-old girl.
It’s the latest development illustrating a growing tension between AI’s promise to improve access to mental health support and the alleged perils of unhealthy reliance on AI chatbots by vulnerable individuals. This tension is evident in recent reports that some users, particularly minors, are becoming so dependent on AI chatbots that they sever ties with supportive adults, lose touch with reality and, in the worst cases, engage in self-harm or harm to others.
While not yet reflected in diagnostic manuals, experts are increasingly recognizing the phenomenon of “AI psychosis”—distorted thoughts or delusional beliefs triggered by interactions with AI chatbots. According to Psychology Today, the term describes cases in which AI models have amplified, validated, or even co-created psychotic symptoms with individuals. Evidence indicates that AI psychosis can develop in people with or without a preexisting mental health condition, although it is more common among those with one.