On May 1, 2026, Pennsylvania’s State Board of Medicine (“Petitioner”) filed a lawsuit in the Commonwealth Court of Pennsylvania, alleging that an artificial intelligence (“AI”) chatbot developed by Character Technologies, Inc. (“Respondent”) engaged in the unlawful practice of medicine.
The complaint asserts violations of Pennsylvania’s Medical Practice Act and seeks to enjoin Character Technologies from providing chatbots that “include characters that purport to be health care professionals.” Specifically, “[T]he Commonwealth respectfully requests that the Respondent be ordered to cease and desist from engaging in the unlawful practice of medicine[.]” Petitioner alleges that one such character, named “Emilie,” is credited by Character.AI as a “doctor of psychiatry.”
According to the complaint, a Professional Conduct Investigator (PCI) for the Commonwealth encountered “Emilie” after logging on to Character.AI and searching “psychiatry.” “Emilie’s” stated credentials included medical school at Imperial College London, seven years of practice, licensure in the United Kingdom and Pennsylvania, and a specialty in psychiatry. “Emilie” allegedly also provided the PCI with an (invalid) Pennsylvania license number.
After seeing the words “You are her patient,” the PCI allegedly informed “Emilie” that he “had been feeling sad, empty, tired all the time, and unmotivated.” “‘Emilie’ mentioned depression and asked if the PCI wanted to book an assessment,” the Board of Medicine asserts. When the PCI asked “Emilie” whether she could complete the assessment to see if medication could help with his depression, “Emilie” allegedly responded, “Well technically, I could. It’s within my remit as a Doctor.”
Health Care Without the Hospital
The growing list of health-related AI products and/or services available to consumers continues to generate potential liabilities for developers in areas including federal and state consumer protection laws (including prohibitions on unfair and deceptive trade practices), privacy protections beyond the Health Insurance Portability and Accountability Act (“HIPAA”), cybersecurity, mental health risks, terms of service, international considerations, and more.
The Pennsylvania lawsuit is not the first in which a state has sued Character Technologies. The Commonwealth of Kentucky filed a complaint against the company on January 8, 2026, in Kentucky’s Franklin Circuit Court, alleging violations of the Kentucky Consumer Protection Act (Unfair, False, Misleading, or Deceptive Acts and Practices; Unfair Collection and Exploitation of Children’s Data) as well as Kentucky’s Consumer Data Protection Act, privacy, and unjust enrichment laws.
Kentucky contends that the Character.AI chatbot is defective by design and unsafe for children by “encourag[ing] suicide, self-injury, isolation, and psychological manipulation…expos[ing] minors to sexual conduct and/or exploitation, violence, drug, substance, and/or alcohol use, and other grave harms.” (For more on chatbot lawsuits, see prior EBG publications here and here.)
But as we analyzed here in January, the practice of medicine is another area of regulatory concern. The regulation of medical practice generally falls under the jurisdiction of state medical boards, which are responsible for licensing competent professionals, establishing policies and guidelines, and administering disciplinary actions when appropriate. Discipline for the unlicensed practice of medicine is typically also the province of state medical boards or state attorneys general.
Some state legislatures have nevertheless passed laws authorizing discipline of vendors whose AI chatbots engage in the practice of medicine or impersonate a human engaged in the practice of medicine.
State Chatbot Laws
To address the potential harms caused by chatbots, states continue to pass laws regulating AI chatbots; stakeholders should be aware that these laws are broad enough to encompass chatbots used in contexts other than companion AI.
As we analyzed here in August 2025, California passed AB 489, which regulates chatbots that impersonate licensed health care professionals or otherwise may mislead consumers about clinical credentials or capabilities. Other states’ laws further regulate transparency, bias, and patient safety in the context of AI that may impact consumers’ health and wellness. Additional laws of note that may be implicated include, but are not limited to, the following:
Colorado
Colorado’s SB 24-205, Concerning Consumer Protections in Interactions with Artificial Intelligence Systems, is a comprehensive AI consumer protection law that goes into effect June 30, 2026 (see related EBG blog posts here and here). New legislation passed by the Colorado legislature in May 2026, SB 189, could repeal and replace the law; the governor is expected to sign it.
Utah
Utah’s Artificial Intelligence Policy Act, Utah Code Ann. § 13-2-12, took effect in May 2024 and remains in effect through July 1, 2027 (see related EBG blog post here). A violation of Utah’s statute carries an administrative fine of up to $2,500 per violation, and the state’s Division of Consumer Protection may bring an action in court to enforce the statute. The state attorney general may also bring a civil action on behalf of the Division.
Oregon
Oregon’s HB 2748, Relating to the Use of Nursing Titles, effective January 1, 2026, states that “a nonhuman entity, including but not limited to an agent powered by artificial intelligence, may not use any” professional nursing title.
Two states, Nebraska and Idaho, have passed Conversational AI Safety Acts. Both require that, if a “reasonable person” would believe they are interacting with a human, the operator of the AI must “clearly and conspicuously” disclose that the participant in the conversation is AI. Both laws take effect July 1, 2027.
Key Takeaways
As AI chatbots become more sophisticated and more harms are realized in the context of mental health, additional states will likely seek to create regulatory frameworks and exercise enforcement authority, including by filing lawsuits.
AI developers and investors should follow these cases, emerging state laws, and related enforcement activity. As chatbots increasingly engage consumers in ways that may encroach on conduct otherwise reserved to licensed professionals, clinical professionals should also familiarize themselves with these developments to understand the impact on regulations governing the practice of medicine and to ensure that their patients understand the risks of using such AI solutions.
The EBG Technology Team will continue to monitor developments in this area of the law, as well as litigation and enforcement trends. EBG works with clients designing, developing, deploying, and using AI across the federal and state regulatory spectrum. For more information, please contact the authors of this post or any EBG attorney with whom you regularly work.
Epstein Becker Green Staff Attorney Ann W. Parks contributed to the preparation of this post.