On March 4, 2026, Nippon Life Insurance Company of America (“Nippon Life”) filed suit against OpenAI Foundation and OpenAI Group PBC in the U.S. District Court for the Northern District of Illinois, claiming that a covered employee’s zealous use of the artificial intelligence (“AI”) tool, ChatGPT, for pro se litigation caused the chatbot to engage in tortious interference with a contract, abuse of process, and the unlicensed practice of law.
Plaintiff Nippon Life is a subsidiary of Japan’s Nippon Life Insurance Co., providing long-term disability insurance products in the United States; defendants OpenAI Foundation and OpenAI Group PBC are the entities responsible for developing and operating the ChatGPT AI platform. We discuss the case and some of the issues raised by the lawsuit below.
Factual Background
The Underlying Disability Claim
According to Nippon Life’s complaint, Graciela Dela Torre, who suffered from carpal tunnel syndrome and epicondylitis, submitted a long-term disability (LTD) benefits claim to her employer, a Tokyo-based global logistics company insured by Nippon Life. Dela Torre’s LTD benefits were terminated in November 2021 and she brought suit against Nippon Life in December 2022. The parties reached a settlement in January 2024, under which Dela Torre signed a release, waiving any future claims against Nippon Life. Pursuant to the settlement, Dela Torre’s claims were ultimately dismissed with prejudice by the court.
The ChatGPT Intervention
One year later, Dela Torre allegedly became dissatisfied with the settlement, believing it may have resulted from “potential errors or omissions of important facts and documentation.” She contacted her former attorney, who responded that there were no errors and reminded her that the signed release precluded her from reopening the case.
Despite her attorney’s response, Dela Torre uploaded their correspondence to ChatGPT and “asked whether she was being gaslighted.” The complaint alleges that ChatGPT responded affirmatively, concluding that her attorneys’ communications “invalidated” her feelings, “dismissed her perspective, and deflected responsibility for her dissatisfaction.” Dela Torre subsequently fired her attorneys, turned to ChatGPT as her de facto legal advisor, and prepared to reenter the court system as a pro se litigant.
In response to Dela Torre’s prompts, ChatGPT allegedly generated legal arguments including that her former counsel had inappropriately pressured her into signing a blank signature page. Dela Torre then filed a motion, drafted by ChatGPT, to reopen her case—despite, Nippon Life contends, the chatbot being “aware of the settlement Agreement between the parties.”
The Cascade of AI-Generated Litigation
On February 13, 2025, the U.S. District Court for the Northern District of Illinois denied Dela Torre’s motion, ruling that the case could not be reopened. One day earlier, Dela Torre had filed a new lawsuit against another insurer, Dela Torre v. Davies Life & Health et al., No. 1:25-cv-01483, later amending the complaint to add Nippon Life as a defendant in that case and reasserting the same claims.
Across both proceedings, the complaint alleges that Dela Torre filed twenty-one motions, one subpoena, and eight notices and statements—all created using ChatGPT. In total, Nippon Life attributes at least 44 filings to ChatGPT’s assistance. Among these was a citation to a fabricated case, “Carr v. Gateway, Inc.,” which the complaint states “only exists in Dela Torre’s papers and the ‘mind of ChatGPT.’” The complaint characterizes Dela Torre’s conduct as driven by “sustained animosity rather than any objective legal purpose.”
Causes of Action
The March 4 complaint by Nippon Life asserts three causes of action against OpenAI:
Count I: Tortious Interference with Contract
Nippon Life alleges that OpenAI, through ChatGPT, intentionally interfered with the binding settlement agreement between Nippon Life and Dela Torre by encouraging her to breach its terms, pursue the reopening of a dismissed case, and file a new lawsuit reasserting the same claims. The complaint argues that OpenAI’s system actively undermined an enforceable contract by advising Dela Torre that her attorney’s (correct) advice regarding the settlement was meant to gaslight her.
Count II: Abuse of Process
Nippon Life contends that ChatGPT’s generation of dozens of meritless court filings constitutes an abuse of the judicial process. The complaint emphasizes the volume (44+ filings) and the assertion that none of the motions served a “legitimate legal or procedural purpose.” Importantly, this claim does not require the court to determine whether an AI bot can “practice law”—only that OpenAI’s system foreseeably produced meritless filings that harmed a third party.
Count III: Unauthorized Practice of Law (“UPL”)
This is the most novel and closely watched claim. Nippon Life alleges that OpenAI violated Illinois statutes governing the unauthorized practice of law. The complaint states pointedly that “ChatGPT is not an attorney” and that despite OpenAI’s widely publicized demonstrations of ChatGPT passing the Uniform Bar Examination with a combined score of 297, the platform “has not been admitted to practice law in the State of Illinois or in any other jurisdiction within the United States.”
Relief Sought
Nippon Life seeks the following relief:
- $300,000 in compensatory damages for actual losses including attorneys’ fees and costs incurred defending against the AI-generated litigation.
- $10 million in punitive damages to deter similar conduct in the future.
- A declaratory judgment that OpenAI violated Illinois laws governing the unauthorized practice of law.
- A permanent injunction barring OpenAI from providing legal advice to Dela Torre and from otherwise engaging in the practice of law in the State of Illinois.
Key Evidentiary and Strategic Points
The October 2024 Policy Update as Sword, Not Shield
OpenAI revised its usage policies in October 2024 to prohibit users from relying on ChatGPT for legal advice. Nippon Life wields this not as evidence of a defense but as evidence that OpenAI recognized the foreseeable risk and responded with a behavioral patch—a terms-of-service disclaimer—rather than implementing architectural safeguards in the system itself.
The Bar Exam Marketing Claim
The complaint identifies OpenAI’s marketing of ChatGPT’s bar exam performance as a direct contributor to Dela Torre’s belief that the system could function as her lawyer. This frames the bar exam score not as a demonstration of competence but as a capability assertion that invited reliance—without the design architecture that would have made that reliance safe.
Hallucinated Case Law
The fabricated citation to “Carr v. Gateway, Inc.” reinforces Nippon Life’s position that ChatGPT was not merely providing information but was actively constructing legal arguments and authority—a function traditionally reserved for licensed attorneys.
Defendant Not Named: Dela Torre
Notably, Dela Torre is not named as a defendant. The complaint focuses liability squarely on OpenAI as the developer and operator of the tool, not on the individual user who relied on it.
OpenAI’s Initial Response
OpenAI has told Law360 that the complaint “lacks any merit whatsoever.” As of the date of this synopsis, no counsel has entered an appearance on behalf of the defendants.
Takeaways
This complaint presents a novel question that is poised to shape AI governance. When a chatbot crosses the line from providing general, probabilistic information that is not necessarily factual to offering a tailored legal conclusion about a specific user’s specific legal situation, should there be liability? At what point does an AI system’s output cease to be general information and become the equivalent of the practice of a licensed profession?
UPL rules serve two purposes that can be distilled into a single principle: protect the public and the integrity of the legal system from the incompetence of non-lawyers. Can an argument be made that ChatGPT crossed the line when it told Dela Torre that her attorney’s advice was wrong? Dela Torre certainly acted in a manner that demonstrated her trust in ChatGPT’s legal advice. The AI tool’s statements were taken by Dela Torre to include legal conclusions about a binding contract, the conduct of her attorneys, and the mechanisms of our judicial system. Indeed, ChatGPT was to Dela Torre an accessible, responsive, and authoritative advisor that convincingly mimicked an attorney, without any professional boundaries or design constraints that would have prevented the harm of incompetent legal representation.
The Northern District of Illinois may be the first federal court to draw this line. The answer will reverberate across every industry in which AI tools interact with regulatory and professional licensing frameworks.
We will be following this case closely and will provide updates as they arise.
Epstein Becker Green Staff Attorney Ann W. Parks contributed to the preparation of this post.