Generative Artificial Intelligence (“AI”) tools like ChatGPT, Scribe, and Jasper have surged in popularity in recent years, with widespread personal and professional uses supplementing, if not largely displacing, traditional search engines. Workplace applications for AI interactions, which algorithmically simulate human reasoning and inference, are expanding as quickly as users can draft new prompts requesting designs, how-to guides, correspondence, and countless other outputs. AI tools have quickly transitioned from an amusing novelty into essential resources for professionals and businesses, driving innovation and efficiency. Businesses use these tools for an ever-expanding list of purposes, including brainstorming ideas based on patterns and data analysis; creating and memorializing documents, procedures, manuals, and tutorials; generating marketing and other client-facing materials; drafting communications; summarizing documents; explaining concepts and processes; and even generating code.
As these tools become more integrated into workplace processes, courts and litigants are beginning to confront whether, and to what extent, AI searches and “chats” are discoverable in litigation. Because the Federal Rules of Civil Procedure permit broad discovery of any nonprivileged matter that is relevant to a party's claim or defense and proportional to the needs of the case, litigants may be entitled to compel production of information and communications generated or processed by AI platforms related to the facts in dispute. Fed. R. Civ. P. 26(b)(1); In re OpenAI, Inc., Copyright Infringement Litig., No. 23-CV-08292, 2025 WL 1652110, at *2 (S.D.N.Y. May 30, 2025). Just as local news headlines are replete with instances of internet searches being used as evidence in criminal cases[1], real-time AI “interactions” may well be subject to the same disclosure requirements in civil litigation.
Discussion of AI in the workplace typically focuses on whether an AI tool and its underlying model have a discriminatory impact, meaning whether the AI's output creates an unlawful disparate impact against individuals belonging to a protected category.
That discussion, however, rarely centers on the types of training data used, or on whether the training data itself could have a harmful effect on the workers tasked with training the AI model.