Generative Artificial Intelligence (“AI”) tools like ChatGPT, Scribe, Jasper, and others have surged in popularity in recent years, with widespread personal and professional uses supplementing, if not largely displacing, traditional search engines. Workplace applications for AI interactions, which algorithmically simulate human reasoning and inference, are expanding as quickly as users can draft new prompts requesting designs, how-to guides, correspondence, and countless other outputs. AI has quickly transitioned from an amusing novelty to an essential resource for professionals and businesses, driving innovation and efficiency. Businesses use these tools for an ever-expanding list of purposes, including brainstorming ideas based on patterns and data analysis; creating and memorializing documents, procedures, manuals, and tutorials; generating marketing and other client-facing materials; drafting communications; summarizing documents; explaining concepts and processes; and even generating code.
As these tools become more integrated into workplace processes, courts and litigants are beginning to confront the question of whether, and to what extent, AI searches and “chats” are discoverable in litigation. Because the Federal Rules of Civil Procedure permit broad discovery regarding any nonprivileged matter that is relevant to any party's claim or defense and proportional to the needs of the case, litigants may be entitled to compel production of information and communications generated or processed by AI platforms related to the facts in dispute. Fed. R. Civ. P. 26(b)(1); In re OpenAI, Inc., Copyright Infringement Litig., No. 23-CV-08292, 2025 WL 1652110, at *2 (S.D.N.Y. May 30, 2025). Just as local news headlines are replete with instances of internet searches used as evidence in criminal cases[1], real-time AI “interactions” are likely to be subject to similar disclosure requirements in civil litigation.
On July 25, 2025, the Eleventh Circuit issued an opinion in United States ex rel. Sedona Partners LLC v. Able Moving & Storage Inc. (No. 22-13340) addressing an important procedural question under the False Claims Act (FCA) and other fraud-based statutes: may a plaintiff rely on information learned during discovery to meet Rule 9(b)’s heightened pleading standard in an amended complaint? The court concluded that the answer is yes.
Rule 9(b) requires that allegations of fraud be pleaded “with particularity.” Defendants frequently rely on this standard at the motion-to-dismiss stage, aiming to defeat weak FCA complaints before discovery begins. In 2019, an unpublished Eleventh Circuit decision, Bingham v. HCA, Inc., 783 F. App'x 868 (11th Cir. 2019), suggested that plaintiffs could not use discovery to cure a deficient complaint. The concern was that allowing such an approach would incentivize speculative suits filed without adequate factual grounding.
Biometric technologies—such as fingerprint scanners, facial recognition systems, and retina scans—are now commonplace in modern business operations. From employee timekeeping systems to facility security and customer-facing applications, these tools offer businesses efficiency and convenience. But those same conveniences have sparked backlash in the form of privacy litigation. In Illinois especially, companies are facing a surge of class-action lawsuits under the state’s Biometric Information Privacy Act (“BIPA”), a pioneering law that imposes strict requirements on the use of biometric data and hefty penalties for noncompliance. This trend is not confined to Illinois: a growing patchwork of similar laws in other states means that using biometrics without proper safeguards can expose companies nationwide to significant statutory damages and legal risk.
In the wake of the Dobbs decision, which eliminated the federal constitutional right to abortion, individual states were left to regulate or ban the procedure. A patchwork of state laws followed, with some states enacting total bans, others preserving access, and considerable variation in between. Beyond regulating or restricting access to the procedure, certain states have criminalized seeking, providing, or helping others obtain or provide an abortion, with particular exposure for providers offering telehealth services; in New York, however, these activities remain legal and protected. New York’s “Shield Law” consists of several statutes enacted to protect providers offering, and patients seeking, abortion care in New York against the imposition of criminal and civil liability originating from outside the state. According to the New York State Office of the Attorney General, “[t]he Shield Law broadly prohibits law enforcement and other state officials from cooperating with investigations into reproductive health care (“protected health care”) so long as the care was lawfully provided in New York.”[1] Moreover, “[w]ith respect to reproductive health care specifically, these protections apply even if the care was provided via telehealth to a patient located out-of-state, so long as the provider was physically present in New York.”[2]
New York’s Shield Law creates a range of substantive protections for reproductive health care providers and patients.