The Cruz v. Fireflies.AI case warns of legal risks from AI meeting tools that record voices without proper notice or consent, violating Illinois privacy laws. As AI use grows, organizations face challenges like data breaches, accuracy issues, and legal violations. To protect themselves, they should ensure transparency, get clear consent, conduct vendor checks, and implement strong policies. The case emphasizes the need to prioritize privacy as AI technology advances.
Data privacy
A recent class action lawsuit—Cruz v. Fireflies.AI Corp.—underscores the legal risks associated with one of the newer artificial intelligence (AI) tools that promise to ease the demands of an executive’s busy day—AI meeting assistants.
The complaint alleges that an AI meeting assistant tool developed by Fireflies.AI records, analyzes, transcribes, and stores meeting participants’ voices—including those of individuals unaware of the tool—in violation of the notice, written consent, and retention requirements of the Illinois Biometric Information Privacy Act (BIPA). Plaintiff Katelin Cruz (Cruz), an Illinois resident whose voice was recorded by Fireflies.AI during a meeting of a nonprofit group, contends that the tool’s “speaker recognition” functionality creates and retains voiceprints, which implicates BIPA compliance. Further, Cruz alleges the tool lacks both a publicly available retention policy and adequate disclosure practices regarding biometric collection.
This litigation previews the regulatory scrutiny AI transcription and summarization tools will likely encounter, especially as adoption of these tools continues to accelerate. At J.P. Morgan’s Healthcare Conference in January 2026, for example, “nearly all providers... touted widespread adoption of ambient listening software that has reduced clinician charting time and improved coding accuracy.”1 Critically, these providers and AI developers need to be aware of the risks, including how these tools intersect with the requirements of applicable biometric privacy laws.
No Consent?
In her 11-page complaint—filed on Dec. 18, 2025, in the U.S. District Court for the Central District of Illinois—Cruz contends that she participated in a virtual meeting one month earlier, on November 18, during which a Fireflies.AI meeting assistant was used. Although no platform was specified in the lawsuit, the complaint notes that Fireflies.AI “automatically joins virtual meetings conducted on platforms such as Zoom, Microsoft Teams, and Google Meet.”
“When Fireflies is enabled by a meeting host, it records, analyzes, transcribes, and stores the voiceprints of all meeting participants, including individuals who never created Fireflies accounts, never agreed to Fireflies’ Terms of Service, and never executed any written consent authorizing biometric data collection,” Cruz alleges.
The tool has a “Speaker Recognition” function that, according to the company website, “identifies different speakers in meetings and audio files.” These voiceprints, the complaint argues, qualify as biometric data under BIPA—which requires, under Section 15(b), written notice to the subject or the subject’s legally authorized representative that a biometric identifier or biometric information is being collected or stored, a statement of the specific purpose and length of use, and the subject’s written consent.
A private entity in possession of biometric identifiers or biometric information must also, under Section 15(d), develop a written policy establishing a retention schedule and guidelines for permanently destroying the identifiers or information when the initial purpose for collecting them has been satisfied—or within three years of the individual’s last interaction with the private entity, whichever comes first. Cruz’s complaint alleges violations of both Sections 15(b) and 15(d).
Transparency and Consent Imperative
At the heart of the lawsuit against Fireflies.AI lies a fundamental principle: individuals have a right to know when their biometric data is being collected and to meaningfully consent to that collection. Transparency is not merely a best practice—it’s increasingly a legal mandate. BIPA and similar statutes—including Texas’s Capture or Use of Biometric Identifier Act, Washington’s Biometric Privacy Law, and California’s CPRA provisions—recognize that biometric identifiers like voiceprints carry unique privacy risks because they cannot be changed if compromised, unlike passwords or account numbers. Cruz’s complaint, as well as the Illinois law, points out that voiceprints are increasingly used to authenticate identities when individuals seek access to restricted personal or financial information, heightening the risks.
Transparency requires more than disclosures buried in Terms of Service. As noted below, transparency is not the legal equivalent of consent.
Effective notice must be:
- Timely: provided before or at the moment of collection;
- Clear: written in plain language that explains what biometric data is being captured and how;
- Specific: detailing the purposes for collection, who will have access, and how long data will be retained; and
- Accessible: reaching all participants, not just the account holder who deployed the tool.
Consent, in turn, must be informed and voluntary. When AI tools begin recording meetings, participants may feel pressured to just go along and say nothing rather than object and disrupt the session. This dynamic raises questions about whether consent is truly voluntary, particularly in workplace contexts where power imbalances exist. Organizations deploying these tools bear responsibility for ensuring that all participants—employees, clients, contractors, and guests—understand what they are consenting to and have a genuine opportunity to decline without penalty.
Other Jurisdictions and Contexts
The consent challenge becomes more complex in multi-jurisdictional contexts. California’s Invasion of Privacy Act (Cal. Penal Code §631), for example, requires the consent of all parties before recording. Connecticut’s law (Conn. Gen. Stat. §52-570d) requires the consent of all parties for telephonic conversations, and criminal wiretapping includes the intentional recording of a telephonic communication by a person other than the sender or receiver, without the consent of either the sender or receiver (Conn. Gen. Stat. §§53a-187, 53a-189). The European Union’s General Data Protection Regulation (GDPR) requirements, as well as various U.S. state biometric privacy statutes, create a patchwork of obligations. What satisfies consent requirements in one jurisdiction may fall short in another, making it essential to design disclosure and consent mechanisms that meet the highest applicable standard.
A number of state laws already deal specifically with biometric data. The Illinois BIPA, for example, defines a “biometric identifier” as a retina or iris scan, fingerprint, voiceprint, or hand or face geometry. Biometric identifiers do not include writing samples, written signatures, photographs, human biological samples used for valid scientific testing or screening, demographic data, tattoo descriptions, physical descriptions such as height, weight, hair color, or eye color, donated body parts, or a number of other biological materials or medical information as described in the act. Other states legislating in this area include:
California. The California Privacy Rights Act (CPRA) (Cal. Civ. Code §§1798.100 to 1798.199.100)—an amendment to the California Consumer Privacy Act (CCPA)—classifies biometric information, including voiceprints, as sensitive personal information requiring heightened protections and opt-out rights.
Colorado. Under the Colorado Privacy Act (CPA) (Colo. Rev. Stat. §§6-1-1301 to 6-1-1314), covered entities must obtain express consumer consent before processing the biometric data and clearly disclose the purpose(s) for which the data is collected and used. Collection must be limited to what is adequate, relevant, and reasonably limited for the stated purpose. Businesses are prohibited from using biometric data in ways that are incompatible with those disclosures unless they obtain new consent. Clear and conspicuous notice to consumers is required before or at the time of collection or processing.
Further, under the CPA’s Amendments, collectors of biometric data are required to develop a written policy setting forth the retention schedule for the data, a protocol for responding to data security incidents, and guidelines for data deletion.
Montana. Montana’s Consumer Data Privacy Act (Mont. Code Ann. §30-14-2701) includes biometric data as sensitive personal information subject to specific consent requirements.
New York. Although Governor Kathy Hochul vetoed a state health information privacy bill in December that would have regulated consumer health data not covered by the federal Health Insurance Portability and Accountability Act (HIPAA), she signed AI safety and transparency legislation that same month—the Responsible AI Safety and Education (RAISE) Act—requiring large AI developers to create and publish information about their safety protocols and to report incidents to the state within 72 hours of determining that an incident occurred. Hochul further announced, on Jan. 13, 2026, the future creation of a first-of-its-kind Office of Digital Innovation, Governance, Integrity & Trust to protect New Yorkers’ private data, protect against harmful AI, and more.
Texas. Texas’s Capture or Use of Biometric Identifier Act (CUBI) (Tex. Bus. & Com. Code §503.001) requires consent before capturing biometric identifiers for commercial purposes, though it lacks BIPA’s private right of action.
Washington. Washington’s Biometric Privacy Law (Wash. Rev. Code §19.375) prohibits enrollment in biometric databases without consent and mandates notice about biometric data collection purposes and storage duration.
State laws addressing AI also cut across different industries. For example, a clinic patient filed a class action complaint in California Superior Court, San Diego County, in November 2025, alleging that a medical group systematically used an ambient AI clinical documentation tool to secretly record confidential doctor-patient communications before transmitting the recordings to a third-party vendor for processing. Defendants also allegedly created false records stating that patients “were advised” and “consented” to the recordings.
Allegations by Jose A. Saucedo include state law violations of California’s Invasion of Privacy Act (Cal. Penal Code §§632 and 637.2) and the Confidentiality of Medical Information Act (Cal. Civ. Code §56 et seq.). Section 632(a) requires the consent of all parties before using a recording device to eavesdrop upon or record a confidential communication.
Why It Matters: Operational and Legal Risk Landscape
AI note-takers promise productivity gains but introduce substantial risks:
Expanded discovery obligations: Recordings and AI-generated summaries become discoverable materials, potentially revealing more than handwritten notes would capture and creating larger volumes of data to review in litigation.
Privileged information at risk: Sensitive discussions—attorney-client communications, trade secrets, personnel matters—may be inadvertently preserved and shared through automated transcription and summarization features, potentially waiving privileges or violating confidentiality obligations.
Accuracy and attribution concerns: AI can misidentify speakers, misinterpret context, or oversimplify nuanced discussions. These errors create misleading records that undermine the reliability of meeting documentation and potentially expose organizations to disputes about what was actually said.
Retention and legal hold conflicts: Organizations must balance routine data deletion policies against legal hold obligations. If AI tools automatically retain transcripts and recordings without clear retention schedules, organizations risk either preserving excessive data or inadvertently destroying evidence subject to litigation holds.
Third-party usage concerns: When AI tool providers use recordings for their own purposes—such as training algorithms or improving services—without proper notice and consent, they may violate wiretapping laws, state statutes such as the CCPA, and other privacy laws. Organizations must understand not only their own obligations but also how their vendors handle collected data (see Epstein Becker Green’s checklist of “5 Critical Issues in Health Care Vendor Agreements”).
Putting It Into Practice
Deploying AI transcription tools responsibly requires deliberate governance:
Vendor Due Diligence: Examine vendors’ data handling practices, security measures, and compliance with applicable biometric privacy laws. Understand whether and how they use meeting data for their own purposes.
Meeting-Appropriate Deployment: Establish clear guidelines about when AI note-takers should and should not be used. Sensitive meetings involving privileged communications, confidential business strategy, or personnel matters may warrant excluding these tools entirely.
Layered Notice and Active Consent: Move beyond generic privacy policies. Provide explicit notice at the meeting’s outset, clearly identifying that AI recording and speaker recognition are active, and secure affirmative consent from all participants. Consider whether oral consent is sufficient or whether your risk profile requires documented written consent.
Human Oversight: Where accuracy is critical or content is sensitive, implement human review of AI-generated summaries and transcripts before distribution. Consider restricting access to recordings and transcripts to individuals with legitimate need.
Retention Governance: Establish clear retention periods that align with legal obligations and business needs. Ensure retention practices comply with litigation hold requirements and that routine deletion processes have appropriate safeguards; one illustrative way such consent and retention records might be structured is sketched after this list.
Policy and Training: Address AI note-takers explicitly in your organizational AI policy. Train employees on when and how to use these tools appropriately, including how to provide effective notice and obtain consent, and when to refrain from deployment altogether.
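For teams that build or configure their own meeting tooling, the notice, consent, and retention principles above can be translated into concrete record-keeping. The following is a minimal sketch in Python under assumed conventions: the field names, the three-year outer retention limit (echoing the BIPA retention discussion above), and the legal-hold flag are hypothetical illustrations, not a vendor’s actual API or legal advice, and any real implementation should be designed with counsel.

```python
"""Hypothetical sketch of a consent-and-retention record for an AI meeting
assistant deployment. All names and thresholds are illustrative assumptions."""

from __future__ import annotations

from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone


@dataclass
class ParticipantConsent:
    """One participant's response to an explicit, pre-recording notice."""
    participant_id: str
    notice_shown_at: datetime   # notice should precede any collection
    consent_given: bool         # affirmative opt-in, never assumed
    consent_method: str         # e.g., "written", "in-app click"


@dataclass
class MeetingRecordingRecord:
    """Metadata an organization might keep alongside a recording or transcript."""
    meeting_id: str
    recorded_at: datetime
    purpose: str                                 # specific purpose of collection
    participants: list[ParticipantConsent] = field(default_factory=list)
    under_legal_hold: bool = False               # set by counsel when litigation is anticipated
    last_interaction_at: datetime | None = None  # individual's last interaction with the entity

    def all_participants_consented(self) -> bool:
        """Recording should not proceed unless every participant opted in."""
        return bool(self.participants) and all(p.consent_given for p in self.participants)

    def eligible_for_deletion(self, now: datetime, retention_days: int = 3 * 365) -> bool:
        """Apply the routine retention schedule, but never delete data under a legal hold."""
        if self.under_legal_hold:
            return False
        anchor = self.last_interaction_at or self.recorded_at
        return now >= anchor + timedelta(days=retention_days)


if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    record = MeetingRecordingRecord(
        meeting_id="2026-01-15-board-sync",
        recorded_at=now - timedelta(days=4 * 365),
        purpose="Meeting minutes and action-item tracking",
        participants=[
            ParticipantConsent("alice", now - timedelta(days=4 * 365), True, "in-app click"),
            ParticipantConsent("bob", now - timedelta(days=4 * 365), True, "written"),
        ],
    )
    print("All consented:", record.all_participants_consented())          # True
    print("Eligible for deletion:", record.eligible_for_deletion(now))    # True: past retention, no hold
    record.under_legal_hold = True
    print("Eligible after hold:", record.eligible_for_deletion(now))      # False: hold overrides schedule
```

The design points the sketch is meant to illustrate are simply that affirmative consent is confirmed for every participant before recording proceeds, and that a legal hold always overrides the routine deletion schedule.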
The Cruz litigation reminds us that convenience cannot override fundamental privacy rights. As AI capabilities expand, so too must organizational commitment to transparency, meaningful consent, and thoughtful governance. The attorneys at Epstein Becker Green are happy to answer questions you might have on biometric identifiers and privacy protection. We will be following the Cruz litigation and will keep clients and readers posted as it proceeds.
* * * *
Frances M. Green, Counsel at Epstein Becker Green, is a trial attorney and advisor to global clients on the intersection of AI, cybersecurity, and data privacy law. Naomi C. Friedman is an Associate at Epstein Becker Green. Ann W. Parks, staff attorney with the firm, contributed to the preparation of this article.
Opinions are the authors’ own and not necessarily those of their employers.
Reprinted with permission from the February 11th, 2026 edition of the New York Law Journal © 2026 ALM Global Properties, LLC. All rights reserved. Further duplication without permission is prohibited, contact 877-256-2472 or asset-and-logo-licensing@alm.com.