The discussion of Artificial Intelligence ("AI") in the workplace typically focuses on whether an AI tool or model has a discriminatory impact. This means examining whether the AI's output creates an unlawful disparate impact against individuals belonging to a protected category.
That discussion, however, rarely centers on the types of training data used, or on whether the training data itself could have a harmful effect on the workers tasked with training the AI model.
It has been four years since Congress enacted the Eliminating Kickbacks in Recovery Act ("EKRA"), codified at 18 U.S.C. § 220. EKRA initially targeted patient brokering and kickback schemes in the addiction treatment and recovery space. But because EKRA was expansively drafted to also apply to clinical laboratories (it reaches improper referrals for any "service," regardless of the payor), public and private insurance plans, and even self-pay patients, fall within the statute's reach.
Creative and aggressive plaintiffs' lawyers are forever on the hunt for new theories under which to bring potentially lucrative class action lawsuits under plaintiff-friendly state consumer protection statutes (with California being the most favored forum). The dietary supplement industry has been in the plaintiffs' bar's crosshairs for more than a decade now. As the case law has evolved and developed, supplement companies have had notable success fighting these suits. Just last week, Judge Miller in the Southern District of California tossed a proposed class action ...