The rise of workplace wearable technology has opened new possibilities for employee efficiency, safety, and health monitoring. Collecting health-related workplace data, however, may subject employers to liability under nondiscrimination laws.
Yesterday, the Equal Employment Opportunity Commission (“EEOC”) published a fact sheet addressing potential concerns and pitfalls employers may run into when gathering health-related information and making employment-related decisions based on it.
Understanding Workplace Wearables
Wearable technologies, or “wearables,” are digital devices worn on the body that can track movement, collect biometric data, and monitor location. Employers have implemented these tools for a multitude of reasons, including tracking and predicting how long certain tasks take employees to complete in order to promote efficiency. Wearables may also be programmed to recognize signs of fatigue, like head or body slumps, and to detect improper form when lifting, which can be critical for workplace health and safety.
[8/28/2025 UPDATE: Following a special session called by Governor Jared Polis, the Colorado legislature passed SB 25B-004, which the governor signed on August 28, 2025. SB 25B-004 delays the effective date of SB 24-205, the state’s historic artificial intelligence law, from February 1, 2026, to June 30, 2026.]
On May 17, 2024, Colorado Governor Jared Polis signed into law SB 24-205—concerning consumer protections in interactions with artificial intelligence systems—after the Senate passed the bill on May 3, and the House of Representatives passed the bill on May 8.
In a letter to the Colorado General Assembly, Governor Polis noted that he signed the bill into law with reservations, hoping to further the conversation on artificial intelligence (AI) and urging lawmakers to “significantly improve” on the law before it takes effect.
SB 24-205 will become effective on February 1, 2026, making Colorado the first state in the nation to enact broad restrictions on private companies’ use of AI. The measure aims to prevent algorithmic discrimination affecting “consequential decisions”—including employment-related decisions.
On December 11, 2023, the City of San Francisco released the San Francisco Generative AI Guidelines (“Guidelines”). The Guidelines set forth parameters for City employees, contractors, consultants, volunteers, and vendors who use generative artificial intelligence (AI) tools to perform work on behalf of the City.
Specifically, the Guidelines encourage City employees, contractors, consultants, volunteers, and vendors to use generative AI tools for purposes such as preparing initial drafts of documents, “translating” text into levels of formality or for a ...
We’d like to recommend an upcoming complimentary webinar, “Addressing and Responding to Workplace Violence and Active Shooter Scenarios to Protect Your Employees” (Oct. 2, 2:00 p.m. EDT), by our Epstein Becker Green colleagues Kara M. Maciel, Susan Gross Sholinsky, and Christopher M. Locke, with Daniel Hess and Lynne Cripe of The KonTerra Group, an employee assistance program provider that regularly counsels employees undergoing stressful life events that can lead to violence.
Below is their description of the event:
Violence in the workplace can range from bullying and ...