Following the December 11 Executive Order on AI policy, "Ensuring a National Policy Framework for AI," Epstein Becker Green will provide further commentary for our clients on its impact.
While awaiting the Trump Administration's Executive Order purporting to limit state regulation of artificial intelligence (AI), we've had time to consider the issues at stake. Historically, states have served as innovators in the face of federal inaction, in areas as diverse as climate change and workplace violence in health care. But when the topic of innovation is AI, a tension exists: some at the federal level have viewed the U.S. states more as meddling interlopers than as laboratories or leaders.
Ever since the proposed ten-year moratorium on state regulation of AI was stripped from the final version of the One Big Beautiful Bill Act (OBBB), there have been rumblings that it would return. Shortly after the OBBB passed without the AI provisions, the White House released "America's AI Action Plan" (Action Plan) on July 23, 2025. Building on Executive Order 14179 of Jan. 23, 2025, "Removing Barriers to American Leadership in Artificial Intelligence," the Action Plan attempts to quash "burdensome" state regulation of AI following the failure of the moratorium in early July.
To that end, the plan calls for the removal of "red tape and onerous regulation," discouraging AI-related federal funding to states with burdensome AI regulations. Nevertheless, the Action Plan does instruct the federal government to avoid interfering with states' rights to pass "prudent laws that are not unduly restrictive to innovation." The plan thus calls for the identification, revision, or repeal of regulations, rules, guidance, and the like that "unnecessarily hinder AI development or deployment" in the quest to accelerate AI innovation (emphasis added). How strictly terms like "burdensome" and "unnecessarily" will be interpreted remains to be seen.
A draft executive order that circulated in November, aiming to "[Eliminate] State Law Obstruction of National AI Policy," was almost immediately put on hold. By early December, the preferred vehicle for banning state AI laws had become the National Defense Authorization Act (NDAA); when the provision ultimately was not included, the proposed Executive Order was back in the news.
And it is not just the White House. As reported by Privacy Daily, one Republican representative who co-authored the moratorium urged, in late October, clear state and federal "lanes in AI regulation" to prevent "50 different states going in 50 different directions." Some industry groups, too, were still calling for federal preemption of state AI laws.
On Aug. 13, 2025, the Department of Justice (DOJ) issued a sweeping Request for Information (RFI) (Docket No. OLP182) seeking “information pertaining to state laws, regulations, causes of action, policies, and practices (collectively, state laws) that adversely affect interstate commerce and business activities in other states.” This broad RFI raised concerns among state legislators from both parties on a number of fronts, including that the Trump administration was attempting to use the Commerce Clause to restrict states’ ability to regulate AI in violation of the Tenth Amendment.
In response, on Sept. 15, 2025, the National Conference of State Legislatures (NCSL) asked the DOJ to withdraw the request, citing Tenth Amendment concerns, among other things. What exactly are those concerns, and does the NCSL have reason to be worried? This article will explore the Tenth Amendment implications of a move toward a national AI governance policy and consider the consequences of federal limitations on states’ ability to regulate AI. The takeaway: the administration’s efforts to restrict state regulation of AI are just beginning.
The Tenth Amendment
The Tenth Amendment of the U.S. Constitution reserves to the states (or to the people) the "powers not delegated to the United States by the Constitution, nor prohibited by it to the States." As such, states are empowered to promulgate regulations on a wide variety of issues, not only ensuring that regulations are responsive to the needs of the citizens of each state but also creating an important space for states to experiment with different regulatory regimes and learn from the experiences of other states.
The Supreme Court has routinely upheld states’ authority to promulgate their own regulations under myriad circumstances, establishing the contours of the Tenth Amendment in a trio of decisions. In a case involving the disposal of radioactive waste, the court held in 1992 that Congress may not affirmatively commandeer state regulatory processes by ordering states to enact or enforce a federal regulatory program. New York v. United States, 505 U.S. 144 (1992).
Twenty years later, in a decision addressing the Affordable Care Act's Medicaid expansion, the court held that the distinction between permissible conditions and impermissible commandeering collapses when a state has no real choice but to accept the conditions imposed by the federal government. National Federation of Independent Business v. Sebelius, 567 U.S. 519 (2012).
Then, in a case involving sports betting, the court in 2018 expanded this protection, holding that the anti-commandeering doctrine also prevents Congress from enacting legislation that prohibits certain state action. Murphy v. NCAA, 584 U.S. 453 (2018). As Justice Samuel Alito noted in Murphy, both affirmative and negative commands equally intrude on state sovereign interests. Taken together, these cases reflect a deepening commitment by the court to the principles of federalism enshrined in the Tenth Amendment.
Viewed in the light of these decisions, an attempt to impose a blanket moratorium on state regulation of AI could well be at odds with the Tenth Amendment. Specifically, DOJ's RFI invokes the Commerce Clause as an avenue for constraining state regulation of AI, yet under NFIB v. Sebelius, conditioning federal funds on the adoption of the federal government's preferred approach to AI regulation would violate the Tenth Amendment where states lack a real choice in whether to accept the condition.
But what harms does the Tenth Amendment guard against? Or put differently, what are the benefits of a robust federalism in which states are given wide latitude to experiment with legislative and regulatory responses to complex and common problems? And what would be lost if the moratorium were to be imposed and a uniform AI regulatory regime adopted nationwide?
The Benefits of Federalism
The clearest benefit of robust federalism is that states can serve as laboratories for policy innovation. Different states, with their different economic, social, and political conditions, respond differently to similar challenges, and that diversity of responses encourages creative and innovative problem-solving. As discussed more fully below, different states are already focusing on different areas affected by AI, from health care to elections to consumer safety. And where different states identify a common problem, they craft different legislative and regulatory solutions, which are then tested in the real world, serving as models (or cautionary tales) for neighboring states. A lengthy moratorium on states' ability to regulate AI risks stifling the innovative spirit that animates state lawmaking, leaving the states moribund and unable to address AI effectively once the moratorium is lifted.
A second clear benefit of robust federalism in our engagement with AI is that state legislatures and regulatory bodies are nimbler than their federal counterparts, and thus better able to respond quickly to new challenges posed by innovative technologies. For example, in the 2025 legislative session, state legislatures introduced an avalanche of bills (more than 135,500 overall, on all subjects including AI), of which 28 percent were enacted; Congress introduced roughly 10,000 bills over the same period, of which only 2 percent were enacted. In raw numbers, that is roughly 38,000 state enactments against about 200 federal ones. And the recent experience of the longest federal government shutdown in U.S. history highlights the need for nimble and responsive legislative bodies.
Counterargument
The counterargument against privileging states' rights over national interests in AI harkens back to the desegregation era and the federal government's power to intervene in, and indeed preempt, the exercise of states' rights. This is most readily seen in Brown v. Board of Education, 347 U.S. 483 (1954), and its progeny.
The federal government's authority to override state segregation laws rested primarily on the Supremacy Clause (Article VI, Clause 2) combined with the Fourteenth Amendment's Equal Protection Clause. In Brown, the Supreme Court unanimously held that state-mandated school segregation violated the Equal Protection Clause, directly rejecting states' claims that education fell within their traditional police powers under the Tenth Amendment. The court determined that when state laws conflict with federal constitutional guarantees, the Constitution's supremacy prevails, regardless of traditional state authority over education. This was reinforced in subsequent cases like Cooper v. Aaron, 358 U.S. 1 (1958), where the court emphasized that its constitutional interpretations are binding on the states and that state officials cannot nullify them through claims of state sovereignty.
While states argued that the Tenth Amendment reserved education and public welfare matters to state control, the Supreme Court consistently held that no state power—whether characterized as traditional, police, or reserved—could justify violation of explicit constitutional protections. The court rejected the notion that federalism principles created a shield for states to maintain discriminatory practices, establishing that civil rights protections represent a floor below which no state may fall, regardless of local preferences or traditional state authority over particular policy domains.
With the arguments surrounding federalism in mind, we turn now to what states are doing.
State Regulatory Regimes
States have demonstrated a keen interest in regulating AI, with all 50 states, Puerto Rico, the Virgin Islands, and Washington, D.C., collectively introducing hundreds of bills in 2025. In September, NCSL's tracker reported that 38 states had enacted 143 AI-related laws. In November, the Cato Institute reported that states had considered more than 1,000 bills collectively. And the issue is bipartisan: while Democrats have introduced more AI-related legislation than their Republican counterparts, Republicans introduced more than 80 AI-related bills in 2025, according to the Brookings Institution. But what is the value of all this legislative effort at the state level? There are two principal benefits to permitting states to experiment with regulating AI.
First, AI is a fast-developing technology, and states can respond to these developments more nimbly than the federal government. Freeing states to be laboratories for policy solutions to complex problems allows different states to experiment with different approaches and learn from the experience of their neighbors. For example, as the Brookings Institution report explains, while some states (such as Colorado) have explored a more comprehensive approach, others (such as California) are taking a piecemeal approach. And still other states (such as Kentucky and West Virginia) have chosen to focus on legislation that would create policy standards and best practices for the use of AI. See Kentucky SB 4; West Virginia HB 3187.
Second, while it is challenging for any single entity to identify or anticipate the challenges posed by emerging technologies, with 50 different states, each with its own concerns and priorities, addressing a common underlying issue, there is significantly more opportunity for creativity and innovation. This is borne out in the wide variety of AI-related issues on which different states are focused. Utah, for example, has focused on regulating the use of consumer-facing generative AI, see Utah SB 332, SB 226, while Illinois has led the way with legislation barring licensed health care professionals from using AI to make therapeutic decisions or generate treatment plans, see Illinois HB 1806.
Case Study: State Regulation of AI Involving Minors and Chatbots
As we wrote in September, preventing online harm to children and other vulnerable individuals may be one area where legislators on both sides of the aisle are willing to let others regulate. On June 22, 2025, Texas, which has voted Republican in the last ten presidential elections, enacted its Responsible AI Governance Act (TRAIGA), prohibiting the development or deployment of AI systems in a manner that intentionally aims to incite or encourage a person to (1) commit physical self-harm, including suicide; (2) harm another person; or (3) engage in criminal activity. While not aimed explicitly at minors or AI chatbots, the law's suicide provisions have taken on new significance as legal and regulatory activity in these areas increases. With seven chatbot lawsuits filed in California on Nov. 6 alone, as well as several others alleging teen suicides following AI chatbot interactions, any future attempt at a federal moratorium on state AI laws could face an uphill battle where states including Texas have taken the lead.
Following a congressional hearing on Sept. 16, Sen. Josh Hawley (R-MO) introduced a federal bill on AI chatbots, S. 3062, in October. Among other things, the GUARD Act would make it "unlawful to design, develop, or make available" an AI chatbot, "knowing or with reckless disregard for the fact that the [AI] chatbot encourages, promotes, or coerces suicide, non-suicidal self-injury, or imminent physical or sexual violence." In a possible nod to TRAIGA or the Action Plan, S. 3062 emphasizes, "Nothing in this Act or an amendment made by this Act, or any regulation promulgated thereunder, shall be construed to prohibit or otherwise affect the enforcement of any State law or regulation that is at least as protective of users of [AI] chatbots as this Act and the amendments made by this Act, and the regulations promulgated thereunder."
Another, bipartisan, bill co-authored by Hawley and Sen. Richard J. Durbin (D-IL) attempts to balance developer liability for harm to a business or consumer against AI innovation. Noting that "multiple teenagers have tragically died after being exploited by an [AI chatbot]," the bill would "establish Federal legislative guidelines for products liability without implicating expressive speech to ensure more predictable legal outcomes for individuals and industries and promotes business innovation." The bill declares, "This Act supersedes State law only where State law conflicts with the Provisions of this Act," and "Nothing in this Act shall prevent a State from enacting or enforcing protections that align with the principles of harm prevention, accountability, and transparency for a covered product that are stronger than such protections under the Act."
Conclusion
As we move into 2026, the constitutional tension between federal oversight and state autonomy in AI regulation remains unresolved, leaving businesses, government entities, and individuals navigating an increasingly fragmented regulatory landscape. Are states better positioned to address the profound challenges posed by AI, working nimbly to fashion effective and creative solutions to as-yet unimagined challenges, or must the federal government step in to protect citizens' fundamental constitutional rights, including privacy, the right to assemble via social and electronic media, mental and physical health, and public safety and fairness? Does the global environmental impact of chip manufacturing, including the thirsty cooling towers that adversely affect water resource availability, raise concerns too big for any one state? And might disparate state laws interfere with national interests, particularly given the disruptions and dangers almost certain to come with AI?
With no comprehensive federal AI legislation on the immediate horizon and more than 160 new state AI laws taking effect across diverse domains—from employment algorithms to biometric surveillance to automated decision-making in health care and housing—the patchwork approach continues to intensify. The ongoing transition in federal agency leadership, and evolving interpretations of existing regulatory authority, create a regulatory environment where compliance strategies must account for both vertical conflicts between federal and state mandates and horizontal inconsistencies across state borders.
Until Congress enacts comprehensive federal AI legislation—or the courts definitively resolve the preemption questions raised by conflicting state and federal approaches—stakeholders must prepare for continued legal ambiguity at precisely the moment when AI systems are becoming embedded in critical infrastructure, essential services, and constitutional rights determinations. The federalism framework that successfully balanced national uniformity with state innovation in prior technological revolutions now faces its most complex test—one where the speed of AI development may outpace our constitutional system’s traditional mechanisms for resolving jurisdictional disputes.
If only AI itself could resolve these tensions...
Frances M. Green is counsel at Epstein Becker Green and a working group member of the AI Safety Institute Consortium of the National Institute of Standards and Technology (NIST); she holds an advanced legal degree in cybersecurity and data privacy and is certified as an artificial intelligence governance professional by the International Association of Privacy Professionals (IAPP). Bronwyn C. Roantree is an associate in the firm's New York office. Ann W. Parks, an attorney with the firm, contributed to the preparation of this article.
Reprinted with permission from the December 12, 2025, edition of the New York Law Journal © 2025 ALM Global Properties, LLC. All rights reserved. Further duplication without permission is prohibited, contact 877-256-2472 or asset-and-logo-licensing@alm.com.