The federal AI preemption fight is the most consequential technology policy battle in the United States right now, and most Americans have never heard of it. The Trump administration wants to strip states of the power to regulate artificial intelligence, replacing dozens of state laws with a single federal framework that critics say amounts to no regulation at all. States, backed by a bipartisan coalition of 36 attorneys general, are pushing back hard.
Here is what is happening, why it matters, and what comes next.
What Federal AI Preemption Means for You
Right now, states are the only governments actively protecting Americans from AI harms. Colorado has a law requiring companies to prevent algorithmic discrimination in hiring, housing, and healthcare decisions. California has enacted regulations governing how employers use AI in hiring and employment decisions. Illinois requires employers to tell workers when AI is being used to make decisions about their jobs. These laws took years of work to pass.
The White House wants to override all of them.
On December 11, 2025, President Trump signed an executive order titled “Ensuring a National Policy Framework for Artificial Intelligence.” The order argues that state laws create a “patchwork of 50 different regulatory regimes” that threatens American AI dominance. It directs federal agencies to identify state AI laws deemed “onerous,” challenge them in court, and withhold federal funding from states that refuse to comply.
On March 20, 2026, the White House followed up with a four-page legislative framework asking Congress to broadly preempt state AI laws while opposing “open-ended liability” for AI companies. The framework proposes no new regulatory body, no new enforcement powers, and no federal protections to replace the state ones it wants to eliminate.
The Scorecard So Far
The administration has tried three times to preempt state AI laws through Congress. It has failed every time.
First, House Republicans inserted a ten-year moratorium on state AI regulation into the budget reconciliation bill, the “Big Beautiful Bill.” The moratorium would have withheld $500 million in broadband funding from states that enforced their own AI laws. The backlash was immediate and bipartisan. A coalition of 36 state attorneys general, from deep-red states like Idaho and Mississippi to blue states like California and New York, sent a joint letter urging Congress to reject the moratorium. Senator Marsha Blackburn, a Tennessee Republican who had initially worked on a compromise with Ted Cruz, ultimately turned against the provision, warning it “could allow Big Tech to continue to exploit kids, creators, and conservatives.” The Senate voted 99 to 1 to strip the moratorium.
Second, supporters tried to insert preemption language into the National Defense Authorization Act in late 2025. That effort also failed after divisions among Republicans and bipartisan opposition made clear it lacked the votes.
Third, the March 2026 framework asks Congress to pass preemption legislation. House Republican leaders immediately endorsed the proposal. But any such bill would need 60 votes in the Senate, which means Democratic support, and no Democrat has expressed support for broad preemption without a federal regulatory framework in place.
The Pressure Campaign
Unable to get Congress on board, the administration is using executive power to pressure states directly.
The executive order created a DOJ AI Litigation Task Force, led by Attorney General Pam Bondi, whose job is to sue states over their AI laws. The task force can challenge state laws on the grounds that they unconstitutionally burden interstate commerce or conflict with federal regulations. It consults with White House AI czar David Sacks on which laws to target.
The order also directed the Commerce Department to identify “onerous” state AI laws by March 11, 2026, and to withhold broadband infrastructure funding from states that have them. The administration has signaled it could leverage potentially billions of dollars in Broadband Equity, Access, and Deployment (BEAD) program funds against states.
Legal experts are skeptical this funding threat will survive court challenges. As the advocacy group Public Knowledge pointed out, Congress appropriated BEAD funds for broadband deployment, and “AI isn’t mentioned in the statute once.”
Why States Are Not Backing Down
The states opposing federal AI preemption are not just blue states worried about deregulation. The 36-attorney-general coalition includes Republicans from Idaho, Indiana, Kansas, Louisiana, Mississippi, Ohio, South Carolina, Tennessee, and Utah. Their argument is simple: someone needs to be protecting Americans from AI harms right now, and Congress is not doing it.
As the attorneys general wrote, states have already passed laws to protect against AI-generated deepfakes, prohibit deceptive practices targeting voters and consumers, safeguard renters from algorithmic rent-setting, and require disclosures when people are interacting with AI. Wiping those protections out without federal replacements would leave Americans with nothing.
Senator Ed Markey, who led opposition to the reconciliation moratorium, put it bluntly: “This 99-1 vote sent a clear message that Congress will not sell out our kids and local communities in order to pad the pockets of Big Tech billionaires.”
What Happens Next
The fight is far from over. Senator Ted Cruz has promised the moratorium “will return.” Senator Blackburn released her own comprehensive AI bill, the TRUMP AMERICA AI Act, on March 18, which includes some preemption provisions but also imposes new obligations on AI companies, including a duty of care and child safety requirements.
Meanwhile, state AI laws continue to take effect. Illinois’s AI disclosure law for employers went into effect January 1, 2026. California’s AI transparency and employment discrimination regulations are now active. Colorado’s comprehensive AI Act, delayed once already under industry pressure, is set to take effect June 30, 2026.
The central question is whether Congress will pass federal AI legislation that actually protects Americans, or whether “preemption” will become a synonym for deregulation. As Brad Carson, president of Americans for Responsible Innovation, warned, the White House framework offers “another chance for tech companies to launch harmful products with no accountability.”
The federal AI preemption fight has become the defining regulatory battle of the 119th Congress. The Trump administration is pursuing a multi-vector strategy to displace state AI governance: executive orders directing agency action, a DOJ litigation task force, conditional federal funding, and legislative recommendations to Congress. After two failed legislative attempts and one executive order, the question is no longer whether the administration wants federal AI preemption, but whether it can achieve it without offering the substantive federal protections that both parties have demanded as a precondition.
The Architecture of Federal AI Preemption
The preemption campaign rests on an executive order titled “Ensuring a National Policy Framework for Artificial Intelligence,” signed December 11, 2025. The order articulates the policy that “United States AI companies must be free to innovate without cumbersome regulation” and identifies three categories of objectionable state laws: those creating compliance burdens through regulatory fragmentation, those “requiring entities to embed ideological bias within models” (citing Colorado’s algorithmic discrimination law by name), and those impermissibly regulating beyond state borders.
The order activates four institutional mechanisms simultaneously:
- DOJ AI Litigation Task Force (Section 3): Announced January 9, 2026, by Attorney General Pam Bondi. Challenges state AI laws under the Dormant Commerce Clause, federal preemption, and any other grounds the AG deems appropriate. Led by representatives from the Civil Division and Solicitor General’s office, with consulting authority for David Sacks.
- Commerce Department evaluation (Section 4): The Secretary of Commerce was directed to publish an evaluation of state AI laws by March 11, 2026, identifying “onerous” laws that conflict with federal policy and referring them to the Task Force.
- BEAD funding conditionality (Section 5): States with laws identified as “onerous” face ineligibility for non-deployment funds under the $42 billion BEAD program. Agencies are additionally directed to assess whether discretionary grants can be conditioned on states not enacting conflicting AI laws.
- Agency preemption actions (Sections 6-7): The FCC is directed to consider adopting a federal AI reporting standard that preempts state laws. The FTC is directed to issue a policy statement explaining when state laws requiring “alterations to the truthful outputs of AI models” are preempted by federal unfair and deceptive practices law.
On March 20, 2026, the White House released its legislative framework, a four-page document covering seven policy areas: child safety, community effects, copyright, government censorship, federal regulation, jobs, and state preemption. The framework calls for broad preemption, no new regulatory body, “industry-led” standards, regulatory sandboxes allowing companies to apply for exemptions from federal rules, and protections against “open-ended liability.” It explicitly proposes that states “should not regulate development or penalize AI developers for third-party use of their products.”
The Legislative Record: Three Failures
The administration’s legislative track record on preemption is 0 for 3.
Reconciliation (July 2025): The Senate Commerce Committee, chaired by Ted Cruz, inserted a ten-year moratorium on state AI regulation into the “Big Beautiful Bill.” The provision conditioned $500 million in new BEAD funding on states not enforcing AI laws. After the Senate parliamentarian ruled it satisfied the Byrd rule, Cruz and Blackburn negotiated a compromise reducing the moratorium to five years with carve-outs for “generally applicable” laws. But the compromise’s “undue or disproportionate burden” language generated uncertainty about which laws were actually protected. Blackburn withdrew support, calling the provision a vehicle for “Big Tech’s exploitation.” The Senate voted 99-1 to strip the moratorium.
NDAA (December 2025): Supporters attempted to insert preemption language into the FY2026 National Defense Authorization Act. The provision was dropped after divisions among Republicans, with House Majority Leader Steve Scalise acknowledging the defense bill “wasn’t the best place for this to fit.” The failure prompted Trump to sign the executive order days later.
Legislative framework (March 2026): The White House framework asks Congress to pass preemption legislation, but any standalone bill requires 60 Senate votes. No Democrat has endorsed broad preemption without paired federal protections, and key Republicans including Blackburn have conditioned support on child safety, creator protections, and conservative speech provisions.
The Opposition Coalition
The resistance to federal AI preemption is unusually broad and bipartisan.
Thirty-six state attorneys general, spanning from deep-red Idaho and Mississippi to blue California and New York, sent a joint letter to Congress on November 25, 2025, urging rejection of any federal moratorium. The letter emphasized that states have already passed laws addressing AI-generated deepfakes, algorithmic discrimination, consumer deception, and voter targeting.
The Center for American Progress identified structural problems with the moratorium approach: definitions of “AI system” written so broadly they could cover virtually any computational tool, enforcement mechanisms enabling private entities to bring actions against states, and no criminal law exemption in the Senate version, which would have blocked prosecution of AI-related crimes including deepfake child sexual abuse material.
The Center for Democracy and Technology’s CEO Alexandra Reeve Givens summarized the opposition position after the 99-1 vote: “If Congress isn’t prepared to step up to the plate, it shouldn’t prevent states from addressing the challenge.”
Federal AI Preemption and the Blackburn Paradox
Senator Blackburn’s TRUMP AMERICA AI Act, introduced as a discussion draft on March 18, 2026, reveals a central tension in the preemption debate. The bill’s formal title, “The Republic Unifying Meritocratic Performance Advancing Machine Intelligence by Eliminating Regulatory Interstate Chaos Across American Industry Act,” signals alignment with the executive order’s deregulatory framing. But as Jones Walker’s analysis noted, the bill’s “most striking feature is the gap between its stated deregulatory purpose and its actual regulatory density.”
The bill imposes mandatory duty-of-care obligations with FTC rulemaking authority, multiple overlapping liability theories enabling federal, state, and private enforcement, required participation in DOE evaluation programs for frontier AI, ongoing bias audits for high-risk systems, detailed transparency reporting, and platform design obligations. It also narrows Section 230 immunity. This is not a light-touch framework. Organizations that have invested in state law compliance, particularly in Colorado, California, and New York, “would face the prospect of replacing one compliance regime with another that may be equally or more demanding.”
The political challenge is that preemption without regulation cannot pass the Senate, but preemption with regulation may not satisfy the industry interests or the White House vision of “minimally burdensome” governance.
Legal Vulnerabilities
The executive order’s enforcement mechanisms face significant legal headwinds. The BEAD funding conditionality is the most vulnerable: Congress appropriated BEAD funds for broadband deployment, and AI is not mentioned in the statute. Courts would likely side with states contesting the withholding of broadband funds on AI governance grounds, as the National Digital Inclusion Alliance has already filed suit over the administration’s unilateral redirection of related broadband funds.
The FCC preemption route is equally constrained. The FCC has no substantive statutory authority over AI, and the current commission has not even asserted its existing Title II authority over internet service. Any attempt to stretch FCC jurisdiction to preempt state AI laws would face immediate legal challenge.
The DOJ Task Force’s Dormant Commerce Clause arguments present a stronger but still uncertain path. Courts have historically been reluctant to strike down state laws addressing local harms under the Commerce Clause, and states can argue their AI laws address discrimination, consumer protection, and public safety within their borders.
The State of Play
While the federal government debates preemption, state AI laws continue to multiply and take effect. Illinois’s AI disclosure law for employers took effect January 1, 2026. California’s AI transparency laws and FEHA automated decision-making regulations are now active. Colorado’s comprehensive AI Act, delayed to June 30, 2026, after industry opposition and federal pressure, remains on track.
The emerging pattern resembles early internet regulation, data privacy, and environmental law: states act first, Congress debates, and the question of federal preemption becomes a proxy war over regulatory philosophy. The key difference is speed. As the Center for American Progress observed, ten years ago there were no transformer models and no large language models. AI industry leaders are now projecting that artificial general intelligence could arrive within the very decade a moratorium would have shut states out of legislating.
The path forward likely requires what Public Knowledge has called “1:1 preemption”: for every state law preempted, a substantive federal protection must replace it. Whether the 119th Congress can negotiate that deal, with an industry pushing for deregulation, an administration wielding executive power, and states refusing to cede authority, will determine whether Americans get governed AI or ungoverned AI.