Artificial Intelligence

Trump Executive Order Targets State AI Laws

Published: Dec. 18, 2025

Highlights

  • The White House’s Executive Order launches a coordinated effort to challenge state AI laws through DOJ litigation, conditional federal funding, and agency preemption claims; state AI laws nonetheless remain enforceable absent court action or congressional preemption
  • The Commerce Department will publish a list by March 2026 identifying state laws deemed onerous; likely targets include bias testing, impact assessments, and transparency requirements for AI systems
  • Child safety protections are explicitly carved out from federal preemption efforts
  • The Executive Order characterizes state laws against algorithmic discrimination as forcing alterations to “truthful outputs” and as deceptive practices under Section 5 – a notable reversal of prior FTC guidance, which positioned algorithmic bias as a liability risk

On December 11, 2025, President Trump signed an executive order titled “Ensuring a National Policy Framework for Artificial Intelligence,” taking aim at the growing number of state AI regulations. The order frames state-level AI laws as cumbersome barriers to innovation.

The Executive Order (EO) is explicit about its goal of preventing “a patchwork of 50 different regulatory regimes” and instead promoting a “minimally burdensome” single national standard. It represents the administration’s most direct effort yet to check state regulatory authority over AI development and deployment.

What This Executive Order Does (and Doesn’t Do)

Critically, this EO cannot, by itself, invalidate any state AI law. Preemption generally requires an act of Congress or a valid federal regulation, the President lacks authority to override state laws by decree, and courts ultimately determine whether a state law is unconstitutional or preempted.

Instead, it launches a coordinated federal strategy – DOJ litigation, federal funding leverage, and agency actions – aimed at narrowing or displacing state AI requirements over the coming months. Although each measure will likely face legal challenges, in combination they could place significant pressure on states and create pathways for curtailing state AI laws.

For now, however, state obligations remain enforceable, and companies should plan for continued multi-state compliance while tracking fast-moving federal developments.

AI Litigation Task Force (Section 3)

Within 30 days, the EO directs the Attorney General to establish an “AI Litigation Task Force” dedicated solely to challenging state AI laws deemed inconsistent with the order’s policy of sustaining and enhancing US global AI dominance through a minimally burdensome compliance framework. The task force is directed to file lawsuits arguing that such state AI laws unconstitutionally regulate interstate commerce, are preempted by existing federal regulations, or are otherwise unlawful (potentially including First Amendment challenges). 

Evaluation of State AI Laws (Section 4)

Within 90 days, the EO directs the Secretary of Commerce to identify and publish a list of existing state AI laws that are “onerous” and conflict with the order’s policy, including laws that should be referred to the Task Force for potential legal challenge.

It specifically requires identification of laws that “require AI models to alter their truthful outputs, or that may compel AI developers or deployers to disclose or report information in a manner that would violate the First Amendment or any other provision of the Constitution.”

Federal Funding Restrictions (Section 5)

Within 90 days, the EO directs the Secretary of Commerce to issue a policy notice making states with onerous AI laws identified under Section 4 ineligible for non-deployment funds under the Broadband Equity, Access, and Deployment (BEAD) Program, to the extent allowed under federal law. This use of BEAD funding as leverage is likely to face legal challenges from states under Spending Clause principles.

The order also directs all federal agencies to review their discretionary grant programs and consider similar conditions tying funding to non-enforcement of state AI laws that conflict with the policy of this order. 

Agency Actions (Sections 6-7)

Within 90 days, the EO directs the FTC to issue a policy statement explaining when state laws requiring “alterations to the truthful outputs” of AI models are preempted by the FTC Act’s prohibition on deceptive acts or practices. The apparent theory is conflict preemption: if a state law requires conduct the federal government characterizes as “deceptive,” that state requirement could be argued to conflict with federal policy under Section 5. Whether the theory succeeds will depend on the FTC’s reasoning, its statutory authority, and how courts treat the asserted conflict.

The EO also directs the FCC to consider adopting, within 90 days of the Commerce Department’s publication of its state law evaluation, a federal AI reporting and disclosure standard that would preempt conflicting state laws.

The EO’s directives to the FTC and FCC do not substitute for these agencies’ statutory authority or administrative process requirements. Any attempt to preempt state requirements via agency action will turn on the scope of the agencies’ underlying statutory authority, depend on their willingness to pursue these stated policies, and likely be tested in court.

Legislative Recommendation (Section 8)

The order directs the Special Advisor for AI and Crypto and the Assistant to the President for Science and Technology to draft a legislative recommendation for Congress establishing a “uniform Federal policy framework for AI” that preempts conflicting state laws. Notably, the EO carves out several areas where state AI laws should not be preempted, including child safety protections, AI compute and data center infrastructure, and state government procurement and use of AI. Given Congress’s recent failures to pass a moratorium on state AI laws, whether it will enact such a framework remains to be seen.

Which State Laws Are Likely Targets

While the Commerce Department’s evaluation will provide the administration’s official list of laws that it considers to be onerous and in conflict with the EO’s policy, we can expect state AI laws that do one or more of the following to attract heightened scrutiny: 

  • Impose model transparency or incident reporting obligations, such as California’s AB 2013 (Artificial Intelligence Training Data Transparency) and SB 53 (Transparency in Frontier Artificial Intelligence Act);
  • Mandate bias audits or impact assessments, or otherwise target algorithmic discrimination, such as the Colorado AI Act and NYC Local Law 144; or
  • Create state-specific disclosure standards for AI outputs or capabilities, such as Utah’s SB 226 (Artificial Intelligence Consumer Protection Amendments) and California’s SB 243 (Companion Chatbots).

Conversely, state AI laws that focus on protecting children are unlikely to be targets under this EO.

What Are “Truthful Outputs”?

A recurring theme in the executive order is the concept of “truthful outputs.” The order claims that some state laws “require AI models to alter their truthful outputs.” For example, it specifically cites Colorado’s algorithmic discrimination law as potentially forcing “false results in order to avoid a ‘differential treatment or impact’ on protected groups.”

Given this framing’s importance to the EO, it’s worth unpacking this characterization and understanding what state algorithmic discrimination laws tend to actually require.

First, this characterization suggests that AI models have some inherently truthful output. That premise is at odds with longstanding concerns that AI model outputs are prone to hallucinations and other inaccuracies absent careful data curation, model development, and AI governance. It is also technically questionable: nearly all leading AI models undergo extensive post-training phases aimed at aligning their outputs with expectations and standards set by AI developers, and such post-training is critical to a model’s ability to answer questions, reason, and follow instructions. The notion of inherently truthful outputs derived from some universal training data corpus does not match the technical reality of how these models work.

Additionally, most state laws addressing algorithmic discrimination focus on process: testing AI systems for disparate outcomes across demographic groups, documenting potential discriminatory impacts, and implementing governance processes for high-risk systems. These requirements generally do not mandate that AI systems produce specific outputs or fabricate results. Like anti-discrimination laws originally created for non-AI contexts, they require organizations to understand how systems perform across different populations and to determine whether any disparities are justifiable or should otherwise be addressed.
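To make the process-oriented nature of these requirements concrete, below is a minimal Python sketch of the kind of disparate-impact measurement a bias audit typically involves. It is illustrative only: the group labels, data, and the 80% flagging threshold (borrowed from the familiar four-fifths rule in federal employment guidance) are assumptions, and specific state laws define their own tests. The point is that the audit measures outcomes and flags disparities for review; it does not alter any individual output.

    from collections import defaultdict

    def selection_rates(decisions):
        """decisions: iterable of (group, selected) pairs, e.g. ("A", True)."""
        totals, selected = defaultdict(int), defaultdict(int)
        for group, was_selected in decisions:
            totals[group] += 1
            selected[group] += int(was_selected)
        return {g: selected[g] / totals[g] for g in totals}

    def disparate_impact_report(decisions, threshold=0.8):
        """Compare each group's selection rate against the highest rate.

        Ratios below the threshold are flagged for human review and
        documentation; the system's outputs are measured, not changed.
        """
        rates = selection_rates(decisions)
        benchmark = max(rates.values())
        return {
            group: {
                "rate": round(rate, 3),
                "ratio": round(rate / benchmark, 3),
                "flagged": rate / benchmark < threshold,
            }
            for group, rate in rates.items()
        }

    # Hypothetical audit of automated screening decisions:
    # group A selected 60 of 100 times, group B 35 of 100 times.
    decisions = [("A", True)] * 60 + [("A", False)] * 40 \
              + [("B", True)] * 35 + [("B", False)] * 65
    print(disparate_impact_report(decisions))
    # Group B's 0.35 rate is ~58% of group A's 0.60 rate -> flagged for review.

In this hypothetical, the audit reports a disparity and flags it; whether and how to address it is a downstream governance decision, which is precisely the process-based framing these state laws adopt.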

Most notably, the EO’s positioning of Section 5 of the FTC Act as a basis for preempting state anti-discrimination requirements is a significant reframing of how federal consumer protection law has previously intersected with AI fairness. Until this year, FTC guidance explicitly did the opposite – it warned companies that algorithmic bias and discrimination could constitute unfair or deceptive practices under Section 5 and generate liability. 

Practical Guidance for Companies

For organizations developing, deploying, or using AI systems, this executive order creates new regulatory uncertainty. Here are some key takeaways:

  • Keep complying with state laws. Until a court specifically enjoins a state law or Congress preempts it, your compliance obligations remain in full force. Don’t treat the executive order as providing a safe harbor to stop complying with state requirements.
  • Track Task Force litigation. Once the Task Force begins filing lawsuits, watch which specific state laws are challenged, what legal theories are deployed, and whether courts grant preliminary relief. Early cases will likely indicate how receptive courts are to these challenges.
  • Watch for the forthcoming Commerce Department report. When published by mid-March 2026, this report will identify which state laws the administration considers “onerous” and inconsistent with the EO’s policy. Laws on this list will likely face federal legal challenges and potential funding pressure. Companies already taking specific compliance measures with respect to any flagged laws should prepare for increased uncertainty.
  • Document your bias mitigation practices clearly. Given the EO’s focus on “truthful outputs,” companies subject to state algorithmic discrimination laws should be prepared to explain, clearly and consistently, that bias testing and mitigation involve measurement, calibration, and governance rather than manufacturing falsehoods or embedding predetermined outcomes. Legal and technical teams should align on how to document these processes internally.
  • Watch for state countermoves and agency rulemaking. Some states, including California and New York, have already signaled they will vigorously defend their AI laws. Meanwhile, the FTC and FCC rulemaking processes will create opportunities for public comment and potential litigation over agency authority.
  • Consider the protected areas – particularly child safety. The executive order specifically excludes child safety protections, state AI procurement rules, and certain AI infrastructure from its proposed preemption framework. AI applications in these categories are less likely to be subject to federal pressure to curtail state AI laws. In particular, regulatory scrutiny of the interaction between AI and child safety will almost certainly persist at all levels of government and should remain a core pillar of AI governance.

The executive order sets the stage for a contentious debate in 2026 over AI governance. For now, companies must navigate the reality that state laws remain in effect while preparing for evolving federal efforts to narrow or displace them. The coming months will clarify which state requirements face legal challenge – and whether the administration’s multi-front strategy can achieve its stated goal of establishing a unified national framework.