
White House Publishes AI Legislative Framework to Preempt State AI Regulation

On March 20, 2026, the White House released high-level recommendations for a “National AI Legislative Framework” for artificial intelligence (AI), following President Trump’s December 11, 2025 executive order directing preparation of legislative recommendations to establish a uniform federal AI policy and preempt conflicting state laws (see our advisory for details on that EO).

The framework’s central thrust is two-fold: preempt “unduly burdensome” state AI regulation, and channel federal oversight through existing sector-specific federal agencies (e.g., the FTC, FCC, and SEC) and through industry-led standards. It explicitly rejects creating a new federal AI regulatory body.

The proposal should be viewed in light of growing political complexity around AI. It recommends carve-outs for a range of state enforcement powers and calls on Congress to consider enhanced federal protections in specific areas: child safety, content creator rights, electricity ratepayer protections, and anti-fraud enforcement. It also includes a broad prohibition on federal government “coercion” on technology providers regarding content, a provision that extends well beyond AI.

Seven Legislative Priorities

The White House framework addresses seven key legislative objectives:

1. Preemption of state AI laws: the framework recommends preempting state AI laws that impose “undue burdens” on AI, framed as protecting U.S. competitiveness while “respecting federalism and State rights.” The proposed preemption would prohibit states from (1) regulating AI development, (2) “unduly burden[ing]” the use of AI for otherwise-lawful activities, and (3) imposing liability on AI developers for third-party misuse of their models.

Proposed carveouts would preserve states’ traditional police powers in a number of areas, including laws protecting children, preventing fraud, and protecting consumers, applied to both AI developers and users.  States would also retain regulatory authority over their own AI procurement and use, and over zoning and siting of AI infrastructure. The scope of preemption will be heavily contested and is likely to generate significant litigation.

2. Child safety: the framework recommends mandating “robust” parental controls, covering privacy settings, screen time, content exposures, and account management, and requiring “commercially reasonable, privacy protective age-assurance requirements (such as parental attestation)” for AI platforms and services. State child protection laws, such as those applicable to AI-generated child sexual abuse material (CSAM), would not be preempted.

3. Community and energy impacts: the framework proposes that Congress ensure AI infrastructure development does not increase electric utility rates for ratepayers, potentially by codifying or enabling President Trump’s voluntary “Ratepayer Protection Pledge” for AI companies to “build, bring, or buy” new power generation resources for their data centers.  It recommends streamlining permitting for on-site and behind-the-meter power generation at data centers, in addition to augmenting law enforcement tools to address AI-enabled scams and fraud.

4. Intellectual property: the framework states that the “Administration believes that training of AI models on copyrighted material does not violate copyright,” but acknowledges the unsettled legal landscape and defers to the courts to resolve the issue. In parallel, it proposes that Congress consider enabling licensing frameworks or collective rights systems that would allow rights holders to negotiate compensation from AI developers, although it does not take a position on whether these frameworks should be compulsory. It also recommends strengthening individual protections against unauthorized AI-generated replicas of voice, likeness, or other identifiable attributes.

5. Preventing censorship: the framework proposes a federal prohibition on government “coercion” of technology providers (including but broader than AI providers) to ban, compel, or alter content based on “partisan or ideological agendas.” It also recommends a “redress” mechanism for individuals to challenge agency efforts to censor or “dictate” information provided by an AI platform.

6. Regulation approach and innovation: consistent with its rejection of a new AI-specific regulatory body, the framework supports sector-specific agency oversight through existing agencies and industry-developed standards.  It calls for regulatory sandboxes to facilitate novel AI applications and for making federal datasets available in AI-ready formats for model training.

7. Education and workforce: the framework stresses that “American workers must benefit from AI-driven growth,” and proposes systematic study of AI’s task-level workforce impacts and investment in AI skills training through educational and workforce development programs.

Key Considerations

These recommendations represent an opening position in what will be a protracted federal AI policy debate. Several dynamics warrant close attention:

  • Legislative fragmentation is likely. The framework’s seven priorities are unlikely to advance as a cohesive package. Individual provisions may be folded into sector-specific bills or broader legislative vehicles, requiring companies to track multiple legislative tracks simultaneously.
  • State law remains operative. Until and unless federal preemption is enacted, existing state AI laws, including those in or emerging in California, Colorado, and other active jurisdictions, remain fully in force. States are likely to mount strong opposition to federal preemption efforts.
  • Regulatory tension is building. Ongoing federal agency activity (e.g., FTC, FCC) may produce policy orientations that conflict with certain state law requirements, particularly around bias mitigation and safety standards, creating near-term compliance complexity.
  • Political salience is rising. With the November 2026 midterms approaching, public concerns over AI’s impact on jobs, energy costs, and child safety will intensify legislative attention. Provisions with bipartisan appeal, particularly on child safety and ratepayer protection, may advance ahead of others. 

Companies should monitor legislative developments closely and evaluate engagement opportunities, including through industry trade associations, as specific proposals move through the legislative process.

To receive the latest insights on US legal developments, subscribe to the Freshfields A Fresh Take Blog.

Tags

ai, political change, us