April 3, 2026

The White House’s AI Legislative Framework and the Unsettled Future of State AI Laws

This article is the fourth installment in our series on the White House’s AI Action Plan and America’s Evolving AI Posture (the Plan). Read the introduction to our series here. In Part Three of the series, we looked at a US initiative to advance the export of full-stack US AI. In Part Four, we examine the Administration’s National Policy Framework for Artificial Intelligence and its implications for federal preemption, state-level regulation, and AI governance.


In Part One of this series, we discussed how the White House’s July 2025 “America’s AI Action Plan” signaled a pro-innovation posture at the federal level but did not prevent states from continuing to develop their own AI laws or reduce companies’ exposure to existing litigation, enforcement, and compliance risks. 1 This article builds on that discussion by examining the Trump administration’s March 20, 2026, “National Policy Framework for Artificial Intelligence Legislative Recommendations” and the central governance question it raises: Will Congress create a national AI standard that displaces portions of the current state-law patchwork? 2 3

The Trump administration’s new framework urges Congress to adopt a “minimally burdensome national standard” and to preempt state AI laws that impose “undue burdens.” 4 But the framework is still a legislative recommendation, not operative federal law. For companies making governance decisions now, that distinction matters. Until Congress acts, companies still must navigate existing state-law requirements, continued federal enforcement under current authorities, and uncertainty about how far any future preemption regime would go. 

The State-Law Landscape

Before assessing how the Administration’s framework could affect companies’ AI use, it is important to recognize the environment in which businesses operate today. While the White House is advocating for a more uniform national approach, state-level obligations already exist and will remain relevant unless and until Congress displaces them.

Colorado is the clearest example of a state adopting a comprehensive regime focused on high-risk AI. When the Colorado AI Act takes effect on June 30, 2026, it will impose risk management, impact assessment, and documentation and disclosure obligations on both developers and deployers of high-risk AI systems, including those who deploy systems used to make or materially influence consequential decisions in areas such as employment, housing, lending, insurance, healthcare, and education. 5

California is also developing its own AI-related legislation but through a more fragmented model. Rather than enact a single comprehensive statute, California has adopted a number of separate provisions touching employment, transparency, healthcare, and automated decision-making. Many of those requirements are already in effect or will become effective later this year. 6

Other states continue to move in their own directions. Texas has advanced broader AI legislation, while Illinois and New York have enacted or developed more targeted requirements tied to specific use cases or sectors. Utah and New Jersey are also considering or applying broader consumer protection concepts in the AI context. 7

For legal and compliance teams, this diversity of approaches is the immediate challenge. Companies are not operating from one national rulebook. They are instead navigating a mix of comprehensive state statutes, narrower disclosure and employment requirements, and generally applicable state laws that may still address AI-related conduct. Even if Congress ultimately moves toward a more unified federal standard, existing state obligations remain relevant unless and until they are displaced. 

Why the Framework Matters Now

It is important to distinguish between executive policies, legislative proposals, and operative law. The March 2026 framework is an executive policy that makes a set of recommendations to Congress. It builds on the President’s December 11, 2025, executive order, which articulated a national policy aimed at avoiding a fragmented patchwork of state AI requirements. 8

The March framework goes further by recommending that Congress preempt state AI laws that impose undue burdens while preserving certain categories of state authority. 9 Specifically, the framework states that a federal standard should not preempt states’ traditional police powers to enforce laws of general applicability, including laws designed to protect children, prevent fraud, and protect consumers. 10 At the same time, the framework says that states should not be permitted to regulate “AI development” or penalize AI developers for a third party’s unlawful conduct involving their models. 11 In that respect, the framework does more than advocate for a general federal standard.

By articulating its preferred approach in more concrete terms, the Administration has increased the importance of corporate planning around AI governance and compliance. It has also sharpened the debate over preemption, liability, and how much room states will retain to regulate AI-related harms.

The Present Reality

The framework’s practical effect will depend on Congress, and Congress’s disposition remains uncertain. On the same day that the White House released the framework, Representative Don Beyer and other Democratic lawmakers announced the GUARDRAILS Act, which would repeal the December 2025 executive order. 12 According to Representative Beyer’s release, he introduced the proposed GUARDRAILS Act in direct response to the Administration’s effort to displace state-level AI restrictions and moratoria. 13

But there are competing bills headed in very different directions. On March 18, Senator Marsha Blackburn introduced a separate Senate proposal entitled “TRUMP AMERICA AI Act.” 14 15  The proposed bill would create federal liability provisions for AI developers and deployers and address a wide range of issues, including chatbot duty-of-care obligations, child safety controls, content provenance, copyright, and training data accountability. 16 Blackburn’s bill illustrates that even legislation aligned in theme with the Administration’s broader AI agenda may take a materially different regulatory approach.

The jockeying in Congress suggests that a clean, light-touch federal preemption statute is not imminent. Even among policymakers who favor a stronger federal hand, it is unclear how federal oversight will look in practice, particularly on liability, copyright, and the extent to which generally applicable laws should continue to apply to AI-related conduct.

For now, the more immediate reality is the continued tension between a federal policy agenda favoring lighter-touch regulation and preemption, and state efforts to address AI-related harms through both targeted statutes and generally applicable legal authorities.

Practical Implications for Governance

Although companies do not need to know the final direction of federal AI policy before acting, they do need governance structures that can endure shifting state requirements, federal priorities, and enforcement theories.

First, companies should maintain an AI governance structure anchored in core controls rather than in any single pending state or federal proposal. That structure should be capable of adapting as state requirements evolve, federal priorities shift, and courts clarify the scope of any eventual preemption regime. For example, a company might establish a consistent internal process for classifying higher-risk AI uses, documenting testing, requiring human review where appropriate, and setting clear approval procedures, rather than rebuilding its AI governance approach each time a new law or policy emerges.

Second, companies should apply greater diligence to vendors and third-party AI tools. The White House framework suggests a more protective view of AI development, 17 but the broader legislative landscape still points in different directions. Senator Blackburn’s proposed bill, for example, would take a harder line on issues such as liability, copyright, provenance, and duty of care, including for companies deploying third-party AI products. 18 For that reason, companies should not assume that using a third-party model or application reduces their risk. Before adopting and deploying third-party AI tools, organizations should continue to scrutinize contracting terms, data provenance, audit rights, testing expectations, and escalation procedures.

Third, companies should carefully review their own public statements about AI. Marketing language, investor presentations, product claims, and governance representations should be tested against technical reality, documentation, and actual internal controls. Legal and compliance teams should ensure that they are able to answer the question, “If a regulator, investor, or plaintiff challenged this statement, could the company demonstrate its accuracy through documents, processes, and practices?”

More broadly, companies should ensure that their AI governance structures are capable of withstanding scrutiny when higher-risk issues arise. That includes clear ownership, up-to-date documentation, and defined procedures for escalating those issues to legal, compliance, or other appropriate decision-makers. Those needs are not limited to abstract compliance planning. They increasingly map to the kinds of disputes, investigations, disclosure reviews, governance assessments, and litigation readiness exercises that organizations may need to undertake under privilege and in coordination with counsel as AI-related risk matures. A&M’s AI Litigation, Enforcement, and Compliance Risk framework is built around that distinction, pairing proactive work such as AI claims and disclosures, risk reviews, and governance evaluations with reactive support for investigations, regulatory response, and litigation readiness. 19

Conclusion

The Administration’s March 2026 National Policy Framework outlines its preferred path forward. That path includes a lighter-touch federal regime, meaningful limits on overly burdensome state AI laws, and an ongoing emphasis on innovation, deployment, and competition. 20 But the framework does not itself displace the current compliance landscape.

For now, companies remain subject to an uneven mix of state requirements, federal enforcement through existing authorities, and political uncertainty over whether Congress will adopt the White House’s approach. Companies will be better positioned if they treat this period not as a reason to pause their AI governance efforts, but as a reason to build programs that are adaptable, well documented, and resilient as regulatory priorities shift, and capable of supporting both proactive risk management and reactive response when scrutiny arrives.


Citations

  1. Steve Spiegelhalter and Cameron Radis, "The AI Action Plan and What It Means for US Governance Going Forward," Alvarez & Marsal, July 23, 2025.
  2. The White House, "A National Policy Framework for Artificial Intelligence Legislative Recommendations," March 20, 2026.
  3. The White House, "President Donald J. Trump Unveils National AI Legislative Framework," Press Release, March 20, 2026.
  4. The White House, "A National Policy Framework for Artificial Intelligence Legislative Recommendations."
  5. Orrick, "U.S. AI Law Tracker."
  6. Ibid.
  7. Ibid.
  8. The White House, "Ensuring a National Policy Framework for Artificial Intelligence," Executive Order, December 11, 2025.
  9. The White House, "A National Policy Framework for Artificial Intelligence Legislative Recommendations."
  10. Ibid.
  11. Ibid.
  12. U.S. Representative Don Beyer, "Beyer, Matsui, Lieu, Jacobs, McClain Delaney Introduce Legislation to Repeal White House AI Moratorium," Press Release, March 20, 2026.
  13. Ibid.
  14. Jeffrey M. Kelly, "The White House Releases National AI Legislative Framework," Nelson Mullins, March 20, 2026.
  15. U.S. Senator Marsha Blackburn, "Blackburn Releases Discussion Draft of National Policy Framework for Artificial Intelligence," Press Release, March 18, 2026.
  16. Ibid.
  17. The White House, "A National Policy Framework for Artificial Intelligence Legislative Recommendations."
  18. U.S. Senator Marsha Blackburn, "Blackburn Releases Discussion Draft of National Policy Framework for Artificial Intelligence."
  19. Cameron Radis and Jonathan Marshall, "AI Litigation, Enforcement, and Compliance Risk: A Structured Response Framework – Q4 2025 Regulatory Update," Alvarez & Marsal, November 13, 2025.
  20. Seung Min Kim and Matt O’Brien, "White House urges Congress to take a light touch on AI regulations in new legislative blueprint," Associated Press, updated March 20, 2026.