National AI Policy Framework

March 23, 2026

The Trump administration released a National Policy Framework for Artificial Intelligence (PDF) on March 20, 2026. It's a four-page legislative blueprint sent to Congress outlining how the federal government wants to govern AI in America.

The document isn't law yet. But it signals the direction of federal AI policy, sets the terms of a coming congressional fight, and raises an immediate question for every business currently building compliance programs around state AI laws: should you keep going?

The short answer is yes. Here's why, and what the framework actually says.

Contact STACK to discuss where your current AI governance program stands.

What the Framework Is

The framework fulfills a directive from President Trump's Dec. 11, 2025, executive order titled Ensuring a National Policy Framework for Artificial Intelligence, which tasked White House science and technology adviser Michael Kratsios and AI czar David Sacks with developing a national legislative recommendation. The administration wants Congress to convert the framework into law "this year," Kratsios told Fox News the day before its release.

That timeline is ambitious. Federal preemption of state AI laws has failed three times in the past year. It was stripped from the One Big Beautiful Bill budget reconciliation package by a 99-1 Senate vote, excluded from the 2026 National Defense Authorization Act after bipartisan opposition, and blocked by Republican governors including Ron DeSantis and Sarah Huckabee Sanders.

More than 50 Republican state lawmakers sent a letter to Trump earlier this month expressing concern that federal preemption strips states of sovereignty. The framework is now the administration's fourth attempt to establish federal supremacy over AI regulation.

Seven Pillars

The framework organizes its legislative recommendations into seven areas. Child safety comes first: Congress should require AI platforms to implement safeguards against sexual exploitation and self-harm, give parents tools to manage children's privacy and screen time, and establish age-assurance requirements. The framework preserves state authority to enforce existing child protection laws, which has become a primary concession to win Republican support.

On communities, the framework asks Congress to prevent residential electricity rate increases from AI data center construction, streamline federal permitting for AI infrastructure, and strengthen law enforcement tools against AI-enabled scams targeting seniors and other vulnerable populations. It also asks Congress to provide small businesses with grants, tax incentives, and technical assistance to deploy AI tools, a provision worth noting for small business owners wondering how the framework affects them directly.

On intellectual property, the administration takes a careful position. It believes training AI models on copyrighted material doesn't violate copyright law, but acknowledges the argument to the contrary exists and wants courts to resolve it instead of Congress.

Congress is asked to explore voluntary licensing frameworks and to establish federal protections against unauthorized use of individuals' voice, likeness, or other identifiable attributes, with clear carve-outs for parody, satire, and news reporting.

The anti-censorship section asks Congress to prohibit the federal government from coercing AI platforms to moderate content based on partisan or ideological agendas, and to give Americans an avenue to seek redress if government agencies attempt to influence what AI platforms say. The innovation section calls for regulatory sandboxes, access to AI-ready federal datasets, and no new federal AI regulatory body. Oversight would remain with existing sector regulators, not a new dedicated agency.

The workforce section asks Congress to use non-regulatory methods to integrate AI into existing education and training programs and to study AI-driven job displacement at the task level to inform policy. The final and most consequential section addresses preemption of state AI laws.

Remove Undue Burdens

The framework's center of gravity is its call for Congress to preempt state AI laws that "impose undue burdens" in favor of a single minimally burdensome national standard. But the preemption framework is more nuanced than the headline suggests, and understanding the carve-outs matters as much as understanding the mandate.

The framework explicitly says Congress shouldn't preempt state traditional police powers. This means states can still enforce laws of general applicability against AI developers and users, including consumer protection laws, fraud laws, and laws protecting children.

States retain authority over zoning for AI infrastructure. States can still regulate their own government's use of AI, including in law enforcement and public education.

What the framework says states shouldn't be permitted to do: regulate AI development itself, because "it is an inherently interstate phenomenon with key foreign policy and national security implications." States shouldn't unduly burden Americans' use of AI for activity that would otherwise be lawful. And states shouldn't penalize AI developers for a third party's unlawful conduct involving their models.

That last provision is the most contested. Critics argue it would shield AI developers from accountability even when their tools are used to generate deepfakes, enable fraud, or harm consumers, so long as a third party, not the developer, is the proximate bad actor. Supporters argue it prevents developers from being held responsible for conduct they didn't direct and couldn't control, similar to the rationale behind Section 230 for social media platforms.

Political Challenges

The same day the White House released its framework, House Democrats introduced the GUARDRAILS Act (the Guaranteeing and Upholding Americans' Right to Decide Responsible AI Laws and Standards Act), which would explicitly prohibit federal preemption of state AI laws.

Sen. Brian Schatz of Hawaii filed companion legislation in the Senate. The partisan counter-move arrived within hours of the administration's announcement.

On the Republican side, Sen. Marsha Blackburn released a companion draft bill called the TRUMP AMERICA AI Act, which attempts to package preemption with child safety provisions and other bipartisan-friendly elements specifically to build enough support to overcome previous opposition. That strategy acknowledges the political problem directly: preemption alone has no path to 60 votes in the Senate. Bundling it with child safety is the attempt to find one.

The administration also missed its own March 11 deadline. The December executive order required the Commerce Secretary to publish a list of "onerous" state AI laws that the administration considers targets for challenge. As of March 21, that list has not been released, and the absence matters: without it, businesses cannot know which state laws the administration intends to challenge.

What This Means for Your Compliance Plans

The framework changes the political landscape but not the legal one. Every state AI law currently on the books remains enforceable today. The framework is a recommendation, not a statute. Even if Congress acts, a final law would need to be signed, and its preemption provisions would face immediate legal challenge from states asserting their authority. Analysts across the political spectrum say this is unlikely to happen before the November midterm elections.

The practical guidance from virtually every major law firm that has analyzed the situation is consistent: proceed as if state laws will remain in effect in the near term, while monitoring federal developments closely. As Ropes & Gray noted in their analysis, preemption is not self-executing. Unless and until a court invalidates a specific state law on preemption grounds, compliance with that law is still required.

For businesses in STACK's client base, the near-term compliance calendar looks like this. Texas's TRAIGA has been in effect since January 1, 2026, and requires compliance now. Colorado's AI Act is scheduled to take effect June 30, 2026 — though a working group just reached consensus on a significant rewrite that still needs to pass the legislature before the session ends in May. California's AI transparency mandates are rolling out in stages. The EU AI Act's full high-risk regime takes effect August 2, 2026, for companies with EU operations.

Defense contractors face an additional layer of consideration. CMMC compliance requirements for controlled unclassified information apply regardless of what happens to state AI laws. Using consumer AI platforms to discuss contract performance, incident response, or compliance matters creates data protection exposure that exists independently of the federal-versus-state regulatory debate.

Compliance Paradox

Here's the practical reality the framework creates: the underlying requirements across virtually every AI governance framework — federal, state, and international — converge on the same four things. Know what AI systems you're using. Document how they work and what decisions they influence. Protect the data they process. Have a plan for when something goes wrong.

Whether the final law is the White House framework, Colorado's rewritten AI Act, Texas's TRAIGA, or the EU AI Act, a business that has inventoried its AI tools, documented its governance policies, implemented appropriate data protections, and aligned with the NIST AI Risk Management Framework will be better positioned than one that waited to see which law survives. The frameworks change. The underlying requirements don't.
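To make the four convergent requirements concrete, here is a minimal sketch of what an AI system inventory might look like in practice. This is purely illustrative: the record fields, names, and the `compliance_gaps` helper are assumptions for demonstration, not any official schema from NIST, a state law, or the White House framework.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an AI system inventory (illustrative fields only)."""
    name: str
    vendor: str
    decisions_influenced: list = field(default_factory=list)  # e.g. hiring, support
    data_categories: list = field(default_factory=list)       # e.g. PII, CUI
    documented: bool = False      # governance policy on file?
    incident_plan: bool = False   # response plan for failures or misuse?

def compliance_gaps(inventory):
    """Return names of systems missing documentation or an incident plan."""
    return [r.name for r in inventory if not (r.documented and r.incident_plan)]

inventory = [
    AISystemRecord("chat-assistant", "ExampleVendor", ["customer support"],
                   ["PII"], documented=True, incident_plan=True),
    AISystemRecord("resume-screener", "OtherVendor", ["hiring"], ["PII"]),
]
print(compliance_gaps(inventory))  # flags the undocumented resume screener
```

Even a spreadsheet with these same columns satisfies the underlying idea: an auditable record of what you run, what it touches, and who answers when it fails.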

Small Business Angle

One provision in the framework that deserves more attention than it's received: the call for Congress to provide AI resources to small businesses through grants, tax incentives, and technical assistance programs. This is a direct acknowledgment that the compliance and adoption burden of AI governance falls disproportionately on smaller companies that lack the legal and technical infrastructure of large enterprises.

If enacted, these provisions could create meaningful support for smaller defense contractors, health care providers, and manufacturers — exactly the businesses most exposed to both the competitive opportunities and the compliance risks that AI creates. Whether Congress acts on this provision depends on the same political dynamics that affect everything else in the framework.

What to Watch For

Several near-term developments will signal whether the framework has real legislative momentum or remains an aspirational document. The Commerce Secretary's overdue list of "onerous" state AI laws, required by the December executive order, will reveal which specific state provisions the administration intends to challenge and how aggressively it plans to use the DOJ's AI Litigation Task Force. Colorado's legislature has until May 13 to pass the working group's rewrite of the state's AI Act before the June 30 effective date. Any movement on the Blackburn bill or a companion House measure in committee will indicate whether Congressional Republicans can hold together a majority.

The midterm elections in November add a deadline that concentrates legislative minds. Passing a controversial federal preemption bill in an election year requires either broad bipartisan support, which the child safety provisions are designed to generate, or a level of Republican unity that has not materialized on this issue in three previous attempts.

Shaping Legislative Debate

The White House framework represents the most comprehensive federal AI governance proposal to date, and it will shape the legislative debate for the rest of 2026. Its light-touch approach — no new regulatory agency, sector-specific oversight, innovation-first orientation — reflects the administration's consistent position on AI since taking office.

But "shaping the debate" and "becoming law" are different things. Three previous preemption attempts failed with significant bipartisan opposition. The political coalition needed to pass this framework does not yet exist, and the same Republican state lawmakers who killed the previous attempts remain in their offices, with the same concerns about state sovereignty.

For business owners and compliance teams, the framework is important context, not a reason to pause. State laws are in effect. Federal law is not. Build your governance program toward the requirements you face today, and structure it to adapt as the federal picture becomes clearer.

STACK Cybersecurity helps businesses navigate AI governance requirements across state and federal frameworks, including CMMC alignment, vendor risk assessments, and AI use policy development. Contact us to discuss where your AI governance program stands.

Sources

The White House. "A National Policy Framework for Artificial Intelligence: Legislative Recommendations." March 20, 2026.

The White House. "President Donald J. Trump Unveils National AI Legislative Framework." March 20, 2026.

Sullivan & Cromwell LLP. "Trump Administration Releases National Policy Framework on Artificial Intelligence." March 20, 2026.

Ropes & Gray LLP. "Examining the Landscape and Limitations of the Federal Push to Override State AI Regulation." March 2026.

Roll Call. "White House AI Framework Calls for Preemption of State Laws." March 20, 2026.

CNBC. "Trump Administration Unveils National AI Policy Framework to Limit State Power." March 20, 2026.

Tech Policy Press. "Trump and GOP Lawmakers Push for New National AI Legislation." March 20, 2026.

Biometric Update. "Senate Republicans Press National AI Framework to Preempt States." March 20, 2026.

Paul Hastings LLP. "President Trump Signs Executive Order Challenging State AI Laws." Dec. 2025.

Nextgov/FCW. "Tech Bills of the Week: Anti-AI Moratorium Efforts; Supporting Small AI Businesses." March 20, 2026.