
Comprehensive List of State AI Laws

Sept. 21, 2025


Last Updated: Jan. 30, 2026

Editor's Note: This post has been updated to reflect the Trump administration's December 2025 executive order attempting to block state AI regulations and new state laws taking effect in January 2026.

Across the United States, state legislatures have moved decisively to fill the regulatory void left by limited federal action on artificial intelligence (AI). In 2025 alone, 1,208 AI-related bills were introduced across all 50 states, with 145 enacted into law. According to the National Conference of State Legislatures (NCSL), 38 states adopted or enacted around 100 AI-related measures in 2025. These state-level initiatives reveal several distinct approaches to AI governance, with regulations ranging from comprehensive to narrowly targeted. For the most current legislative tracking, the NCSL Artificial Intelligence 2025 Legislation database provides comprehensive updates.

Key Compliance Dates at a Glance

| Effective Date | Law | Jurisdiction | Primary Focus |
| --- | --- | --- | --- |
| Jan. 1, 2026 | AB 2013 | California | Training data transparency for generative AI |
| Jan. 1, 2026 | SB 53 (TFAIA) | California | Frontier AI safety reporting ($500M+ developers) |
| Jan. 1, 2026 | TRAIGA | Texas | Generative AI disclosures, deployer inventory |
| Jan. 1, 2026 | HB 3773 | Illinois | AI in employment decisions, BIPA updates |
| June 30, 2026 | SB 24-205 | Colorado | Algorithmic discrimination, impact assessments |
| Aug. 2, 2026 | SB 942 | California | AI content transparency (1M+ monthly visitors) |
| Aug. 2, 2026 | EU AI Act | European Union | High-risk AI system requirements |
| Jan. 1, 2027 | RAISE Act | New York | Frontier AI safety and transparency, 72-hour incident reporting |

Note: Dates are subject to change. Several states have delayed implementation to refine requirements. Check individual state resources for the most current information.


Risks Driving Regulatory Urgency

The regulatory landscape has been influenced by growing concerns about AI's potential long-term risks. In May 2023, over 350 AI executives, researchers, and engineers signed a statement warning that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Signers included leaders from OpenAI, Google DeepMind, and Anthropic. This unprecedented warning from the very architects of advanced AI systems has added urgency to regulatory discussions.

A 2024 report commissioned by the U.S. State Department concluded advanced AI systems could, in a worst-case scenario, "pose an extinction-level threat to the human species," based on interviews with executives from leading AI companies, cybersecurity researchers, and national security officials. These high-profile warnings have accelerated debate about appropriate governance frameworks to address both near-term harm and long-term safety concerns.

California: Leading with Comprehensive Regulation

California remains the most active state in AI regulation, enacting 24 AI-related laws across the 2024 and 2025 legislative sessions. While Gov. Gavin Newsom vetoed several high-profile bills, including SB 1047 (Safe and Secure Innovation for Frontier Artificial Intelligence Models Act) and SB 7 (No Robo Bosses Act), the state has pursued targeted regulations across multiple domains.

Check out a comprehensive list of California AI regulations.

Transparency Requirements

The California AI Transparency Act (SB 942) requires providers of generative AI systems that are publicly accessible within California and have more than 1 million monthly visitors to implement comprehensive measures disclosing when content has been generated or modified by AI. The Act establishes requirements for AI detection tools and content disclosures, with penalties of $5,000 per violation per day for noncompliance. The effective date has been delayed from Jan. 1, 2026, to Aug. 2, 2026, following passage of AB 853.
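For teams thinking through what a machine-readable content disclosure might look like in practice, here is a minimal, hypothetical sketch of a provenance record identifying an item as AI-generated. The field names and structure are illustrative assumptions only, not the statute's required schema.

```python
"""Hypothetical sketch of a latent (machine-readable) disclosure record of the
kind SB 942 contemplates for AI-generated content. Field names are
illustrative assumptions, not the statute's required format."""

import json
import uuid
from datetime import datetime, timezone


def build_latent_disclosure(provider: str, system_name: str, system_version: str) -> dict:
    """Assemble provenance metadata identifying a piece of content as AI-generated."""
    return {
        "ai_generated": True,
        "provider": provider,
        "system": {"name": system_name, "version": system_version},
        "created_at": datetime.now(timezone.utc).isoformat(),
        "content_id": str(uuid.uuid4()),  # unique identifier for this content item
    }


# Example: serialize the disclosure for embedding alongside generated content
print(json.dumps(build_latent_disclosure("ExampleAI", "image-gen", "2.1"), indent=2))
```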

The Generative Artificial Intelligence Training Data Transparency Act (AB 2013), effective Jan. 1, 2026, requires developers of generative AI systems intended for public use in California to publish high-level information about the training data used, including dataset summaries, intellectual property and privacy flags, and processing history.
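As a rough illustration of the kind of high-level dataset documentation AB 2013 contemplates, the sketch below assumes a simple JSON structure. The field names are hypothetical; the statute itself defines what developers must actually publish.

```python
"""Illustrative sketch (assumed schema) of a high-level training data summary
in the spirit of AB 2013. Field names and values are hypothetical."""

import json

training_data_summary = {
    "system": "example-genai-model",
    "datasets": [
        {
            "name": "public-web-corpus",             # high-level dataset description
            "source": "publicly available web text",
            "collection_period": "2020-2024",
            "contains_personal_information": False,  # privacy flag
            "contains_copyrighted_material": True,   # intellectual property flag
            "processing": ["deduplication", "toxicity filtering"],
        },
    ],
}

# Publish or serve this summary alongside the model's public documentation
print(json.dumps(training_data_summary, indent=2))
```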

California enacted SB 53 (Transparency in Frontier Artificial Intelligence Act) in September 2025, targeting large frontier developers with annual revenue exceeding $500 million. The law requires disclosure of risk management protocols and transparency reports about frontier models, with reporting requirements for critical safety incidents and whistleblower protections for employees who report safety concerns. The law took effect Jan. 1, 2026.

Companion Chatbot Regulation

The state also passed SB 243, which regulates companion chatbots by requiring operators to disclose when users are interacting with AI rather than a human and to maintain protocols to prevent the production of self-harm content. The law applies to AI systems "capable of meeting a user's social needs," and the disclosure requirement applies whenever a reasonable person could be misled into believing they are interacting with a human. California provides an exception for chatbots used only for customer service.

Deepfakes and Explicit Content

California has approved numerous regulations addressing deepfakes and explicit content. SB 926 criminalizes the creation or distribution of AI-generated sexually explicit images with intent to cause serious emotional distress. SB 981 requires social media platforms to establish reporting mechanisms for deepfake nudes, with requirements to temporarily block such content during investigation and permanently remove it if confirmed. AB 1831 expands child pornography laws to include AI-generated content. AB 1836 protects digital replicas of deceased performers from unauthorized AI reproduction.

Election Integrity

For election integrity, AB 2655 (Defending Democracy from Deepfake Deception Act) requires large online platforms to block or label deceptive AI-generated content related to elections, while AB 2839 prohibits distribution of materially deceptive election content. However, a federal judge blocked AB 2839 in 2024 on First Amendment grounds, leading most other states to adopt disclosure requirements rather than outright prohibitions. AB 2355 mandates that political advertisements using AI-generated content include clear disclosures.

Colorado: Risk-Based Framework for Algorithmic Discrimination

Colorado has been at the forefront of AI regulation with its landmark Colorado Anti-Discrimination in AI Law (SB 24-205), enacted May 17, 2024. This comprehensive framework focuses on protecting consumers from algorithmic discrimination in high-risk AI systems that make consequential decisions affecting employment, housing, education, health care, financial services, government services, insurance, and legal services.

Check out our Colorado AI Act Compliance Guide for information on developer and deployer requirements, impact assessments, consumer notices, exemptions, and compliance preparation steps.

The law imposes a duty of reasonable care on both developers and deployers, requiring steps to protect against discrimination based on protected characteristics. Developers must provide documentation about data sources, limitations, and risk mitigation strategies, while deployers must conduct impact assessments, provide notice to consumers, and establish appeal processes for adverse decisions.

Colorado strongly encourages use of the National Institute of Standards and Technology (NIST) AI Risk Management Framework for governance. Violations of the law are treated as unfair and deceptive trade practices under Colorado law.

The effective date has been postponed multiple times to allow for further refinement. In August 2025, Gov. Jared Polis signed SB 25B-004, delaying implementation from Feb. 1, 2026, to June 30, 2026. Previously, Colorado passed SB 21-169 in 2021, addressing AI use in insurance underwriting and prohibiting insurers from using external consumer data and algorithms in ways that unfairly discriminate based on protected characteristics.

Tennessee: The ELVIS Act Protects Voice and Likeness

Tennessee made history on March 21, 2024, becoming the first state to enact comprehensive legislation protecting musicians and individuals from unauthorized AI voice cloning. The Ensuring Likeness Voice and Image Security (ELVIS) Act, effective July 1, 2024, expands the state's existing right of publicity law to explicitly include voice protection against AI-generated replicas.

The law defines "voice" broadly as "a sound in a medium that is readily identifiable and attributable to a particular individual, regardless of whether the sound contains the actual voice or a simulation of the voice." The ELVIS Act creates both civil and criminal penalties, with violations constituting a Class A misdemeanor punishable by up to one year in jail and fines up to $2,500.

Notably, the Act targets not only those who create unauthorized voice replicas but also technology providers, creating liability for anyone who "distributes, transmits, or otherwise makes available an algorithm, software, tool, or other technology" whose primary purpose is creating unauthorized voice or likeness replicas. This provision potentially subjects AI platform providers to liability, marking a significant expansion in accountability for technology companies.

Utah: First Comprehensive Consumer Protection AI Law

Utah became the first U.S. state to enact major AI consumer protection legislation when Gov. Spencer Cox signed SB 149 (the AI Policy Act) on March 13, 2024, taking effect May 1, 2024. The law requires entities using generative AI to interact with consumers in commercial activities to provide clear and conspicuous disclosure.

In 2025, Utah significantly narrowed the law's scope through SB 226, which amended the disclosure requirements to apply only when directly asked by consumers or during high-risk interactions involving health, financial, or biometric data collection. The amendment also creates a safe harbor for entities that disclose AI use at the outset and throughout interactions. For regulated occupations, individuals providing services must disclose generative AI use only in high-risk interactions.

SB 332 extended the law's effectiveness until July 1, 2027 (originally set to expire in 2025). Additionally, HB 452 introduced specific regulations for AI-supported mental health chatbots, including bans on advertising products during user interactions and prohibitions on sharing users' personal information. The law requires providers to make clear and conspicuous disclosures to users at several points in time, including prior to initially accessing the chatbot, when a user revisits the chatbot after not using it for more than seven days, and when asked by the user.
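A minimal sketch of how an operator might encode those disclosure trigger points, assuming a simple session model; the function, parameters, and example dates are hypothetical, and the statute governs the actual requirements.

```python
"""Minimal sketch of disclosure-trigger logic for an AI mental health chatbot
in the spirit of Utah HB 452. Function and field names are hypothetical."""

from datetime import datetime, timedelta

INACTIVITY_WINDOW = timedelta(days=7)  # disclose again after 7+ days away


def disclosure_required(first_session: bool,
                        last_active: datetime | None,
                        now: datetime,
                        user_asked: bool) -> bool:
    """Return True if a clear and conspicuous AI disclosure must be shown."""
    if first_session:          # prior to initially accessing the chatbot
        return True
    if user_asked:             # whenever the user asks
        return True
    if last_active is not None and now - last_active > INACTIVITY_WINDOW:
        return True            # returning after more than seven days of inactivity
    return False


# Example: a user returns after ten days of inactivity -> disclosure required
print(disclosure_required(False, datetime(2026, 1, 1), datetime(2026, 1, 11), False))
```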

Texas: Responsible AI Governance

Texas enacted the Responsible Artificial Intelligence Governance Act (TRAIGA) through HB 149, which took effect Jan. 1, 2026. The law prohibits intentionally developing or deploying AI systems to incite or encourage harm to self or others, engage in criminal activity, infringe on constitutional rights, engage in unlawful discrimination against protected classes in violation of state or federal law, or produce deepfakes or child pornography.

The enacted version of TRAIGA replaced an earlier proposal that was similar to Colorado's AI Act. Texas also established an AI advisory council through SB 1893 and enacted HB 2060, requiring impact assessments for AI systems used in public services.

Employment and Health Care AI Regulations

Illinois

Illinois has been an early mover in AI regulation, particularly in employment contexts. The Illinois Artificial Intelligence Video Interview Act (820 ILCS 42), effective since January 2020, requires employers who use AI to analyze video interviews to notify applicants before the interview that AI may be used, explain how the AI works and what characteristics it evaluates, obtain consent from applicants, limit sharing of videos, and delete videos upon request within 30 days.

In August 2024, Illinois enacted HB 3773, which amends the Illinois Human Rights Act to regulate AI in employment more broadly. Effective Jan. 1, 2026, the law prohibits employers from using AI that has a discriminatory effect on employees based on protected characteristics and requires notice to employees when AI is used for recruitment, hiring, promotion, and other employment decisions.

Illinois has also acted on AI in health care, becoming the first state to ban the commercial use of AI therapy chatbots that could mislead users about the nature of mental health services.

Indiana

Indiana enacted requirements for health care professionals and insurers to disclose to patients when AI is used in health care decisions or communications. The law places similar disclosure requirements on health care insurers when AI systems influence coverage or treatment determinations.

New York

New York City pioneered regulation of AI in hiring with its Automated Employment Decision Tools (AEDT) law (Local Law 144), which began enforcement on July 5, 2023. The law requires employers and employment agencies using AI tools for hiring or promotion decisions to conduct an annual bias audit by an independent auditor, publish a summary of the results on their website, notify job candidates and employees about AI use at least 10 business days before deployment, and disclose the job qualifications and characteristics being evaluated.

The law applies to computational processes derived from machine learning, statistical modeling, or data analytics that substantially assist or replace human decision-making in employment contexts.
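Bias audits under Local Law 144 generally report selection rates and impact ratios by demographic category. The sketch below shows the basic arithmetic on hypothetical data; it is not a substitute for an independent auditor's methodology, and the category labels are placeholders.

```python
"""Illustrative sketch (not legal guidance): computing selection rates and
impact ratios in the style of a NYC Local Law 144 bias audit. Categories and
applicant records are hypothetical."""

from collections import defaultdict

# Hypothetical applicant records: (demographic_category, was_selected)
applicants = [
    ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
selected = defaultdict(int)
for category, was_selected in applicants:
    totals[category] += 1
    if was_selected:
        selected[category] += 1

# Selection rate per category: selected / total applicants in that category
selection_rates = {c: selected[c] / totals[c] for c in totals}

# Impact ratio: each category's rate divided by the highest category's rate
highest_rate = max(selection_rates.values())
impact_ratios = {c: rate / highest_rate for c, rate in selection_rates.items()}

for category in selection_rates:
    print(f"{category}: selection rate {selection_rates[category]:.2f}, "
          f"impact ratio {impact_ratios[category]:.2f}")
```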

At the state level, New York has now expanded its AI governance framework with the Responsible AI Safety and Education (RAISE) Act (A.6453 / S.6953). The RAISE Act passed the New York State Legislature in June 2025 and was signed into law by Gov. Kathy Hochul on Dec. 19, 2025, following extensive negotiations with legislative leaders.

As enacted, the RAISE Act establishes transparency and risk-management safeguards for frontier AI models. The law takes effect Jan. 1, 2027, following chapter amendments expected to be approved in January 2026 that will align the law more closely with California's TFAIA. Covered developers include those with annual revenues exceeding $500 million who develop frontier models trained using greater than 10²⁶ computational operations.

The RAISE Act requires large developers to publish and follow a safety plan, report critical AI safety incidents to state authorities within 72 hours of determining an incident occurred, and refrain from releasing models that fail their own safety testing. This 72-hour reporting window is significantly stricter than California's 15-day requirement. The law creates an oversight office within the Department of Financial Services to assess large frontier developers and issue annual reports.

Civil penalties following chapter amendments will be up to $1 million for first violations and up to $3 million for subsequent violations, enforced by the Attorney General. Both OpenAI and Anthropic expressed support for the RAISE Act, with OpenAI noting that having similar legislation in two large state economies is a positive step for the policy landscape.

New York also enacted S 3008 in 2025, establishing disclosure requirements for "personalized algorithmic pricing," making it one of the first states to regulate AI-driven pricing tools.

Washington

Washington's SB 5827, enacted in 2023, addresses algorithmic discrimination by prohibiting covered entities from discriminating against individuals through automated decision systems based on protected characteristics. The law requires reasonable efforts to test automated systems for algorithmic discrimination and establishes frameworks for transparency and accountability.

Michigan's Comprehensive Approach

Michigan has taken comprehensive action on AI regulation, particularly focusing on election integrity and protection from deepfakes. The state enacted a four-bill package (HB 5141, HB 5143, HB 5144, and HB 5145) in November 2023, effective Feb. 13, 2024, requiring disclaimers on AI-generated political ads and prohibiting deepfake political content within 90 days of an election unless clearly disclosed. The Michigan Campaign Finance Act (Section 169.259) now requires that any qualified political advertisement created using AI must include a clear statement about its AI-generated nature.

Michigan also passed HB 4047 and HB 4048, signed by Gov. Whitmer in August 2025, which criminalize the creation and distribution of nonconsensual intimate AI deepfakes, with enhanced penalties for cases involving extortion, harassment, or profit motives. The laws allow victims to take civil action and establish both criminal penalties and court-ordered restraining orders to prevent further harm.

In October 2024, the Michigan Civil Rights Commission passed a resolution establishing guiding principles for AI use in the state, calling for legislation to prevent algorithmic discrimination, protect privacy, and create a task force to monitor data collection practices.

Additional State Initiatives

Arkansas

Arkansas enacted multiple AI regulations in 2025. HB 1071 amends the state's Publicity Rights Protection Act, originally created to strengthen publicity rights for student athletes, to explicitly cover AI-generated images and voice. HB 1876 establishes ownership rights over content created by generative AI, clarifying that users who provide input to AI tools own the resulting content, provided it doesn't infringe on existing copyrights. HB 1958 requires public entities to develop comprehensive policies regarding the authorized use of AI and automated decision-making technology.

Montana

Montana passed SB 212 in 2025, establishing a "right to compute" that limits government restrictions on private ownership or use of computational resources. The law requires that any restrictions be narrowly tailored to fulfill a compelling government interest. It also mandates that critical infrastructure facilities controlled by AI systems develop risk management policies based on national or international AI risk management frameworks. Montana is one of four states (alongside Arkansas, Pennsylvania, and Utah) that passed digital replica laws in 2025 to protect digital identity and consent.

Pennsylvania enacted digital replica protections in 2025, joining Arkansas, Montana, and Utah in safeguarding individuals' digital likenesses from unauthorized AI reproduction. Kentucky enacted SB 4, directing the Commonwealth Office of Technology to create policy standards governing AI use.

Maryland passed HB 956 to form a group studying private sector AI and providing legislative recommendations. West Virginia passed HB 3187 to create a task force identifying AI opportunities and best practices for public sector use.

Maine passed the Chatbot Disclosure Act in June 2025, which requires businesses that use AI chatbots to communicate with consumers to notify those consumers they aren't interacting with a human in cases where a reasonable consumer couldn't tell the difference. Maine's law became effective Sept. 24, 2025, and is enforceable under the Maine Unfair Trade Practices Act.

Kansas and Oregon have prohibited the use of foreign-owned AI systems (including DeepSeek) on state computers, with Oregon also prohibiting AI systems from posing as licensed medical professionals.

Vermont's S.197 created an AI commission for policy development, and Act 89 addresses AI in insurance underwriting.

Virginia established an AI advisory council through HB 2360, incorporated AI protections into its Consumer Data Protection Act, and created a "regulatory reduction pilot" for AI governance.

Delaware established an agentic AI sandbox for governance experiments, positioning the state to address emerging questions around autonomous AI systems capable of planning and independent action.

Connecticut enacted SB 1103 regulating insurance AI use and SB 2 for generative AI in education.

Massachusetts passed H.5163 requiring hiring AI disclosures.

AI-Generated Child Sexual Abuse Material

One of the most widespread areas of AI regulation across states involves AI-generated or computer-edited child sexual abuse material (CSAM). As of 2025, 45 states have enacted laws criminalizing AI-generated CSAM, with many of these laws passed in 2024-2025 alone. The National Center for Missing and Exploited Children reported receiving 67,000 reports of AI-generated CSAM in 2024, and 440,000 in just the first half of 2025.

Only five states (Alaska, Colorado, Massachusetts, Ohio, and Vermont) and the District of Columbia have not yet criminalized AI-generated CSAM.

Political Deepfakes and Election Integrity

As of 2025, 28 states enacted laws specifically addressing deepfakes used in political communications. These laws generally fall into two categories: disclosure requirements and outright prohibitions. Most states have opted for disclosure requirements due to First Amendment concerns, after a federal judge blocked California's prohibition law (AB 2839) in 2024 on constitutional grounds.

In 2025 alone, 301 deepfake-related bills were introduced across states, with 68 enacted, primarily addressing sexual deepfakes through criminal or civil penalties. This represents one of the most active areas of AI legislation at the state level.

The Federal Moratorium Debate

A particularly contentious proposal that dominated AI policy discussions was an attempt to impose a 10-year moratorium on state AI regulations. Initially introduced as part of President Trump's One Big Beautiful Bill budget reconciliation package, the provision would have prevented states from enforcing their own AI laws for a decade.

The moratorium, championed by Republican Sen. Ted Cruz of Texas, was designed to prevent what supporters called a "regulatory cacophony" of conflicting state policies. Proponents argued that navigating 50 different regulatory frameworks would stifle innovation, create compliance burdens particularly harmful to smaller companies, and potentially hamper America's competitive position against China in AI development.

Tech industry leaders, including OpenAI CEO Sam Altman, had expressed support for federal preemption, with Altman noting it would be "very difficult to imagine us figuring out how to comply with 50 different sets of regulation."

However, the proposal faced overwhelming bipartisan opposition from state officials. In a remarkable display of unity, a coalition of 17 Republican governors led by Arkansas Gov. Sarah Huckabee Sanders sent a letter to congressional leadership opposing the moratorium. The governors argued that "AI is already deeply entrenched in American industry and society; people will be at risk until basic rules ensuring safety and fairness can go into effect."

A bipartisan group of 40 state attorneys general also sent a letter to Congress voicing objections to the proposal as violative of state sovereignty and their efforts to protect consumers. After significant pushback, lawmakers attempted to modify the proposal by shortening the timeframe to five years and exempting certain categories of state laws. Despite these concessions, the revised proposal still faced criticism for containing language that would undermine state laws deemed to place an "undue or disproportionate burden" on AI systems.

Ultimately, in a decisive 99-1 Senate vote, the moratorium was stripped from the budget bill, with even Sen. Cruz joining the overwhelming majority. This outcome represented a significant victory for states' rights advocates but left unresolved the question of how to balance national interests in AI development with legitimate state concerns about protecting citizens.

Trump Administration Executive Order

On Dec. 11, 2025, President Trump signed an executive order aimed at blocking state AI laws, arguing that state-by-state regulation creates a burdensome patchwork that threatens American AI leadership. The order encourages the Attorney General to challenge what it characterizes as onerous and excessive state laws and calls for development of a national AI framework.

The administration specifically criticized laws like Colorado's anti-discrimination requirements, claiming they may force AI models to produce false results to avoid differential impact on protected groups. The executive order states that excessive state regulation thwarts innovation imperatives and that state-by-state regulation "by definition creates a patchwork of 50 different regulatory regimes that makes compliance more challenging, particularly for start-ups."

It also claims that state laws are increasingly responsible for requiring entities to embed ideological bias within models and that some state laws impermissibly regulate beyond state borders, impinging on interstate commerce. The order establishes an AI Litigation Task Force to coordinate federal challenges to state laws and directs the Secretary of Commerce to publish, by March 11, 2026, an evaluation identifying burdensome state AI laws that conflict with federal policy.

New York Assemblymember Alex Bores, sponsor of the RAISE Act, argues his bill does not fit the category of onerous regulation. He notes the RAISE Act was largely based on voluntary commitments that AI companies had already made and pledged to follow, simply ensuring those rules stayed in law and that companies couldn't backslide. "I don't think it's onerous to require companies to do the things that they're already saying they're going to do," Bores said in a recent NPR interview. He also observed that while the executive order encourages the attorney general to sue over these laws, Trump appears to be sending a political message rather than one based on the content of the laws.

Several Democratic governors, including Colorado's Jared Polis, Connecticut's Ned Lamont, and New York's Kathy Hochul, have expressed concern about the challenges posed by varying state regulations. As Governor Lamont noted, "I just worry about every state going out and doing their own thing, a patchwork quilt of regulations," and the potential burden this creates for AI development.

Republican leaders have been equally vocal about the issue, though often divided on the approach. Republican Senator Marsha Blackburn of Tennessee has emphasized that states must retain their ability to protect citizens until comprehensive federal legislation is in place, stating, "Until Congress passes federally preemptive legislation like the Kids Online Safety Act and an online privacy framework, we can't block states from making laws that protect their citizens."

The NIST Standards Approach

A more promising federal pathway has emerged in recent congressional hearings, with growing bipartisan support for leveraging the National Institute of Standards and Technology (NIST) to develop technical standards for AI systems. This approach has proven successful in adjacent domains like cybersecurity and privacy, where the NIST Cybersecurity Framework has achieved widespread voluntary adoption across industries without imposing heavy-handed regulation.

NIST has already developed the AI Risk Management Framework (AI RMF 1.0), which provides a common vocabulary and methodology for identifying, assessing, and mitigating AI risks. The framework emphasizes a flexible, context-specific approach that can accommodate rapid technological changes while still establishing important guardrails.
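For teams operationalizing the framework, a risk register organized around the AI RMF's four functions (Govern, Map, Measure, Manage) is one common starting point. The entry below is a hypothetical sketch; its structure and field names are assumptions for illustration, not a format prescribed by NIST.

```python
"""A minimal, hypothetical risk-register entry organized around the AI RMF's
four functions. Structure and field names are illustrative assumptions."""

ai_risk_register_entry = {
    "risk_id": "RR-001",
    "system": "resume-screening-model",
    "govern": {"owner": "AI governance committee", "policy": "model review policy v1"},
    "map": {"context": "employment screening", "impacted_groups": ["job applicants"]},
    "measure": {"metrics": ["selection-rate impact ratio"], "review_cadence": "quarterly"},
    "manage": {"mitigations": ["human review of adverse decisions"], "status": "open"},
}

# Walk the four AI RMF functions for this entry
for function in ("govern", "map", "measure", "manage"):
    print(function, "->", ai_risk_register_entry[function])
```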

In December 2025, NIST released a preliminary draft of the Cybersecurity Framework Profile for Artificial Intelligence (NISTIR 8596), offering guidelines for using the NIST Cybersecurity Framework (CSF 2.0) to accelerate secure AI adoption. The profile helps entities think strategically about AI adoption while addressing emerging cybersecurity risks, with a comment period open through Jan. 30, 2026.

The NIST standards approach offers several advantages. It leverages multi-stakeholder input from industry, academia, civil society, and government. It creates technically sound, practical guidelines. And it balances innovation with protection. For companies already familiar with NIST frameworks for cybersecurity and privacy compliance, particularly in regulated sectors like healthcare, defense, and financial services, this approach provides continuity and integration with existing governance structures.

The Federalism Debate: National Strategy or Californication?

A growing debate has emerged in Congress around whether the United States needs a unified national approach to AI regulation. A House Judiciary Subcommittee hearing on Sept. 18, 2025, titled "AI at a Crossroads: A Nationwide Strategy or Californication?," examined how the current patchwork of state regulations might impact innovation and impose costs on the AI industry.

The hearing highlighted a pivotal moment for AI regulation in the United States. Proponents of a national strategy argued that fragmented state laws risk hindering innovation and slowing economic progress. The contrast between state-level regulation in the United States and the more unified frameworks adopted by other nations highlights the fundamental tension between fostering innovation and ensuring responsible AI development. The state-by-state approach, sometimes called "Californication" due to California's outsized influence, raises questions about whether companies will effectively be forced to comply with the strictest state standards nationwide.

The Artificial Intelligence Research, Innovation, and Accountability Act of 2024 addresses governance frameworks for high-impact AI systems, requiring risk assessments and management practices. This bipartisan effort signals that despite the state patchwork, there is movement at the federal level toward establishing baseline standards. Many industry stakeholders have implemented voluntary commitments and self-regulation frameworks that complement formal regulations. Leading tech firms have established internal review processes for high-risk AI applications, showing that governance can advance even without formal mandates.

As the 2026 legislative cycle begins, states are expected to revisit unfinished debates from 2025 while turning to new and fast-evolving issues. Several emerging topics will likely dominate policy discussions.

Agentic AI: Legislators are beginning to explore AI agents capable of autonomous planning and action, systems that move beyond generative AI's content creation toward more complex functionality. Early governance experiments include Virginia's regulatory reduction pilot and Delaware's agentic AI sandbox, but few bills directly address these agents. Existing risk frameworks may prove ill-suited for agentic AI, as harms are harder to trace across agents' multiple decision nodes.

Algorithmic Pricing: States are testing ways to regulate AI-driven pricing tools, with bills targeting discrimination, transparency, and competition. New York enacted disclosure requirements for personalized algorithmic pricing, while California, Colorado, and Minnesota have floated their own frameworks. In 2026, lawmakers may focus on more precise definitions or stronger disclosure measures.

Definitional Uncertainty: States continue to diverge in how they define artificial intelligence itself, as well as categories like frontier models, generative AI, and chatbots. These differences will become more consequential as more laws take effect, expanding the compliance landscape for multi-state operations.

International Approaches to AI Regulation

While U.S. states navigate their regulatory approaches, other nations have taken more coordinated action.

European Union: The AI Act

The EU has established the most comprehensive regulatory framework globally through its AI Act, which became legally binding on Aug. 1, 2024. The legislation takes a risk-based approach, categorizing AI systems into four tiers: unacceptable risk (prohibited), high risk (stringent requirements), limited risk (transparency obligations), and minimal risk (no additional obligations).

Key compliance milestones include Feb. 2, 2025, when prohibitions on certain AI practices took effect; Aug. 2, 2025, when rules for general-purpose AI models became applicable; Aug. 2, 2026, when the core framework becomes broadly operational for high-risk systems; and Aug. 2, 2027, when remaining provisions take full effect.

Penalties for noncompliance are substantial: up to EUR 35 million or 7% of worldwide annual turnover for prohibited practices, up to EUR 15 million or 3% for other violations, and up to EUR 7.5 million or 1% for supplying incorrect information to authorities.
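Because the percentage caps apply to worldwide turnover and, for undertakings, the higher of the two figures generally governs, exposure scales quickly with company size. The sketch below works through that arithmetic for a hypothetical EUR 2 billion turnover; figures are illustrative only.

```python
"""Back-of-the-envelope sketch of the EU AI Act penalty ceiling for prohibited
practices, taking the higher of the fixed cap (EUR 35 million) and the
percentage cap (7% of worldwide annual turnover). Turnover is hypothetical."""


def max_penalty_prohibited_practice(annual_turnover_eur: float) -> float:
    """Return the applicable ceiling: the greater of the fixed and percentage caps."""
    return max(35_000_000, 0.07 * annual_turnover_eur)


# Example: a company with EUR 2 billion in worldwide annual turnover
print(f"EUR {max_penalty_prohibited_practice(2_000_000_000):,.0f}")  # EUR 140,000,000
```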

For U.S. companies serving European customers, partnering with EU-based firms, or operating within the European market, understanding and preparing for compliance is essential. The Act's extraterritorial reach means American businesses must comply with EU requirements regardless of where they are headquartered.

For a detailed compliance guide including high-risk system requirements, ISO 42001 alignment, and practical steps for U.S. companies, see our EU AI Act Compliance Guide for U.S. Businesses.

Other International Approaches

United Kingdom: The UK has pursued a more flexible approach with its National AI Strategy, followed by a policy white paper titled "AI Regulation: A Pro-Innovation Approach." This framework is designed to be agile and iterative, recognizing the rapid evolution of AI technologies. Unlike the EU's comprehensive legislation, the UK emphasizes sector-specific regulation through existing regulators rather than creating new AI-specific laws.

Canada: Canada introduced the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27, a framework focused on managing risks associated with high-impact AI systems while supporting innovation. It takes a risk-based approach similar to the EU's but with lighter requirements for lower-risk systems.

China: China has taken a more assertive regulatory stance with requirements for security assessments and algorithm registrations, particularly for generative AI. The country requires AI service providers to register algorithms with the government and conduct security assessments before deployment.

Japan: Japan has pursued a less restrictive approach focused on voluntary guidelines and principles, emphasizing human-centric AI development. The government encourages industry self-regulation while monitoring developments for potential future intervention.

The contrast between the EU's comprehensive, prescriptive approach with substantial penalties and the U.S. state-by-state patchwork highlights fundamental differences in regulatory philosophy. The EU AI Act creates a unified framework across 27 member states with clear categories, timelines, and enforcement mechanisms, while U.S. states continue experimenting with different approaches tailored to local concerns and industries.

Common Regulatory Themes

All state laws emphasize transparency requirements, whether focused on customer interactions in Utah, comprehensive documentation in Colorado, or content labeling in California. Risk-based frameworks are emerging as the dominant approach, with Colorado leading by imposing stricter requirements on high-risk AI systems while allowing lighter regulation for lower-risk applications. Consumer protection drives most legislation, reflecting concerns about algorithmic discrimination, privacy violations, and deceptive practices.

State Leadership

California continued its AI regulatory leadership in the 2025 session, enacting seven new AI laws including landmark frontier AI transparency and companion chatbot legislation. Other active states included Texas (8 AI measures), Montana (6), Utah (5), and Arkansas (5). Across the year, more than 70 AI-related laws passed in at least 27 states, illustrating the rapid evolution of state-level AI governance. States such as Nevada, Montana, North Dakota, and Texas that were less active on AI legislation in 2024 became key players in 2025-2026 AI policy debates.

What This Means for Michigan Businesses

The fragmented regulatory environment creates significant challenges for firms operating across multiple states. Varying definitions of high-risk systems, different disclosure requirements, and inconsistent enforcement mechanisms complicate compliance. Companies developing or deploying AI systems need adaptable frameworks that can accommodate different state standards while anticipating potential federal preemption.

For Michigan-based companies, particularly those in defense contracting and healthcare sectors, several immediate concerns warrant attention:

Colorado's requirements apply if you operate in that state or serve Colorado customers. High-risk AI systems making consequential decisions about employment, housing, healthcare, or financial services require impact assessments and documentation. The June 30, 2026, effective date gives businesses time to prepare compliance programs.

California's transparency mandates apply if you do business in California or have AI systems accessible to California residents with significant user bases. If your AI-generated content reaches California users, disclosure requirements may apply. AB 2013's training data transparency requirements took effect Jan. 1, 2026, while SB 942's detection tool requirements take effect Aug. 2, 2026.

Michigan's own regulations focus primarily on election integrity and deepfake protections through HB 5141, HB 5143, HB 5144, and HB 5145 (political deepfakes) and HB 4047 and HB 4048 (intimate deepfakes). The state's Civil Rights Commission has signaled interest in broader algorithmic discrimination protections similar to Colorado's approach.

CMMC compliance intersections exist with AI governance. Defense contractors already familiar with NIST frameworks will find the NIST AI Risk Management Framework provides natural integration with existing cybersecurity and compliance programs.

EU AI Act considerations apply if you serve European customers, have EU operations, or partner with NATO allies. Defense contractors and healthcare providers with international reach should assess whether their AI systems fall under EU jurisdiction. See our EU AI Act Compliance Guide for detailed requirements and preparation steps.

Whether federal courts will allow the Trump administration to preempt state laws remains uncertain and will likely be determined through litigation in 2026 and beyond.

Governance Requires Balance

As the debate continues, effective AI governance requires balancing innovation with accountability, providing appropriate protections without stifling technological progress. With state legislatures scheduled to reconvene in early 2026 and hundreds more bills expected, the AI regulatory landscape will continue to evolve rapidly in the coming years.

Companies should monitor developments in states where they operate, assess their AI systems against emerging risk frameworks, implement transparency and disclosure practices that meet or exceed current requirements, and prepare for potential federal action that could either preempt state laws or establish baseline national standards.

The tension between state innovation in regulation and federal interest in uniformity will likely define AI governance debates throughout 2026. Until comprehensive federal legislation emerges, states will continue serving as laboratories of democracy, testing different approaches to managing AI's risks while fostering its benefits.

International Considerations for U.S. Businesses

For companies operating internationally or serving customers in multiple jurisdictions, the EU AI Act represents a critical compliance consideration. The Act's extraterritorial reach means that U.S. companies offering AI products or services to EU customers must comply with EU requirements, regardless of where the company is based. This is particularly relevant for defense contractors with NATO partners, health care providers serving international patients, and technology companies with European operations.

The EU's phased implementation timeline, with major milestones in 2025, 2026, and 2027, means businesses should begin compliance planning now rather than waiting for U.S. federal action. Companies already familiar with GDPR compliance will find some conceptual overlap, though the AI Act introduces entirely new categories of requirements around risk management, transparency, and prohibited uses.

For detailed guidance on EU AI Act compliance, including high-risk system requirements, ISO 42001 alignment, and practical steps for U.S. companies, see our EU AI Act Compliance Guide for U.S. Businesses.

For assistance navigating AI compliance requirements and developing governance frameworks aligned with both state regulations and cybersecurity best practices like NIST and CMMC, contact STACK Cybersecurity.

Cybersecurity Consultation

Do you know if your company is secure against cyber threats? Do you have the right security policies, tools, and practices in place to protect your data, reputation, and productivity? If you're not sure, it's time for a cybersecurity risk assessment (CSRA). STACK Cybersecurity's CSRA will meticulously identify and evaluate vulnerabilities and risks within your IT environment. We'll assess your network, systems, applications, and devices, and provide you a detailed report and action plan to improve your security posture. Don't wait until it's too late.

Schedule a Consultation Explore our Risk Assessment