
Comprehensive List of State AI Laws

Sept. 21, 2025

Michigan Capitol Rotunda

Last Updated: December 15, 2025

Editor's Note: This post has been updated to reflect the Trump administration's December 2025 executive order attempting to block state AI regulations and the latest state legislative developments through December 2025.

Across the United States, state legislatures have moved decisively to fill the regulatory void left by limited federal action on artificial intelligence (AI). With over 1,080 AI-related bills introduced in 2024-2025 and 186 laws enacted, these state-level initiatives reveal several distinct approaches to AI governance, with regulations ranging from comprehensive to narrowly targeted.

Risks Driving Regulatory Urgency

The regulatory landscape has been influenced by growing concerns about AI's potential long-term risks. In May 2023, over 350 AI executives, researchers, and engineers signed a statement warning that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Signers included leaders from OpenAI, Google DeepMind, and Anthropic. This unprecedented warning from the very architects of advanced AI systems has added urgency to regulatory discussions.

A 2024 report commissioned by the U.S. State Department concluded advanced AI systems could, in a worst-case scenario, "pose an extinction-level threat to the human species," based on interviews with executives from leading AI companies, cybersecurity researchers, and national security officials. These high-profile warnings have accelerated debate about appropriate governance frameworks to address both near-term harm and long-term safety concerns.

California: Leading with Comprehensive Regulation

California remains the most active state in AI regulation, passing 17 AI-related bills in 2024 and 13 more in 2025. While Gov. Gavin Newsom vetoed the comprehensive SB 1047 (Safe and Secure Innovation for Frontier Artificial Intelligence Models Act) in September 2024 despite its passage through both chambers of the state legislature, California has pursued targeted regulations across multiple domains.

Transparency Requirements

The California AI Transparency Act (SB 942), effective Jan. 1, 2026, requires that providers of generative AI systems publicly accessible within California with more than one million monthly visitors implement comprehensive measures to disclose when content has been generated or modified by AI. The Act establishes requirements for AI detection tools and content disclosures, with penalties of $5,000 per violation per day for non-compliance.

California enacted SB 53 (Transparency in Frontier Artificial Intelligence Act) in September 2025, targeting large frontier developers with annual revenue exceeding $500 million. The law requires disclosure of risk management protocols and transparency reports about frontier models, with reporting requirements for critical safety incidents and whistleblower protections for employees who report safety concerns. The law takes effect January 1, 2026.

Companion Chatbot Regulation

The state also passed SB 243, which regulates companion chatbots by requiring operators to disclose when users are interacting with AI rather than humans and to maintain protocols preventing the production of self-harm content. The law applies to AI systems "capable of meeting a user's social needs" where a reasonable person could be misled into believing they're interacting with a human. California provides an exception for chatbots used only for customer service.

Deepfakes and Explicit Content

California has approved numerous regulations addressing deepfakes and explicit content. SB 926 criminalizes the creation or distribution of AI-generated sexually explicit images with intent to cause serious emotional distress. SB 981 requires social media platforms to establish reporting mechanisms for deepfake nudes, with requirements to temporarily block such content during investigation and permanently remove it if confirmed. AB 1831 expands child pornography laws to include AI-generated content. AB 1836 protects digital replicas of deceased performers from unauthorized AI reproduction.

Election Integrity

For election integrity, AB 2655 (Defending Democracy from Deepfake Deception Act) requires large online platforms to block or label deceptive AI-generated content related to elections, while AB 2839 prohibits distribution of materially deceptive election content. However, a federal judge blocked AB 2839 in 2024 on First Amendment grounds, leading most other states to adopt disclosure requirements rather than outright prohibitions. AB 2355 mandates that political advertisements using AI-generated content include clear disclosures.

Colorado: Risk-Based Framework for Algorithmic Discrimination

Colorado has been at the forefront of AI regulation with its landmark Colorado Anti-Discrimination in AI Law (SB 24-205), enacted May 17, 2024. This comprehensive framework focuses on protecting consumers from algorithmic discrimination in high-risk AI systems that make consequential decisions affecting employment, housing, education, healthcare, financial services, government services, insurance, and legal services.

The law imposes a duty of reasonable care on both developers and deployers, requiring steps to protect against discrimination based on protected characteristics. Developers must provide documentation about data sources, limitations, and risk mitigation strategies, while deployers must conduct impact assessments, provide notice to consumers, and establish appeal processes for adverse decisions.

Colorado makes a strong push for use of the National Institute of Standards and Technology (NIST) AI Risk Management Framework for governance. A violation of the law is treated as an unfair or deceptive trade practice under Colorado consumer protection law.

The effective date has been postponed multiple times to allow for further refinement. In August 2025, Governor Jared Polis signed SB 25B-004, delaying implementation from February 1, 2026, to June 30, 2026. Previously, Colorado passed SB 21-169 in 2021, addressing AI use in insurance underwriting and prohibiting insurers from using external consumer data and algorithms in ways that unfairly discriminate based on protected characteristics.

Tennessee: The ELVIS Act Protects Voice and Likeness

Tennessee made history on March 21, 2024, becoming the first state to enact comprehensive legislation protecting musicians and individuals from unauthorized AI voice cloning. The Ensuring Likeness Voice and Image Security (ELVIS) Act, effective July 1, 2024, expands the state's existing right of publicity law to explicitly include voice protection against AI-generated replicas.

The law defines "voice" broadly as "a sound in a medium that is readily identifiable and attributable to a particular individual, regardless of whether the sound contains the actual voice or a simulation of the voice." The ELVIS Act creates both civil and criminal penalties, with violations constituting a Class A misdemeanor punishable by up to one year in jail and fines up to $2,500.

Notably, the Act targets not only those who create unauthorized voice replicas but also technology providers, creating liability for anyone who "distributes, transmits, or otherwise makes available an algorithm, software, tool, or other technology" whose primary purpose is creating unauthorized voice or likeness replicas. This provision potentially subjects AI platform providers to liability, marking a significant expansion in accountability for technology companies.

Utah: First Comprehensive Consumer Protection AI Law

Utah became the first U.S. state to enact major AI consumer protection legislation when Governor Spencer Cox signed SB 149 (the AI Policy Act) on March 13, 2024, taking effect May 1, 2024. The law requires entities using generative AI to interact with consumers in commercial activities to provide clear and conspicuous disclosure.

In 2025, Utah significantly narrowed the law's scope through SB 226, which amended the disclosure requirements to apply only when a consumer directly asks or during high-risk interactions involving the collection of health, financial, or biometric data. The amendment also provides a safe harbor for entities that disclose AI use at the outset of and throughout an interaction. For regulated occupations, individuals providing services must disclose generative AI use only in high-risk interactions.

SB 332 extended the law's effectiveness until July 1, 2027 (originally set to expire in 2025). Additionally, HB 452 introduced specific regulations for AI-supported mental health chatbots, including bans on advertising products during user interactions and prohibitions on sharing users' personal information. The law requires providers to make clear and conspicuous disclosures to users at several points in time, including prior to initially accessing the chatbot, when a user revisits the chatbot after not using it for more than seven days, and when asked by the user.

Texas: Responsible AI Governance

Texas enacted the Responsible Artificial Intelligence Governance Act (TRAIGA) through HB 149, which takes effect January 1, 2026. The law prohibits intentionally developing or deploying AI systems to incite or encourage harm to self or others, engage in criminal activity, infringe on constitutional rights, engage in unlawful discrimination against protected classes in violation of state or federal law, or produce deepfakes or child pornography.

The enacted version of TRAIGA replaced an earlier proposal that was similar to Colorado's AI Act. Texas also established an AI advisory council through SB 1893 and enacted HB 2060, requiring impact assessments for AI systems used in public services.

Employment and Healthcare AI Regulations

Illinois

Illinois has been an early mover in AI regulation, particularly in employment contexts. The Illinois Artificial Intelligence Video Interview Act (820 ILCS 42), effective since January 2020, requires employers who use AI to analyze video interviews to notify applicants before the interview that AI may be used, explain how the AI works and what characteristics it evaluates, obtain consent from applicants, limit sharing of videos, and delete videos upon request within 30 days.

In August 2024, Illinois enacted HB 3773, which amends the Illinois Human Rights Act to regulate AI in employment more broadly. Effective January 1, 2026, the law prohibits employers from using AI that has a discriminatory effect on employees based on protected characteristics and requires notice to employees when AI is used for recruitment, hiring, promotion, and other employment decisions.

Illinois has also acted on AI in health care, becoming the first state to ban the commercial use of AI therapy chatbots that could mislead users about the nature of mental health services being provided.

New York

New York City pioneered regulation of AI in hiring with its Automated Employment Decision Tools (AEDT) law (Local Law 144), which began enforcement July 5, 2023. The law requires employers and employment agencies using AI tools for hiring or promotion decisions to conduct an annual bias audit of the tool by an independent auditor, publish a summary of results on their website, notify job candidates and employees about AI use at least 10 business days before use, and disclose the job qualifications and characteristics being evaluated.

The law applies to computational processes derived from machine learning, statistical modeling, or data analytics that substantially assist or replace human decision-making in employment contexts.
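
For illustration only, the sketch below shows one common way bias-audit metrics like those reported under Local Law 144 can be computed: selection rates per demographic category and impact ratios relative to the most-selected category. The category labels, data, and function names are hypothetical, and an actual audit must follow the methodology in the city's rules and be performed by an independent auditor.

```python
# Illustrative sketch only: computes the selection rates and impact ratios that
# bias audits under rules like NYC's AEDT law typically report. The categories
# and numbers below are hypothetical.

def bias_audit(outcomes: dict) -> dict:
    """outcomes maps a demographic category to (number selected, number of applicants)."""
    selection_rates = {
        category: selected / total
        for category, (selected, total) in outcomes.items()
        if total > 0
    }
    # Impact ratio = a category's selection rate divided by the selection rate
    # of the most-selected category (so the top category scores 1.0).
    top_rate = max(selection_rates.values())
    return {
        category: {
            "selection_rate": round(rate, 3),
            "impact_ratio": round(rate / top_rate, 3),
        }
        for category, rate in selection_rates.items()
    }

if __name__ == "__main__":
    # Hypothetical applicant pools: (selected, total applicants) per category.
    example = {"group_a": (48, 120), "group_b": (30, 100), "group_c": (22, 90)}
    for category, metrics in bias_audit(example).items():
        print(category, metrics)
```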

The RAISE Act (AB 6453/SB 6953) would establish transparency and risk safeguards for frontier AI models. As currently structured, the Act requires advanced AI developers to publish a safety plan they follow, disclose critical safety incidents to New York state government, and refrain from releasing models that fail their own tests. The Act passed the New York legislature and is now in active negotiation with Gov. Kathy Hochul, who has until Dec. 31, 2025, to reach a final version with legislative leaders.

Washington

Washington's SB 5827, enacted in 2023, addresses algorithmic discrimination by prohibiting covered entities from discriminating against individuals through automated decision systems based on protected characteristics. The law requires reasonable efforts to test automated systems for algorithmic discrimination and establishes frameworks for transparency and accountability.

Michigan's Comprehensive Approach

Michigan has taken comprehensive action on AI regulation, particularly focusing on election integrity and protection from deepfakes. The state enacted a four-bill package (HB 5141, HB 5143, HB 5144, and HB 5145) in November 2023, effective Feb. 13, 2024, requiring disclaimers on AI-generated political ads and prohibiting deepfake political content within 90 days of an election unless clearly disclosed. The Michigan Campaign Finance Act (Section 169.259) now requires that any qualified political advertisement created using AI include a clear statement about its AI-generated nature.

Michigan also passed HB 4047 and HB 4048, signed by Gov. Whitmer in August 2025, which criminalize the creation and distribution of nonconsensual intimate AI deepfakes, with enhanced penalties for cases involving extortion, harassment, or profit motives. The laws allow victims to take civil action and establish both criminal penalties and court-ordered restraining orders to prevent further harm.

In October 2024, the Michigan Civil Rights Commission passed a resolution establishing guiding principles for AI use in the state, calling for legislation to prevent algorithmic discrimination, protect privacy, and create a task force to monitor data collection practices.

Additional State Initiatives

Arkansas

Arkansas enacted multiple AI regulations in 2025. HB 1071 amends the state's Publicity Rights Protection Act, originally created to strengthen publicity rights for student athletes, to explicitly cover AI-generated images and voice. HB 1876 establishes ownership rights over content created by generative AI, clarifying that users who provide input to AI tools own the resulting content, provided it doesn't infringe on existing copyrights. HB 1958 requires public entities to develop comprehensive policies regarding the authorized use of AI and automated decision-making technology.

Montana

Montana passed SB 212 in 2025, establishing a "right to compute" that limits government restrictions on private ownership or use of computational resources. The law requires that any restrictions be narrowly tailored to fulfill a compelling government interest. It also mandates that critical infrastructure facilities controlled by AI systems develop risk management policies based on national or international AI risk management frameworks. Montana is one of four states (alongside Arkansas, Pennsylvania, and Utah) that passed digital replica laws in 2025 to protect digital identity and consent.

Other States

Pennsylvania enacted digital replica protections in 2025, joining Arkansas, Montana, and Utah in safeguarding individuals' digital likenesses from unauthorized AI reproduction. Kentucky enacted SB 4, directing the Commonwealth Office of Technology to create policy standards governing AI use.

Maryland passed HB 956 to form a group studying private sector AI and providing legislative recommendations. West Virginia passed HB 3187 to create a task force identifying AI opportunities and best practices for public sector use.

Maine passed the Chatbot Disclosure Act in June 2025, which requires businesses that use AI chatbots to communicate with consumers to notify those consumers they aren’t interacting with a human in cases where a reasonable consumer couldn’t tell the difference. Maine's law became effective Sept. 24, 2025, and is enforceable under the Maine Unfair Trade Practices Act.

Kansas and Oregon have prohibited the use of foreign-owned AI systems (including DeepSeek) on state computers, with Oregon also prohibiting AI systems from posing as licensed medical professionals.

Vermont's S.197 created an AI commission for policy development, and Act 89 addresses AI in insurance underwriting.

Virginia established an AI advisory council through HB 2360 and incorporated AI protections into its Consumer Data Protection Act.

Connecticut enacted SB 1103 regulating insurance AI use and SB 2 for generative AI in education.

Massachusetts passed H.5163 requiring hiring AI disclosures.

AI-Generated Child Sexual Abuse Material

One of the most widespread areas of AI regulation across states involves AI-generated or computer-edited child sexual abuse material (CSAM). As of 2025, 45 states have enacted laws criminalizing AI-generated CSAM, with many of these laws passed in 2024-2025 alone. The National Center for Missing and Exploited Children received 67,000 reports of AI-generated CSAM in all of 2024 and 440,000 in just the first half of 2025, reflecting the exponential growth of this threat.

States with AI CSAM laws include Michigan, Alabama, Arizona, Arkansas, California, Connecticut, Delaware, Florida, Georgia, Hawaii, Idaho, Illinois, Indiana, Iowa, Kansas, Kentucky, Louisiana, Maine, Maryland, Minnesota, Mississippi, Missouri, Montana, Nebraska, Nevada, New Hampshire, New Jersey, New Mexico, New York, North Carolina, North Dakota, Oklahoma, Oregon, Pennsylvania, Rhode Island, South Carolina, South Dakota, Tennessee, Texas, Utah, Virginia, Washington, West Virginia, Wisconsin, and Wyoming.

Only five states (Alaska, Colorado, Massachusetts, Ohio, and Vermont) and the District of Columbia have not yet criminalized AI-generated CSAM.

Political Deepfakes and Election Integrity

As of 2025, 28 states have enacted laws specifically addressing deepfakes used in political communications. These laws generally fall into two categories: disclosure requirements and outright prohibitions. Most states have opted for disclosure requirements due to First Amendment concerns, after a federal judge blocked California's prohibition law (AB 2839) in 2024 on constitutional grounds.

In 2025 alone, 301 deepfake-related bills were introduced across states, with 68 enacted, primarily addressing sexual deepfakes through criminal or civil penalties. This represents one of the most active areas of AI legislation at the state level.

The Federal Moratorium Debate

A particularly contentious proposal that dominated recent AI policy discussions was an attempt to impose a 10-year moratorium on state AI regulations. Initially introduced as part of President Trump's One Big Beautiful Bill budget reconciliation package, the provision would have prevented states from enforcing their own AI laws for a decade.

The moratorium, championed by Republican Senator Ted Cruz of Texas, was designed to prevent what supporters called a "regulatory cacophony" of conflicting state policies. Proponents argued that navigating 50 different regulatory frameworks would stifle innovation, create compliance burdens particularly harmful to smaller companies, and potentially hamper America's competitive position against China in AI development.

Tech industry leaders, including OpenAI CEO Sam Altman, had expressed support for federal preemption, with Altman noting it would be "very difficult to imagine us figuring out how to comply with 50 different sets of regulation."

However, the proposal faced overwhelming bipartisan opposition from state officials. In a remarkable display of unity, a coalition of 17 Republican governors led by Arkansas Governor Sarah Huckabee Sanders sent a letter to congressional leadership opposing the moratorium. The governors argued that "AI is already deeply entrenched in American industry and society; people will be at risk until basic rules ensuring safety and fairness can go into effect."

A bipartisan group of 40 state attorneys general also sent a letter to Congress voicing objections to the proposal as violative of state sovereignty and their efforts to protect consumers. After significant pushback, lawmakers attempted to modify the proposal by shortening the timeframe to five years and exempting certain categories of state laws. Despite these concessions, the revised proposal still faced criticism for containing language that would undermine state laws deemed to place an "undue or disproportionate burden" on AI systems.

Ultimately, in a decisive 99-1 Senate vote, the moratorium was stripped from the budget bill, with even Sen. Cruz joining the overwhelming majority. This outcome represented a significant victory for states' rights advocates but left unresolved the question of how to balance national interests in AI development with legitimate state concerns about protecting citizens.

Trump Administration's December 2025 Executive Order

On December 11, 2025, President Trump signed an executive order aimed at blocking state AI laws, arguing that state-by-state regulation creates a burdensome patchwork that threatens American AI leadership. The order encourages the Attorney General to challenge what it characterizes as onerous and excessive state laws and calls for development of a national AI framework.

The administration specifically criticized laws like Colorado's anti-discrimination requirements, claiming they may force AI models to produce false results to avoid differential impact on protected groups. The executive order states that excessive state regulation thwarts innovation imperatives and that state-by-state regulation "by definition creates a patchwork of 50 different regulatory regimes that makes compliance more challenging, particularly for start-ups."

It also claims that state laws are increasingly responsible for requiring entities to embed ideological bias within models and that some state laws impermissibly regulate beyond state borders, impinging on interstate commerce. The order establishes an AI Litigation Task Force to coordinate federal challenges to state laws.

New York Assemblymember Alex Bores, sponsor of the RAISE Act, argues his bill does not fit the category of onerous regulation. He notes the RAISE Act was largely based on voluntary commitments that AI companies had already made and pledged to follow, simply ensuring those rules stay in law and that companies can't backslide. "I don't think it's onerous to require companies to do the things that they're already saying they're going to do," Bores said in a recent NPR interview. He also observed that while the executive order encourages the attorney general to sue over such laws, Trump appears to be sending a political message rather than one based on the content of the laws.

Several Democratic governors, including Colorado's Jared Polis, Connecticut's Ned Lamont, and New York's Kathy Hochul, have expressed concern about the challenges posed by varying state regulations. As Governor Lamont noted, "I just worry about every state going out and doing their own thing, a patchwork quilt of regulations," and the potential burden this creates for AI development.

Republican leaders have been equally vocal about the issue, though often divided on the approach. Republican Senator Marsha Blackburn of Tennessee has emphasized that states must retain their ability to protect citizens until comprehensive federal legislation is in place, stating, "Until Congress passes federally preemptive legislation like the Kids Online Safety Act and an online privacy framework, we can't block states from making laws that protect their citizens."

The NIST Standards Approach

A more promising federal pathway has emerged in recent congressional hearings, with growing bipartisan support for leveraging the National Institute of Standards and Technology (NIST) to develop technical standards for AI systems. This approach has proven successful in adjacent domains like cybersecurity and privacy, where the NIST Cybersecurity Framework has achieved widespread voluntary adoption across industries without imposing heavy-handed regulation.

NIST has already developed the AI Risk Management Framework (AI RMF 1.0), which provides a common vocabulary and methodology for identifying, assessing, and mitigating AI risks. The framework emphasizes a flexible, context-specific approach that can accommodate rapid technological changes while still establishing important guardrails.

The NIST standards approach offers several advantages. It leverages multi-stakeholder input from industry, academia, civil society, and government. It creates technically sound, practical guidelines. And it balances innovation with protection. For companies already familiar with NIST frameworks for cybersecurity and privacy compliance, particularly in regulated sectors like healthcare, defense, and financial services, this approach provides continuity and integration with existing governance structures.
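
As a rough illustration of how a compliance team might operationalize the framework, the sketch below tracks example activities against the AI RMF's four core functions (Govern, Map, Measure, Manage). The activity names and tracking structure are hypothetical, not taken from the framework itself.

```python
# Minimal sketch: tracking internal activities against the NIST AI RMF's four
# core functions (Govern, Map, Measure, Manage). Activity names are
# hypothetical examples, not text from the framework.
from dataclasses import dataclass, field

@dataclass
class RmfFunction:
    name: str
    activities: dict = field(default_factory=dict)  # activity -> completed?

    def completion(self) -> float:
        if not self.activities:
            return 0.0
        return sum(self.activities.values()) / len(self.activities)

ai_rmf = [
    RmfFunction("Govern", {"Assign AI risk ownership": True, "Adopt an AI acceptable-use policy": False}),
    RmfFunction("Map", {"Inventory AI systems and use contexts": True, "Identify affected groups": True}),
    RmfFunction("Measure", {"Test models for bias and drift": False, "Log performance metrics": True}),
    RmfFunction("Manage", {"Define incident response for AI failures": False, "Review third-party models": False}),
]

for fn in ai_rmf:
    print(f"{fn.name}: {fn.completion():.0%} of tracked activities complete")
```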

The Federalism Debate: National Strategy or Californication?

A growing debate has emerged in Congress around whether the United States needs a unified national approach to AI regulation. A House Judiciary Subcommittee hearing on September 18, 2025, titled "AI at a Crossroads: A Nationwide Strategy or Californication?" examined how the current patchwork of state regulations might impact innovation and impose costs on the AI industry.

The hearing highlighted a pivotal moment for AI regulation in the United States. Without a unified national strategy, fragmented state laws risk hindering innovation and slowing economic progress. The contrast between state-level regulation in the United States and the more unified frameworks adopted by other nations highlights the fundamental tension between fostering innovation and ensuring responsible AI development. The state-by-state approach, sometimes called "Californication" due to California's outsized influence, raises questions about whether companies will effectively be forced to comply with the strictest state standards nationwide.

The proposed Artificial Intelligence Research, Innovation, and Accountability Act of 2024 would establish governance frameworks for high-impact AI systems, requiring risk assessments and management practices. This bipartisan effort signals that despite the state patchwork, there is movement at the federal level toward establishing baseline standards. Many industry stakeholders have implemented voluntary commitments and self-regulation frameworks that complement formal regulations. Leading tech firms have established internal review processes for high-risk AI applications, showing that governance can advance even without formal mandates.

International Approaches to AI Regulation

While U.S. states navigate their regulatory approaches, other nations have taken more coordinated action.

European Union: The AI Act

The EU has established the most comprehensive regulatory framework globally through its AI Act, which became legally binding on August 1, 2024. The legislation takes a risk-based approach, categorizing AI products and uses into four distinct risk categories with corresponding requirements and prohibitions.

Unacceptable Risk

The EU AI Act bans several AI applications outright, including cognitive behavioral manipulation of people or specific vulnerable groups (such as voice-activated toys that could encourage dangerous behavior in children), social scoring that classifies people based on behavior, socio-economic status, or personal characteristics, biometric identification and categorization of people, and real-time remote biometric identification systems such as facial recognition in public spaces.

High Risk

This category covers AI systems that negatively impact safety or fundamental rights. It breaks into two subcategories: AI systems used in products covered by the EU's product safety legislation (toys, aviation, medical devices, and lifts) and AI systems in specific areas that must be registered in an EU database (education, law enforcement, critical infrastructure, and related areas).

Limited Risk

Some AI products and use cases fall into this category and are subject to transparency requirements. Systems like ChatGPT must disclose that content was generated by AI, be designed to prevent the generation of illegal content, and publish summaries of copyrighted data used for training.

Minimal or No Risk

Most AI systems fall into this category, meaning they have no further legal obligations under the Act.

Implementation Timeline

The EU AI Act requirements take effect gradually through a phased rollout. Key milestones include:

  • February 2, 2025: Prohibitions on certain AI systems and requirements on AI literacy begin to apply.
  • August 2, 2025: Rules begin to apply for notified bodies, general-purpose AI models, governance, confidentiality, and penalties.
  • August 2, 2026: The remainder of the AI Act applies, except for some high-risk AI systems with specific qualifications.
  • August 2, 2027: All systems, without exception, must meet the obligations of the AI Act.

Penalties for Noncompliance

The EU AI Act establishes substantial fines for violations:

  • Noncompliance with the prohibited AI practices referred to in Article 5: administrative fines of up to EUR 35,000,000 or up to 7 percent of worldwide annual turnover, whichever is higher.
  • Noncompliance with any other provisions of the Act: fines of up to EUR 15,000,000 or up to 3 percent of worldwide annual turnover, whichever is higher.
  • Supplying incomplete, incorrect, or misleading information to notified bodies or national competent authorities: fines of up to EUR 7,500,000 or up to 1 percent of worldwide annual turnover, whichever is higher.
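
To make the tier structure concrete, here is a minimal sketch that computes the maximum possible fine under each tier for a company with a hypothetical worldwide annual turnover. Actual penalties are set by regulators up to these ceilings; nothing here reflects how a fine would be decided in practice.

```python
# Illustrative only: fine ceilings under the EU AI Act's three penalty tiers.
# Each ceiling is the higher of a fixed amount or a percentage of worldwide
# annual turnover. The turnover figure used below is hypothetical.

TIERS = {
    "Prohibited practices (Article 5)": (35_000_000, 0.07),
    "Other obligations under the Act": (15_000_000, 0.03),
    "Misleading information to authorities": (7_500_000, 0.01),
}

def max_fines(worldwide_annual_turnover_eur: float) -> dict:
    return {
        tier: max(fixed_amount, pct * worldwide_annual_turnover_eur)
        for tier, (fixed_amount, pct) in TIERS.items()
    }

if __name__ == "__main__":
    turnover = 2_000_000_000  # EUR 2 billion, a hypothetical example
    for tier, ceiling in max_fines(turnover).items():
        print(f"{tier}: up to EUR {ceiling:,.0f}")
```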

ISO 42001 and Compliance

The EU AI Act emphasizes the importance of ongoing governance frameworks for AI risk management, transparency, and compliance. ISO 42001 has emerged as essential for EU AI Act compliance, establishing a systematic, repeatable process for AI compliance and providing an adaptable framework that evolves alongside regulatory requirements. While not an approved harmonized standard for AI Act conformity, it provides the foundation companies need to be successful when the final quality management system conformity standard is released.

United Kingdom: The UK has pursued a more flexible approach with its National AI Strategy, followed by a policy white paper titled "AI Regulation: A Pro-Innovation Approach." This framework is designed to be agile and iterative, recognizing the rapid evolution of AI technologies. Unlike the EU's comprehensive legislation, the UK emphasizes sector-specific regulation through existing regulators rather than creating new AI-specific laws.

Canada: Canada has proposed the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27, focusing on managing risks associated with high-impact AI systems while supporting innovation. The framework takes a risk-based approach similar to the EU's but with lighter requirements for lower-risk systems.

China: China has taken a more assertive regulatory stance with requirements for security assessments and algorithm registrations, particularly for generative AI. The country requires AI service providers to register algorithms with the government and conduct security assessments before deployment.

Japan: Japan has pursued a less restrictive approach focused on voluntary guidelines and principles, emphasizing human-centric AI development. The government encourages industry self-regulation while monitoring developments for potential future intervention.

The contrast between the EU's comprehensive, prescriptive approach with substantial penalties and the U.S. state-by-state patchwork highlights fundamental differences in regulatory philosophy. The EU AI Act creates a unified framework across 27 member states with clear categories, timelines, and enforcement mechanisms, while U.S. states continue experimenting with different approaches tailored to local concerns and industries.

Key Statistics and Trends

The scale and pace of AI legislation at the state level has been unprecedented:

  • Over 1,080 AI-related bills were introduced across states in 2024-2025
  • 113 AI laws were enacted in 2024 alone
  • 73 new AI laws were passed in 2025 across 27 states
  • 46 states have enacted laws addressing intimate or sexual deepfakes
  • 45 states have criminalized AI-generated CSAM
  • 28 states have political deepfake regulations
  • 34 states are actively studying AI through task forces or commissions

Common Regulatory Themes

Most state laws emphasize transparency requirements, whether focused on customer interactions in Utah, comprehensive documentation in Colorado, or content labeling in California. Risk-based frameworks are emerging as the dominant approach, with Colorado leading by imposing stricter requirements on high-risk AI systems while allowing lighter regulation for lower-risk applications. Consumer protection drives most legislation, reflecting concerns about algorithmic discrimination, privacy violations, and deceptive practices.

State Leadership

With California leading in total AI laws enacted (13 in 2025 alone), followed by Texas (8), Montana (6), Utah (5), and Arkansas (5), the regulatory landscape continues to evolve rapidly. Four states (Nevada, Montana, North Dakota, and Texas) that were not in legislative session in 2024 are expected to consider significant AI legislation in 2025-2026.

What This Means for Michigan Businesses

The fragmented regulatory environment creates significant challenges for firms operating across multiple states. Varying definitions of high-risk systems, different disclosure requirements, and inconsistent enforcement mechanisms complicate compliance. Companies developing or deploying AI systems need adaptable frameworks that can accommodate different state standards while anticipating potential federal preemption.

For Michigan-based companies, particularly those in defense contracting and healthcare sectors, several immediate concerns warrant attention:

Colorado's requirements apply if you operate in that state or serve Colorado customers. High-risk AI systems making consequential decisions about employment, housing, healthcare, or financial services require impact assessments and documentation.

California's transparency mandates apply if you do business in California or operate AI systems with significant user bases that are accessible to California residents. If your AI-generated content reaches California users, disclosure requirements may apply.

Michigan's own regulations focus primarily on election integrity and deepfake protections through HB 5141, HB 5143, HB 5144, and HB 5145 (political deepfakes) and HB 4047 and HB 4048 (intimate deepfakes). The state's Civil Rights Commission has signaled interest in broader algorithmic discrimination protections similar to Colorado's approach.

CMMC compliance intersects with AI governance. Defense contractors already familiar with NIST frameworks will find that the NIST AI Risk Management Framework integrates naturally with existing cybersecurity and compliance programs.

EU AI Act considerations apply if you serve European customers, have EU operations, or partner with NATO allies. Defense contractors and healthcare providers with international reach should assess whether their AI systems fall under EU jurisdiction. The Act's risk-based categories and phased compliance milestones, some of which already applied in August 2025, mean planning should begin now.

Whether federal courts will allow the Trump administration to preempt state laws remains uncertain and will likely be determined through litigation in 2026 and beyond.

The Path Forward

As the debate continues, effective AI governance requires balancing innovation with accountability, providing appropriate protections without stifling technological progress. With state legislatures scheduled to reconvene in early 2026 and hundreds more bills expected, the AI regulatory landscape will continue to evolve rapidly in the coming years.

Companies should monitor developments in states where they operate, assess their AI systems against emerging risk frameworks, implement transparency and disclosure practices that meet or exceed current requirements, and prepare for potential federal action that could either preempt state laws or establish baseline national standards.

The tension between state innovation in regulation and federal interest in uniformity will likely define AI governance debates throughout 2026. Until comprehensive federal legislation emerges, states will continue serving as laboratories of democracy, testing different approaches to managing AI's risks while fostering its benefits.

International Considerations for U.S. Businesses

For companies operating internationally or serving customers in multiple jurisdictions, the EU AI Act represents a critical compliance consideration. The Act's extraterritorial reach means that U.S. companies offering AI products or services to EU customers must comply with EU requirements, regardless of where the company is based. This is particularly relevant for defense contractors with NATO partners, healthcare providers serving international patients, and technology companies with European operations.

The EU's phased implementation timeline, with major milestones in 2025, 2026, and 2027, means businesses should begin compliance planning now rather than waiting for U.S. federal action. Companies already familiar with GDPR compliance will find some conceptual overlap, though the AI Act introduces entirely new categories of requirements around risk management, transparency, and prohibited uses.

For assistance navigating AI compliance requirements and developing governance frameworks aligned with both state regulations and cybersecurity best practices like NIST and CMMC, contact STACK Cybersecurity.

Cybersecurity Consultation

Do you know if your company is secure against cyber threats? Do you have the right security policies, tools, and practices in place to protect your data, reputation, and productivity? If you're not sure, it's time for a cybersecurity risk assessment (CSRA). STACK Cyber's CSRA will meticulously identify and evaluate vulnerabilities and risks within your IT environment. We'll assess your network, systems, applications, and devices, and provide you a detailed report and action plan to improve your security posture. Don't wait until it's too late.
