Comprehensive List of State AI Laws
Sept. 21, 2025
Across the United States, state legislatures have moved decisively to fill the regulatory void left by limited federal action on artificial intelligence. With over 1,080 AI-related bills introduced in 2024-2025 and 186 laws enacted, these state-level initiatives reveal several distinct approaches to AI governance, with frameworks that range from comprehensive to narrowly targeted regulations.
Existential Risk Concerns
The regulatory landscape has been influenced by growing concerns about AI's potential long-term risks. In May 2023, over 350 AI executives, researchers, and engineers signed a statement warning that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Signers included leaders from OpenAI, Google DeepMind, and Anthropic. This unprecedented warning from the very architects of advanced AI systems has added urgency to regulatory discussions at both state and federal levels.
More recently, a 2024 report commissioned by the U.S. State Department concluded that advanced AI systems could, in a worst-case scenario, "pose an extinction-level threat to the human species," based on interviews with executives from leading AI companies, cybersecurity researchers, and national security officials. These high-profile warnings have accelerated debate about appropriate governance frameworks to address both near-term harms and long-term safety concerns.
State Leadership in AI Regulation
California
California remains a leading force in AI regulation; Governor Newsom signed 17 AI-related bills into law in 2024. While the most comprehensive measure, SB 1047 (the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act), was vetoed by Governor Newsom in September 2024 despite passing both chambers of the state legislature, California has otherwise taken a targeted approach, focusing on specific areas of AI regulation.
The California AI Transparency Act (SB 942), effective January 1, 2026, requires providers of generative AI systems that are publicly accessible in California and have more than one million monthly visitors or users to disclose when content has been generated or modified by AI. The Act establishes requirements for free AI detection tools and embedded content disclosures, with penalties of $5,000 per violation per day for non-compliance.
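To make the disclosure mechanics concrete, here is a minimal sketch of how a provider might embed a machine-readable "latent disclosure" in image metadata and recover it with a companion detection tool. This is an illustration only, not SB 942's technical specification: the `ai_disclosure` key, the field names, and the `ExampleAI` provider are hypothetical, and the sketch assumes the Pillow imaging library is installed.

```python
# A hedged sketch: embed and recover an AI-provenance disclosure in PNG
# metadata. Keys and field names are illustrative, not statutory.
import json
from datetime import datetime, timezone

from PIL import Image
from PIL.PngImagePlugin import PngInfo


def save_with_ai_disclosure(image: Image.Image, path: str, provider: str) -> None:
    """Embed a machine-readable disclosure in PNG metadata before saving."""
    disclosure = {
        "ai_generated": True,  # content was created or altered by AI
        "provider": provider,  # name of the covered provider (hypothetical)
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    meta = PngInfo()
    meta.add_text("ai_disclosure", json.dumps(disclosure))
    image.save(path, pnginfo=meta)


def read_ai_disclosure(path: str) -> dict | None:
    """A toy 'detection tool': recover the embedded disclosure, if any."""
    with Image.open(path) as img:
        raw = img.info.get("ai_disclosure")
    return json.loads(raw) if raw else None


if __name__ == "__main__":
    img = Image.new("RGB", (64, 64))  # stand-in for generated content
    save_with_ai_disclosure(img, "out.png", provider="ExampleAI")
    print(read_ai_disclosure("out.png"))
```

Real deployments would more likely build on a provenance standard such as C2PA, since plain metadata is easily stripped; the point here is only the shape of a disclose-and-detect pair.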
California has also enacted several laws addressing AI-generated deepfakes and explicit content. SB 926 criminalizes the creation or distribution of AI-generated sexually explicit images with the intent to cause serious emotional distress. SB 981 requires social media platforms to establish reporting mechanisms for users to flag sexually explicit deepfakes of themselves, to temporarily block such content during investigation, and to permanently remove it if confirmed. AB 1831 expands existing child pornography laws to cover AI-generated content. AB 1836 protects digital replicas of deceased performers from unauthorized AI reproduction.
For election integrity, AB 2655 (the Defending Democracy from Deepfake Deception Act) requires large online platforms to block or label deceptive AI-generated content related to elections, while AB 2839 prohibits the distribution of materially deceptive election content. AB 2355 mandates that political advertisements using AI-generated content include clear disclosures.
Colorado
Colorado has been at the forefront of AI regulation with its landmark Colorado Anti-Discrimination in AI Law (SB 24-205), enacted on May 17, 2024. This comprehensive law focuses on protecting consumers from algorithmic discrimination in high-risk AI systems that make consequential decisions affecting areas such as employment, housing, education, and healthcare. The law's effective date has been postponed from February 1, 2026 to June 30, 2026 to allow for further refinement.
The law imposes a duty of reasonable care on both developers and deployers of high-risk AI systems, requiring them to take steps to protect against algorithmic discrimination based on protected characteristics. Developers must provide documentation about data sources, limitations, and risk mitigation strategies, while deployers must conduct impact assessments, provide notice to consumers, and establish appeal processes for adverse decisions.
Previously, Colorado passed SB 21-169 in 2021, which specifically addresses the use of AI in insurance underwriting, prohibiting insurers from using external consumer data and algorithms in ways that unfairly discriminate based on protected characteristics.
Tennessee - The ELVIS Act
Tennessee made history on March 21, 2024, by becoming the first state to enact comprehensive legislation protecting musicians and other individuals from unauthorized AI voice cloning. The Ensuring Likeness Voice and Image Security (ELVIS) Act, which took effect on July 1, 2024, expands the state's existing right of publicity law to explicitly include voice protection against AI-generated replicas.
The ELVIS Act defines "voice" broadly as "a sound in a medium that is readily identifiable and attributable to a particular individual, regardless of whether the sound contains the actual voice or a simulation of the voice." The law creates both civil and criminal penalties, with violations constituting a Class A misdemeanor punishable by up to 11 months and 29 days in jail and a fine of up to $2,500.
Notably, the Act targets not only those who create unauthorized voice replicas but also technology providers, creating liability for anyone who "distributes, transmits, or otherwise makes available an algorithm, software, tool, or other technology" whose primary purpose is creating unauthorized voice or likeness replicas. This provision potentially subjects AI platform providers to liability, marking a significant expansion in accountability for AI technology companies.
Utah - First Comprehensive Consumer Protection AI Law
Utah became the first U.S. state to enact major AI consumer protection legislation when Governor Cox signed SB 149 (the AI Policy Act) on March 13, 2024; the law took effect May 1, 2024. It requires businesses that use generative AI to interact with consumers in commercial activities to clearly and conspicuously disclose that the consumer is interacting with AI rather than a human.
In 2025, Utah expanded its AI regulation through several amendments. SB 226 narrowed the disclosure requirements to apply only when directly asked by consumers or during "high-risk" interactions involving health, financial, or biometric data collection. SB 332 extended the law's effectiveness until July 1, 2027 (originally set to expire in 2025). Additionally, HB 452 introduced specific regulations for AI-supported mental health chatbots, including bans on advertising products during user interactions and prohibitions on sharing users' personal information.
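As a toy illustration of SB 226's narrowed trigger, the helper below encodes the two disclosure conditions described above. The topic categories and function name are our own simplification, not statutory language.

```python
# A toy decision helper reflecting Utah SB 226's narrowed disclosure trigger:
# disclose when the consumer asks directly, or when the interaction touches a
# "high-risk" topic. Categories are an illustrative simplification.
HIGH_RISK_TOPICS = {"health", "financial", "biometric"}


def disclosure_required(user_asked_if_ai: bool, topics: set[str]) -> bool:
    """Return True if the bot must clearly disclose that it is AI."""
    return user_asked_if_ai or bool(topics & HIGH_RISK_TOPICS)


assert disclosure_required(True, set())             # asked directly -> disclose
assert disclosure_required(False, {"health"})       # high-risk topic -> disclose
assert not disclosure_required(False, {"weather"})  # otherwise -> no trigger
```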
Illinois
Illinois has been an early mover in AI regulation, particularly in employment contexts. The Illinois Artificial Intelligence Video Interview Act (820 ILCS 42), effective since January 2020, requires employers who use AI to analyze video interviews to: (1) notify applicants before the interview that AI may be used; (2) explain how the AI works and what characteristics it evaluates; (3) obtain consent from applicants; (4) limit sharing of videos; and (5) delete videos upon request within 30 days.
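The statute's five obligations map naturally onto a simple record-keeping workflow. The sketch below is a hypothetical compliance data structure, not anything prescribed by the Act; the class and field names are our own.

```python
# A minimal sketch of the 820 ILCS 42 workflow: notice, explanation, consent,
# restricted sharing, and deletion within 30 days of a request.
from dataclasses import dataclass, field
from datetime import datetime, timedelta


@dataclass
class InterviewRecord:
    applicant: str
    notified: bool = False   # (1) applicant told AI may be used
    explained: bool = False  # (2) how the AI works, what it evaluates
    consented: bool = False  # (3) applicant consent obtained
    shared_with: list[str] = field(default_factory=list)  # (4) limit sharing
    deletion_requested_at: datetime | None = None

    def may_analyze(self) -> bool:
        """AI analysis is permitted only after notice, explanation, consent."""
        return self.notified and self.explained and self.consented

    def deletion_deadline(self) -> datetime | None:
        """(5) the video must be destroyed within 30 days of a request."""
        if self.deletion_requested_at is None:
            return None
        return self.deletion_requested_at + timedelta(days=30)
```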
In August 2024, Illinois enacted HB 3773, which amends the Illinois Human Rights Act to regulate AI in employment more broadly. Effective January 1, 2026, the law prohibits employers from using AI that has a discriminatory effect on employees based on protected characteristics and requires notice to employees when AI is used for recruitment, hiring, promotion, and other employment decisions.
Illinois has also taken action on AI in healthcare, becoming the first state to ban the commercial use of AI therapy chatbots that could mislead users about the nature of mental health services being provided.
New York
New York City has pioneered regulation of AI in hiring with its Automated Employment Decision Tools (AEDT) law (Local Law 144), which began enforcement on July 5, 2023. The law requires employers and employment agencies using AI tools for hiring or promotion decisions to:
- Conduct an annual bias audit of the tool by an independent auditor (see the sketch after this list for the core calculation)
- Publish a summary of the results on their website
- Notify job candidates and employees about the use of AI tools at least 10 business days before use
- Disclose the job qualifications and characteristics being evaluated
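The central quantity these audits report is an "impact ratio." The sketch below is a simplified version of the published DCWP methodology for binary selection outcomes: each category's selection rate is divided by the highest category selection rate. The category labels and counts are hypothetical, and the four-fifths benchmark shown is the conventional employment-discrimination rule of thumb rather than a threshold set by Local Law 144, which requires reporting the ratios but sets no pass/fail line.

```python
# Simplified Local Law 144-style impact ratios for binary selection outcomes.
# Data and category labels are illustrative only.
def impact_ratios(selected: dict[str, int], assessed: dict[str, int]) -> dict[str, float]:
    """Each category's selection rate divided by the highest selection rate."""
    rates = {g: selected[g] / assessed[g] for g in assessed}
    best = max(rates.values())
    return {g: rates[g] / best for g in rates}


# Example: hypothetical screening outcomes by demographic category.
selected = {"group_a": 120, "group_b": 75, "group_c": 60}
assessed = {"group_a": 300, "group_b": 250, "group_c": 200}

for group, ratio in impact_ratios(selected, assessed).items():
    flag = "" if ratio >= 0.8 else "  <- below the conventional 4/5 benchmark"
    print(f"{group}: {ratio:.2f}{flag}")
```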
The law applies to computational processes derived from machine learning, statistical modeling, or data analytics that substantially assist or replace human decision-making in employment contexts. The state legislature is also considering the RAISE Act (AB 6453/SB 6953), which would establish transparency and risk safeguards for frontier AI models.
Washington
Washington's SB 5827, enacted in 2023, addresses algorithmic discrimination by prohibiting covered entities from discriminating against individuals through automated decision systems on the basis of protected characteristics. The law requires reasonable efforts to test automated systems for algorithmic discrimination and establishes frameworks for transparency and accountability.
Emerging State AI Regulations
Michigan
Michigan has taken comprehensive action on AI regulation, particularly focusing on election integrity and protection from deepfakes. The state enacted HB 5440 and HB 5441, requiring disclaimers on AI-generated political ads and prohibiting deepfake political content within 90 days of an election unless clearly disclosed. The Michigan Campaign Finance Act (Section 169.259) now requires that any qualified political advertisement created using AI must include a clear statement about its AI-generated nature.
Michigan has also passed HB 5569 and HB 5570, which criminalize the creation and distribution of nonconsensual intimate AI deepfakes, with enhanced penalties for cases involving extortion, harassment, or profit motives. The state legislature is currently considering HB 4047 and HB 4048 (the Protection from Intimate Deep Fakes Act), which would establish both civil and criminal penalties for harmful deepfake creation.
In October 2024, the Michigan Civil Rights Commission passed a resolution establishing guiding principles for AI use in the state, calling for legislation to prevent algorithmic discrimination, protect privacy, and create a task force to monitor data collection practices.
Arkansas
Arkansas enacted multiple AI regulations in 2025. HB 1071 amends the state's Publicity Rights Protection Act, originally created to strengthen publicity rights for student-athletes, to explicitly cover AI-generated images and voices. HB 1876 establishes ownership rights over content generated by generative AI, clarifying that individuals who provide the input to an AI tool own the resulting content, provided it does not infringe existing copyrights. HB 1958 requires public entities to develop comprehensive policies governing the authorized use of AI and automated decision-making technology.
Montana
Montana passed SB 212 in 2025, establishing a "right to compute" that limits government restrictions on private ownership and use of computational resources; any such restriction must be narrowly tailored to fulfill a compelling government interest. The law also requires operators of critical infrastructure facilities controlled by AI systems to adopt risk management policies based on national or international AI risk management frameworks. Montana is one of four states (alongside Arkansas, Pennsylvania, and Utah) that passed digital replica laws in 2025 to protect digital identity and consent.
Other State Initiatives
Several other states have taken significant steps toward AI regulation:
- Pennsylvania enacted digital replica protections in 2025, joining Arkansas, Montana, and Utah in safeguarding individuals' digital likenesses from unauthorized AI reproduction.
- Kentucky enacted SB 4, directing the Commonwealth Office of Technology to create policy standards governing AI use.
- Maryland enacted HB 956, establishing a working group to study private sector AI use and make recommendations to the General Assembly.
- West Virginia enacted HB 3187, creating a task force to identify AI-related economic opportunities and develop best practices for public sector AI use.
- Kansas and Oregon have prohibited the use of foreign-owned AI systems (including DeepSeek) on state computers, with Oregon also prohibiting AI systems from posing as licensed medical professionals.
- Texas established an AI advisory council and required state agencies to inventory their automated decision systems through HB 2060 (2023).
- Vermont's S.197 created an AI commission for policy development, and Act 89 addresses AI in insurance underwriting.
- Virginia established an AI advisory council through HB 2360 and incorporated AI protections into its Consumer Data Protection Act.
- Connecticut enacted SB 1103 regulating insurance AI use and SB 2 for generative AI in education.
- Massachusetts passed H.5163 requiring hiring AI disclosures.
AI-Generated Child Sexual Abuse Material Laws
One of the most widespread areas of AI regulation across states involves AI-generated or computer-edited child sexual abuse material (CSAM). As of 2025, 45 states have enacted laws criminalizing AI-generated CSAM, with many of these laws passed in 2024-2025 alone. The National Center for Missing and Exploited Children reported receiving 67,000 reports of AI-generated CSAM in all of 2024, and 485,000 in just the first half of 2025—a 624% increase.
States with AI CSAM laws include Alabama, Arizona, Arkansas, California, Connecticut, Delaware, Florida, Georgia, Hawaii, Idaho, Illinois, Indiana, Iowa, Kansas, Kentucky, Louisiana, Maine, Maryland, Michigan, Minnesota, Mississippi, Missouri, Montana, Nebraska, Nevada, New Hampshire, New Jersey, New Mexico, New York, North Carolina, North Dakota, Oklahoma, Oregon, Pennsylvania, Rhode Island, South Carolina, South Dakota, Tennessee, Texas, Utah, Virginia, Washington, West Virginia, Wisconsin, and Wyoming.
Only five states (Alaska, Colorado, Massachusetts, Ohio, and Vermont) and the District of Columbia have not yet criminalized AI-generated CSAM.
Political Deepfakes and Election Integrity
As of 2025, 28 states have enacted laws specifically addressing deepfakes used in political communications. These laws generally fall into two categories: disclosure requirements and outright prohibitions. Most states have opted for disclosure requirements because of First Amendment concerns, concerns borne out in 2024 when a federal judge blocked California's prohibition law (AB 2839) on constitutional grounds.
In 2025 alone, 301 deepfake-related bills were introduced across states, with 68 enacted—primarily addressing sexual deepfakes through criminal or civil penalties. This represents one of the most active areas of AI legislation at the state level.
Federal Efforts and Proposed Legislation
At the federal level, the proposed Artificial Intelligence Research, Innovation, and Accountability Act would establish governance frameworks for high-impact AI systems, requiring risk assessments and risk management practices. This bipartisan effort signals that despite the state patchwork, there is movement in Congress toward establishing baseline standards.
Many industry stakeholders have implemented voluntary commitments and self-regulation frameworks that complement formal regulations. Leading tech firms have established internal review processes for high-risk AI applications, showing that governance can advance even without formal mandates.
The Federal Moratorium Debate
A particularly contentious proposal that dominated recent AI policy discussions was an attempt to impose a 10-year moratorium on state AI regulations. Initially introduced as part of President Trump's One Big Beautiful Bill budget reconciliation package, the provision would have prevented states from enforcing their own AI laws for a decade.
The moratorium, championed by Republican Senator Ted Cruz of Texas, was designed to prevent what supporters called a "regulatory cacophony" of conflicting state policies. Proponents argued that navigating 50 different regulatory frameworks would stifle innovation, create compliance burdens particularly harmful to smaller companies, and potentially hamper America's competitive position against China in AI development.
Tech industry leaders, including OpenAI CEO Sam Altman, had expressed support for federal preemption, with Altman noting it would be "very difficult to imagine us figuring out how to comply with 50 different sets of regulation."
However, the proposal faced overwhelming bipartisan opposition from state officials. In a remarkable display of unity, a coalition of 17 Republican governors led by Arkansas Governor Sarah Huckabee Sanders sent a letter to congressional leadership opposing the moratorium. The governors argued that "AI is already deeply entrenched in American industry and society; people will be at risk until basic rules ensuring safety and fairness can go into effect."
After significant pushback, lawmakers attempted to modify the proposal by shortening the timeframe to five years and exempting certain categories of state laws. Despite these concessions, the revised proposal still faced criticism for containing language that would undermine state laws deemed to place an "undue or disproportionate burden" on AI systems.
Ultimately, in a decisive 99-1 Senate vote, the moratorium was stripped from the budget bill, with even Sen. Cruz joining the overwhelming majority. This outcome represented a significant victory for states' rights advocates but leaves unresolved the question of how to balance national interests in AI development with legitimate state concerns about protecting citizens.
The NIST Standards Approach
A more promising federal pathway has emerged in recent congressional hearings, with growing bipartisan support for leveraging the National Institute of Standards and Technology (NIST) to develop technical standards for AI systems. This approach has proven successful in adjacent domains like cybersecurity and privacy, where the NIST Cybersecurity Framework has achieved widespread voluntary adoption across industries without imposing heavy-handed regulation.
NIST has already developed the AI Risk Management Framework (AI RMF 1.0), which provides a common vocabulary and methodology for identifying, assessing, and mitigating AI risks. The framework emphasizes a flexible, context-specific approach that can accommodate rapid technological changes while still establishing important guardrails.
The NIST standards approach offers several advantages: it leverages multi-stakeholder input from industry, academia, civil society, and government; creates technically sound, practical guidelines; and balances innovation with protection. For organizations already familiar with NIST frameworks for cybersecurity and privacy compliance—particularly in regulated sectors like healthcare, defense, and financial services—this approach provides continuity and integration with existing governance structures.
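For teams exploring what AI RMF adoption looks like in practice, the sketch below shows a bare-bones risk register organized around the framework's four core functions (Govern, Map, Measure, Manage). The register structure, field names, and example entry are our own illustration, not an official NIST artifact.

```python
# A bare-bones AI risk register keyed to the AI RMF's four core functions.
# Structure and example content are illustrative, not a NIST template.
from dataclasses import dataclass
from enum import Enum


class RmfFunction(Enum):
    GOVERN = "govern"    # policies, accountability, culture
    MAP = "map"          # context and risk identification
    MEASURE = "measure"  # analysis, assessment, tracking
    MANAGE = "manage"    # prioritization and response


@dataclass
class RiskEntry:
    system: str
    description: str
    function: RmfFunction
    owner: str
    mitigations: list[str]


register = [
    RiskEntry(
        system="resume-screener",
        description="Potential disparate impact on protected groups",
        function=RmfFunction.MEASURE,
        owner="ml-governance-team",
        mitigations=["annual third-party bias audit", "candidate notice"],
    ),
]

for entry in register:
    print(f"[{entry.function.value}] {entry.system}: {entry.description}")
```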
The Federalism Debate: National Strategy or 'Californication?'
A growing debate has emerged in Congress around whether the U.S. needs a unified national approach to AI regulation. A House Judiciary Subcommittee hearing on Sept. 18, 2025, titled "AI at a Crossroads: A Nationwide Strategy or Californication?" examined how the current patchwork of state regulations might impact innovation and impose costs on the AI industry.
The hearing highlighted a pivotal moment for AI regulation in the United States. Proponents of a national framework argued that, without a unified strategy, fragmented state laws risk hindering innovation and slowing economic progress. As the debate evolves, the hearing's outcomes could shape the direction of future technology and policy decisions across the country.
Several Democratic governors, including Colorado's Jared Polis, Connecticut's Ned Lamont, and New York's Kathy Hochul, have expressed concern about the challenges posed by varying state regulations. As Governor Lamont noted, "I just worry about every state going out and doing their own thing, a patchwork quilt of regulations," and the potential burden this creates for AI development.
Republican leaders have been equally vocal about the issue, though often divided on the approach. Republican Senator Marsha Blackburn of Tennessee has emphasized that states must retain their ability to protect citizens until comprehensive federal legislation is in place, stating, "Until Congress passes federally preemptive legislation like the Kids Online Safety Act and an online privacy framework, we can't block states from making laws that protect their citizens."
International Approaches
While U.S. states navigate their regulatory approaches, other nations have taken more coordinated action:
- European Union: The EU has established the most comprehensive regulatory framework globally through its AI Act, which takes a risk-based approach. The legislation categorizes AI systems based on risk levels, from minimal to unacceptable risk, with corresponding requirements and prohibitions.
- United Kingdom: The UK has pursued a more flexible approach with its National AI Strategy, followed by a policy white paper titled "AI Regulation: A Pro-Innovation Approach." This framework is designed to be agile and iterative, recognizing the rapid evolution of AI technologies.
- Canada: Canada proposed the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27, focused on managing risks associated with high-impact AI systems while supporting innovation; the bill had not been enacted when Parliament was prorogued in early 2025.
- China: China has taken a more assertive regulatory stance with requirements for security assessments and algorithm registrations, particularly for generative AI.
- Japan: Japan has pursued a less restrictive approach focused on voluntary guidelines and principles, emphasizing human-centric AI development.
Key Statistics and Trends
The scale and pace of AI legislation at the state level has been unprecedented:
- Over 1,080 AI-related bills were introduced across states in 2024-2025
- 113 AI laws were enacted in 2024 alone
- 73 new AI laws were passed in 2025 across 27 states
- 46 states have enacted laws addressing intimate/sexual deepfakes
- 45 states have criminalized AI-generated CSAM
- 28 states have political deepfake regulations
- 34 states are actively studying AI through task forces or commissions
The Path Forward
The contrast between state-level regulation in the U.S. and the more unified frameworks adopted by other nations highlights the fundamental tension between fostering innovation and ensuring responsible AI development. The state-by-state approach, sometimes called "Californication" because of California's outsized influence, raises the question of whether companies will effectively be forced to comply with the strictest state standards nationwide.
With California leading in total AI laws enacted (13 in 2025 alone), followed by Texas (8), Montana (6), Utah (5), and Arkansas (5), the regulatory landscape continues to evolve rapidly. Four states (Montana, Nevada, North Dakota, and Texas) hold no regular legislative session in even years, which concentrated their AI lawmaking in 2025 and helps explain the strong Texas and Montana totals.
As the debate continues, one thing is clear: effective AI governance requires balancing innovation with accountability, providing appropriate protections without stifling technological progress. With state legislatures scheduled to reconvene in early 2026 and hundreds more bills expected, the AI regulatory landscape will continue to evolve rapidly in the coming years.