
Colorado AI Act Compliance Guide

Jan. 30, 2026

Colorado capitol building representing Colorado AI Act regulation

Last Updated: Jan. 31, 2026

The Colorado Artificial Intelligence Act (SB 24-205) represents the first comprehensive state-level AI regulation in the United States. Signed by Gov. Jared Polis on May 17, 2024, the law establishes sweeping consumer protection requirements for developers and deployers of high-risk AI systems. The effective date has been delayed to June 30, 2026, following a failed special legislative session in August 2025, giving the Colorado General Assembly time to consider amendments during its 2026 regular session.

Unlike California's targeted approach through multiple narrow bills, Colorado took a comprehensive risk-based framework modeled partly on the EU AI Act. The law focuses on preventing algorithmic discrimination in AI systems that make or substantially influence "consequential decisions" affecting Colorado consumers.

Current Status and Legislative Uncertainty

The Colorado AI Act has faced significant political headwinds since its passage. Gov. Polis signed the bill reluctantly, immediately urging lawmakers to revise it before implementation. In August 2025, a six-day special legislative session produced four competing amendment bills but ultimately collapsed after technology companies objected to proposed liability provisions. Senate Majority Leader Robert Rodriguez's "AI Sunshine Act" (SB 4), which would have narrowed the law while maintaining core consumer protections, failed to gain consensus.

On Aug. 28, 2025, Gov. Polis signed SB 25B-004, which delays the effective date from Feb. 1, 2026, to June 30, 2026. This extension allows the 2026 regular legislative session, which began in January 2026, to consider substantive amendments. Potential changes include narrowing the definition of "high-risk AI system," reducing deployer obligations, expanding exemptions, or shifting more responsibility to developers.

Despite the delay, the law's core framework remains intact. The American Bar Association noted in November 2025 that "nothing fundamental changed" despite intense lobbying by over 150 industry representatives during the special session. All core provisions survived: risk assessments, impact assessments, transparency requirements, and the duty of reasonable care.

Who Is Covered

The law distinguishes between two categories of regulated entities:

Developers are persons doing business in Colorado that develop or intentionally and substantially modify an AI system. A substantial modification means a deliberate change that creates new reasonably foreseeable risks of algorithmic discrimination. Developers must exercise reasonable care to protect consumers from discrimination risks and provide documentation to deployers.

Deployers are persons doing business in Colorado that deploy a high-risk AI system. This includes any company using AI tools to make decisions affecting Colorado consumers, even if the company is headquartered elsewhere. Deployers have distinct obligations for risk management, impact assessment, and consumer notice.

Companies that both develop and deploy AI systems are subject to both sets of requirements, though the law provides that a developer acting as a deployer need not duplicate documentation if the system is not provided to an unaffiliated entity.

Defining High-Risk AI Systems

The law's obligations apply only to "high-risk AI systems," defined as any AI system that, when deployed, makes or is a substantial factor in making a "consequential decision." A consequential decision is one with a material legal or similarly significant effect on the provision, denial, cost, or terms of:

| Category | Examples |
| --- | --- |
| Education | Enrollment decisions, scholarship eligibility, academic placements |
| Employment | Hiring, promotions, terminations, compensation, work assignments |
| Financial Services | Loan approvals, credit limits, interest rates, account terms |
| Healthcare | Treatment recommendations, coverage decisions, appointment access |
| Housing | Rental applications, tenant screening, lease terms, rent pricing |
| Insurance | Policy eligibility, coverage amounts, premium pricing, claims decisions |
| Government Services | Benefits eligibility, licensing, permit approvals |
| Legal Services | Case assessment, resource allocation, procedural decisions |

Certain systems are expressly excluded from the high-risk definition unless they directly make consequential decisions: anti-fraud systems not using facial recognition, cybersecurity tools, calculators, anti-malware software, databases, and narrow procedural assistants that don't replace human assessment.

Algorithmic Discrimination

The core purpose of the law is preventing algorithmic discrimination, defined as any condition in which an AI system results in unlawful differential treatment or impact that disfavors an individual or group based on protected characteristics. Colorado's protected characteristics include:

Age, color, disability, ethnicity, genetic information, limited proficiency in English, national origin, race, religion, reproductive health, sex, veteran status, and any other classification protected under Colorado or federal law.

Importantly, the law incorporates both disparate treatment and disparate impact theories. An AI system can cause algorithmic discrimination even without discriminatory intent if it produces outcomes that disproportionately affect protected groups. This has created tension with federal Executive Order 14281, issued in April 2025, which directs federal agencies to deemphasize disparate-impact enforcement. Companies operating nationally will need to navigate potentially conflicting requirements.

Developer Requirements

Starting June 30, 2026, developers of high-risk AI systems must exercise reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. Developers must:

Provide Documentation to Deployers: Developers must furnish deployers with comprehensive documentation including: a general statement describing reasonably foreseeable uses and known harmful or inappropriate uses; high-level summaries of training data and data governance measures; documentation of how the system was evaluated for algorithmic discrimination; intended use cases, foreseeable limitations, and technical capabilities; and artifacts such as model cards, dataset cards, or impact assessments necessary for deployers to complete their own assessments.

Publish Public Statements: Developers must maintain a clear, readily available statement on their website or in a public use case inventory describing the types of high-risk AI systems they make available and how they manage known or reasonably foreseeable risks of algorithmic discrimination. This statement must be updated within 90 days of any intentional and substantial modification to a system.

Report Discrimination Risks: Within 90 days of discovering or receiving a credible report that a high-risk AI system has caused or is reasonably likely to cause algorithmic discrimination, developers must notify the Colorado Attorney General and all known deployers of the system.

Respond to Attorney General Requests: Upon request, developers must provide specified documentation to the Attorney General within 90 days. Developers may designate such documentation as proprietary to prevent disclosure under the Colorado Open Records Act, and sharing information with the Attorney General does not waive attorney-client privilege.

Deployer Requirements

Deployers face more extensive operational obligations. Starting June 30, 2026, deployers must:

Implement a Risk Management Policy: Deployers must establish and maintain a risk management policy and program that specifies and incorporates the principles, processes, and personnel used to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The policy must be "reasonable" considering the NIST AI Risk Management Framework or another nationally or internationally recognized framework such as ISO/IEC 42001.

Complete Impact Assessments: Deployers must conduct an initial impact assessment within 90 days of the law's effective date and repeat the assessment at least annually and within 90 days of any intentional and substantial modification. The impact assessment must include: a statement of purpose, intended use cases, deployment context, and benefits; an analysis of whether deployment poses risks of algorithmic discrimination and steps taken to mitigate those risks; a description of input and output data categories; metrics used to evaluate performance and known limitations; transparency measures; and post-deployment monitoring and safeguards. Impact assessments must be retained for at least three years following final deployment and provided to the Attorney General upon request within 90 days.

Conduct Annual Reviews: Deployers must annually review each deployed high-risk AI system to ensure it is not causing algorithmic discrimination.

Make Public Disclosures: Deployers must publish a readily available statement on their website describing the types of high-risk AI systems they deploy, how they manage algorithmic discrimination risks, and the nature, source, and extent of information collected and used by such systems.

Consumer Notice Requirements

The law creates significant consumer-facing disclosure obligations for deployers:

Pre-Decision Notice: Before deploying a high-risk AI system to make or substantially factor into a consequential decision concerning a consumer, the deployer must: notify the consumer that a high-risk AI system is in use; disclose the purpose of the system and the nature of the consequential decision; provide contact information for the deployer; include a plain-language description of the system; and explain how to access the deployer's public statement.

Adverse Decision Notice: If a high-risk AI system makes or substantially contributes to an adverse decision concerning a consumer, the deployer must provide: a statement of the principal reasons for the adverse decision; the degree and manner in which the AI contributed to the decision; the types of data processed and data sources used; an opportunity to correct any incorrect personal data; and an opportunity to appeal the decision, with human review when technically feasible.

Opt-Out Information: Where applicable, deployers must inform consumers of their right to opt out of personal data processing for profiling under the Colorado Privacy Act.

AI Interaction Disclosure: Any entity doing business in Colorado that deploys an AI system intended to interact with consumers must ensure disclosure that the consumer is interacting with an AI system. This requirement applies to all consumer-facing AI systems, not just high-risk systems.

Exemptions

The law provides several exemptions from specific requirements:

Small Business Deployers: Deployers employing fewer than 50 full-time equivalent employees throughout the deployment period are exempt from the requirements to publish a website statement, conduct impact assessments, and implement a risk management policy if three conditions hold: the deployer does not use its own data to train the AI system (the system learns only from data other than the deployer's); the deployer uses the system only for its intended purpose as specified by the developer; and the deployer provides consumers with any impact assessment furnished by the developer, which must include the information the deployer would have included in its own assessment.
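The small-business carve-out reduces to a conjunction of conditions, which can be expressed as a simple eligibility screen. This is an illustrative sketch, not legal advice; each flag glosses a statutory condition:

```python
def small_deployer_exempt(fte_count: int,
                          trains_on_own_data: bool,
                          uses_only_intended_purpose: bool,
                          shares_developer_assessment: bool) -> bool:
    """True if the deployer appears to qualify for the small-business
    carve-out from the website-statement, impact-assessment, and
    risk-management-policy duties (illustrative screen only)."""
    return (fte_count < 50                     # under 50 FTEs throughout deployment
            and not trains_on_own_data         # deployer's data not used in training
            and uses_only_intended_purpose     # used as the developer specified
            and shares_developer_assessment)   # developer's assessment passed on
```

Note that a deployer at exactly 50 FTEs, or one that fine-tunes the system on its own data, loses the exemption entirely.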

Federally Regulated Systems: AI systems approved by federal agencies such as the FDA, FAA, or FHFA are fully exempt. Systems complying with equivalent or stricter federal standards, such as those from the Office of the National Coordinator for Health Information Technology, are also exempt.

Financial Institutions: Banks, out-of-state banks, Colorado-chartered credit unions, federal credit unions, out-of-state credit unions, and their affiliates and subsidiaries are in full compliance if subject to examination by a state or federal prudential regulator under published AI guidance or regulations that meet the law's criteria.

Insurers: Insurers, fraternal benefit societies, and developers of AI systems used by insurers are in full compliance if subject to Colorado's existing laws governing insurers' use of external consumer data, algorithms, and predictive models (C.R.S. 10-3-1104.9).

HIPAA-Covered Entities: Healthcare entities covered under HIPAA that provide AI-generated healthcare recommendations requiring a healthcare provider's action are exempt, provided the system is not classified as high-risk.

Federal Research and Contracts: Work performed for the Department of Defense, NASA, the Department of Commerce, or under federal research programs is generally exempt unless the AI is used specifically for employment or housing decisions.

Enforcement and Penalties

The Colorado Attorney General has exclusive authority to enforce the law. Violations constitute unfair trade practices under the Colorado Consumer Protection Act, with maximum civil penalties of $20,000 per violation. Violations are counted separately for each consumer or transaction involved, meaning widespread use of a non-compliant system could result in substantial aggregate penalties.
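Because violations are counted per consumer or transaction, worst-case exposure scales linearly with the number of people affected. A quick arithmetic sketch (illustrative only, not legal advice):

```python
MAX_PENALTY_PER_VIOLATION = 20_000  # Colorado Consumer Protection Act ceiling

def max_exposure(affected_consumers_or_transactions: int) -> int:
    """Upper-bound aggregate civil penalty if each affected consumer or
    transaction counts as a separate violation (worst-case sketch)."""
    return affected_consumers_or_transactions * MAX_PENALTY_PER_VIOLATION

# A non-compliant screening tool applied to 1,000 applicants could in
# principle expose the deployer to up to $20,000,000.
```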

There is no private right of action under the Colorado AI Act.

Affirmative Defense and Safe Harbors

The law provides significant compliance incentives through rebuttable presumptions and affirmative defenses:

Rebuttable Presumption: Developers and deployers who comply with all applicable requirements receive a rebuttable presumption that they exercised reasonable care. This means the Attorney General would need to affirmatively prove the entity did not exercise reasonable care despite technical compliance.

Affirmative Defense: A developer, deployer, or other person has an affirmative defense to an enforcement action if they: are in compliance with the latest version of the NIST AI Risk Management Framework, ISO/IEC 42001, another nationally or internationally recognized AI risk management framework, or a framework designated by the Attorney General; and took specified measures to discover and correct violations, such as through feedback processes, adversarial testing ("red teaming"), or internal review.

These provisions strongly encourage adoption of established frameworks like NIST AI RMF and ISO/IEC 42001, which align closely with the law's requirements for documentation, risk assessment, monitoring, and governance.

Comparison to Other Frameworks

The Colorado AI Act sits between California's narrow, targeted approach and the EU AI Act's comprehensive framework:

| Feature | Colorado AI Act | California Laws | EU AI Act |
| --- | --- | --- | --- |
| Scope | High-risk systems making consequential decisions | Targeted: transparency, deepfakes, training data | Risk-based tiers from unacceptable to minimal |
| Primary Focus | Algorithmic discrimination | Transparency and disclosure | Safety and fundamental rights |
| Impact Assessments | Required annually | Required under CCPA ADMT regulations | Required for high-risk systems |
| Private Right of Action | No | Yes, under some laws | Limited |
| Framework Alignment | NIST AI RMF, ISO 42001 | Various | Harmonized EU standards |

Compliance Preparation Steps

1. Inventory AI Systems: Identify all AI systems used across your operations, especially those influencing decisions in employment, lending, housing, education, healthcare, insurance, government services, or legal services. Determine which systems meet the definition of "high-risk" based on their role in consequential decisions.

2. Assess Your Role: Determine whether you are a developer, deployer, or both for each high-risk system. If you purchase or license AI tools from vendors, you are likely a deployer. If you build or substantially modify AI systems, you may be a developer.

3. Obtain Developer Documentation: For deployed systems, request from your vendors the documentation required under the law: use case statements, training data summaries, limitation disclosures, evaluation methods, and materials necessary for impact assessments. Build these requirements into vendor contracts.

4. Develop a Risk Management Program: Implement policies and procedures aligned with NIST AI RMF or ISO/IEC 42001. Document the principles, processes, and personnel responsible for identifying, documenting, and mitigating algorithmic discrimination risks.

5. Conduct Impact Assessments: Develop templates and processes for conducting required impact assessments. Consider whether a single assessment can cover comparable systems. If you conduct impact assessments under other laws (such as the EU AI Act or California's CCPA ADMT regulations), evaluate whether they satisfy Colorado's requirements.

6. Prepare Consumer Disclosures: Draft pre-decision notices, adverse decision notices, and website statements. Establish processes for providing these disclosures at the appropriate times and through appropriate channels.

7. Monitor and Review: Establish procedures for annual reviews of deployed high-risk systems and ongoing monitoring for algorithmic discrimination. Create incident response protocols for reporting discrimination risks within required timeframes.

8. Monitor Legislative Developments: The 2026 Colorado legislative session may produce amendments before the June 30 effective date. Track developments and be prepared to adjust compliance plans if the law's requirements change.
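Steps 1 and 2 above amount to a triage pass over an AI inventory. A minimal sketch follows; the system names and categories are invented, and the deadline calculation assumes the current June 30, 2026 effective date holds:

```python
from datetime import date, timedelta

EFFECTIVE_DATE = date(2026, 6, 30)  # current statutory effective date
# Initial deployer impact assessments are due 90 days later
FIRST_ASSESSMENT_DEADLINE = EFFECTIVE_DATE + timedelta(days=90)  # 2026-09-28

CONSEQUENTIAL = {"education", "employment", "financial_services", "healthcare",
                 "housing", "insurance", "government_services", "legal_services"}

# A minimal inventory: (system name, decision category, substantial factor?)
inventory = [
    ("resume_screener", "employment", True),
    ("chat_support_bot", "other", False),
    ("rent_pricing_model", "housing", True),
]

# Systems that screen as high-risk and therefore need impact assessments
high_risk = [name for name, cat, factor in inventory
             if cat in CONSEQUENTIAL and factor]
```

Here `resume_screener` and `rent_pricing_model` would screen as high-risk, while the support chatbot would only trigger the general AI interaction disclosure.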

Important Dates

| Date | Event |
| --- | --- |
| May 17, 2024 | Gov. Polis signs SB 24-205 (original Colorado AI Act) |
| Aug. 28, 2025 | Gov. Polis signs SB 25B-004, delaying effective date |
| Jan. 2026 | Colorado General Assembly regular session begins; amendments expected |
| June 30, 2026 | Current effective date for all requirements (subject to legislative change) |
| Sept. 28, 2026 | Deadline for initial deployer impact assessments (90 days after effective date) |

Additional Resources

For official text and legislative history, see the Colorado General Assembly bill page for SB 24-205. The Colorado Attorney General's office maintains an AI resources page that will be updated as rulemaking proceeds.

The NIST AI Risk Management Framework provides detailed guidance on implementing the governance structures contemplated by the law. For international alignment, ISO/IEC 42001:2023 establishes an AI management system standard that addresses many of the law's requirements.