New York AI Laws Compliance Guide

Jan. 28, 2026

Last Updated: Jan. 31, 2026

New York has emerged as a national leader in artificial intelligence regulation through two distinct but complementary frameworks. The Responsible AI Safety and Education Act (RAISE Act), signed by Gov. Kathy Hochul on Dec. 19, 2025, establishes the nation's most stringent reporting requirements for frontier AI developers. Meanwhile, New York City's Local Law 144, in effect since July 2023, pioneered mandatory bias audits for automated hiring tools. Together, these laws position New York as the strictest U.S. jurisdiction for AI oversight.

The RAISE Act takes effect Jan. 1, 2027, forming a bicoastal pair with California's similar Transparency in Frontier AI Act (SB 53). While the two laws share a common foundation, New York's version imposes notably stricter incident reporting timelines and creates a new oversight office with rulemaking authority within the Department of Financial Services.

The RAISE Act: Frontier AI Safety

The RAISE Act targets the most advanced AI systems, focusing specifically on catastrophic risks rather than the algorithmic discrimination concerns addressed by Colorado's AI Act. The law requires developers of "frontier models" to implement safety protocols, report incidents rapidly, and submit to state oversight.

Who Is Covered

The RAISE Act applies to "large developers" of "frontier models." Under chapter amendments expected to be finalized in early 2026, a large developer is a company with annual revenue exceeding $500 million that has trained at least one frontier model. This definition aligns with California's approach and effectively covers the major AI companies: OpenAI, Anthropic, Google, Meta, and similar firms developing cutting-edge AI systems.

A "frontier model" is defined as an AI model trained using greater than 10²⁶ integer or floating-point operations (FLOPs). Models created through "knowledge distillation" from a frontier model may also qualify. Accredited colleges and universities engaged in academic research are exempt from coverage.

Most companies will not be directly regulated as large developers. However, the law's effects will ripple through vendor relationships, procurement practices, and risk management expectations across the AI supply chain.

Critical Harm Definition

The RAISE Act focuses on preventing "critical harm," defined as the death or serious injury of 100 or more people, or at least $1 billion in damage to rights in money or property, caused or materially enabled by a frontier model through either:

The creation or use of a chemical, biological, radiological, or nuclear weapon.

A model engaging, with no meaningful human intervention, in conduct that would constitute a crime under the New York Penal Law requiring intent, recklessness, or gross negligence, including solicitation or aiding and abetting of such a crime.

This narrow focus on catastrophic outcomes distinguishes New York from Colorado, which addresses algorithmic discrimination in everyday decisions. The RAISE Act is designed to prevent worst-case scenarios while avoiding regulation of routine AI applications.

Safety and Security Protocol Requirements

Large developers must implement, maintain, and publicly disclose a comprehensive safety and security protocol. The protocol must include:

Risk Mitigation Measures: Technical and organizational measures to reduce the risk of critical harm, including detailed strategies for how the developer will handle (not merely "approach") various catastrophic risks.

Cybersecurity Protections: Administrative, technical, and physical safeguards to prevent unauthorized access, theft, misappropriation, or misuse of frontier models and their weights.

Testing Procedures: Detailed testing protocols to evaluate the risk of critical harm, including assessment of potential misuse, modification, or loss of control. Testing procedures must be documented with sufficient detail to permit replication.

Compliance Mechanisms: Designation of senior personnel responsible for safety oversight and internal compliance monitoring.

Safeguards Against Unreasonable Risk: Measures to prevent deploying frontier models that pose an unreasonable risk of critical harm.

The protocol must be published with appropriate redactions for proprietary or security-sensitive information. Unredacted versions must be retained for as long as the model is in use plus five years.

72-Hour Incident Reporting

The RAISE Act's most distinctive feature is its 72-hour incident reporting requirement, significantly stricter than California's 15-day window. Large developers must report safety incidents to the New York Attorney General and the Division of Homeland Security and Emergency Services within 72 hours of either determining that a safety incident has occurred, or learning facts sufficient to establish a reasonable belief that an incident occurred.

A "safety incident" includes known occurrences of critical harm or any of the following events that provide demonstrable evidence of increased risk of critical harm:

| Incident Type | Description |
| --- | --- |
| Autonomous Behavior | A frontier model engaging in behavior not requested by a user |
| Model Weight Compromise | Theft, misappropriation, malicious use, inadvertent release, unauthorized access, or escape of model weights |
| Control Failure | Critical failure of any technical or administrative controls, including controls limiting the ability to modify a frontier model |
| Unauthorized Use | Unauthorized use of a frontier model |

Incident reports must include the date of the safety incident, the reasons the incident qualifies as a safety incident, and a short and plain statement describing the event.

For incidents posing an imminent risk of death or serious physical injury, both New York and California require reporting within 24 hours.
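As a minimal sketch of the deadline arithmetic, the snippet below computes the filing deadline from whichever trigger comes first, a determination that an incident occurred or facts establishing a reasonable belief that one did, applying the shorter 24-hour window for imminent-risk incidents. Function names and timestamps are hypothetical.

```python
# Sketch of RAISE Act reporting-deadline arithmetic. The 72-hour clock
# runs from the earlier of (a) determining a safety incident occurred or
# (b) learning facts establishing a reasonable belief that one occurred;
# imminent risk of death or serious physical injury shortens it to 24
# hours. Function and variable names here are hypothetical.
from datetime import datetime, timedelta, timezone

STANDARD_WINDOW = timedelta(hours=72)
IMMINENT_RISK_WINDOW = timedelta(hours=24)


def filing_deadline(clock_start: datetime, imminent_risk: bool) -> datetime:
    window = IMMINENT_RISK_WINDOW if imminent_risk else STANDARD_WINDOW
    return clock_start + window


# The clock starts at the EARLIER of the two triggers.
reasonable_belief = datetime(2027, 3, 1, 9, 30, tzinfo=timezone.utc)
determination = datetime(2027, 3, 2, 14, 0, tzinfo=timezone.utc)
start = min(reasonable_belief, determination)

print("Report due:", filing_deadline(start, imminent_risk=False).isoformat())
```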

Annual Review Requirements

Large developers must conduct an annual review of their safety and security protocols, accounting for changes to the capabilities of their frontier models and industry best practices. Material modifications to the protocol must be publicly disclosed.

Department of Financial Services Oversight Office

In a significant departure from California's approach, the RAISE Act creates a new AI oversight office within the New York Department of Financial Services (DFS). This office will be funded by developer fees and has broad authority to:

Monitor compliance with the RAISE Act.

Receive and review safety incident reports, risk assessments, and disclosure statements.

Maintain and publish a list of large frontier developers that have filed disclosure statements.

Issue annual reports on AI safety risks.

Adopt rules and regulations to implement the law, including additional reporting or publication requirements such as post-incident information sharing and transmission of frontier AI frameworks to the office.

California's comparable law does not grant any rulemaking authority to implement its provisions, making New York's oversight framework more robust and adaptive.

Whistleblower Protections

The RAISE Act prohibits developers from preventing or retaliating against employees who disclose information to the Attorney General about activities posing an unreasonable or substantial risk of critical harm. Developers must inform new employees about these protections and post appropriate notices. Employees harmed by violations may seek judicial relief.

Enforcement and Penalties

The New York Attorney General has exclusive enforcement authority. There is no private right of action. Civil penalties for violations include up to $1 million for a first violation and up to $3 million for subsequent violations. These penalties are significantly lower than the original bill's $10 million and $30 million caps but remain substantial.

Large developers violate the RAISE Act if they knowingly make false or materially misleading statements or omissions in documents produced under the law. The law also voids any contractual provisions by which a developer seeks to avoid liability or shift liability to other parties.

NYC Local Law 144: Automated Employment Decision Tools

Separately from the RAISE Act, New York City's Local Law 144 has regulated automated employment decision tools (AEDTs) since enforcement began on July 5, 2023. This law pioneered mandatory bias audits for AI hiring tools and serves as a model for similar legislation across the country.

Scope and Coverage

Local Law 144 applies to employers and employment agencies that use automated tools to substantially assist or replace discretionary decision-making for hiring or promotion decisions affecting candidates or employees who reside in New York City. The law applies regardless of where the employer is headquartered.

An AEDT is defined as a computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that generates simplified outputs such as scores, classifications, or recommendations used to substantially assist or replace discretionary employment decisions.

Bias Audit Requirements

Before using an AEDT, employers and employment agencies must ensure the tool has undergone an independent bias audit within the past year. The audit must evaluate the tool's disparate impact on candidates or employees based on protected categories including sex, race, and ethnicity.

The audit must calculate selection rates and impact ratios for different demographic groups. When sample sizes permit, intersectional categories (such as Asian women or Black men) must also be examined. Employers must publish a summary of the most recent bias audit results on their website, including the date of the audit, the source and explanation of data used, and the number of individuals in unknown categories.
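The audit arithmetic itself is straightforward: each category's selection rate is the share of its members selected, and its impact ratio is that rate divided by the highest category's rate. The sketch below uses invented counts and intersectional categories; the DCWP final rules govern the actual methodology, including the separate treatment of tools that score rather than select candidates.

```python
# Sketch of the bias-audit arithmetic described above: selection rate =
# selected / total for each category, and impact ratio = category rate
# divided by the highest category rate. All counts are invented; DCWP's
# final rules define the binding methodology.

# (sex, race/ethnicity) -> (selected, total applicants)
outcomes = {
    ("Male", "White"): (48, 200),
    ("Female", "White"): (40, 180),
    ("Male", "Black or African American"): (18, 90),
    ("Female", "Black or African American"): (14, 80),
}

rates = {cat: sel / total for cat, (sel, total) in outcomes.items()}
top_rate = max(rates.values())

for cat, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
    print(f"{cat}: selection rate {rate:.3f}, impact ratio {rate / top_rate:.3f}")
```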

Vendors may proactively conduct bias audits of their tools, but employers remain responsible for ensuring compliance. Many organizations coordinate with vendors to gather necessary data and documentation.

Notice Requirements

Employers must notify candidates at least 10 business days before using an AEDT in their evaluation. The notice must communicate that an automated tool will be used, explain the job qualifications or characteristics being assessed, describe the data being collected, and provide instructions for requesting a reasonable accommodation or alternative assessment process.

Notice may be provided through the employment section of the company's website, in a job posting, or by mail or email. For current employees being evaluated for promotion, notice may be given through written policies or procedures.

Employers must also disclose on their websites their AEDT data retention policy, the type of data collected for the AEDT, and the source of the data.
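The 10-business-day clock is simple but easy to get wrong when weekends intervene. Below is a minimal sketch, assuming only weekends are excluded; a production version would also consult a holiday calendar.

```python
# Sketch of the 10-business-day advance-notice window. Only weekends are
# treated as non-business days here; holidays would require a real
# calendar. The earliest-use calculation shown is an illustrative reading.
from datetime import date, timedelta


def earliest_aedt_use(notice_sent: date, business_days: int = 10) -> date:
    d = notice_sent
    counted = 0
    while counted < business_days:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday through Friday
            counted += 1
    return d


# Notice sent Monday, March 2, 2026 -> earliest use Monday, March 16.
print(earliest_aedt_use(date(2026, 3, 2)))
```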

Enforcement and Penalties

The NYC Department of Consumer and Worker Protection (DCWP) enforces Local Law 144. Penalties are $500 for a first violation (and for each additional violation occurring on the same day as the first) and $500 to $1,500 for each subsequent violation. Because each day an AEDT is used in violation counts as a separate violation, exposure can exceed $10,000 per week of continued non-compliance. Failing to conduct a bias audit and failing to provide required notices are separate violations.

A December 2025 audit by the New York State Comptroller found that DCWP's enforcement has been limited, with only two AEDT-related complaints received during a two-year period. The Comptroller identified 17 instances of potential non-compliance among companies DCWP had reviewed and recommended improvements to complaint intake and enforcement processes.

Comparison: New York vs. California vs. Colorado

| Feature | NY RAISE Act | CA SB 53 (TFAIA) | CO AI Act |
| --- | --- | --- | --- |
| Effective Date | Jan. 1, 2027 | Jan. 1, 2026 | June 30, 2026 |
| Primary Focus | Frontier AI catastrophic risk | Frontier AI transparency | Algorithmic discrimination |
| Coverage | Large developers ($500M+ revenue) | Large frontier developers ($500M+) | All high-risk AI deployers |
| Incident Reporting | 72 hours | 15 days | 90 days (discrimination) |
| Oversight Authority | DFS office with rulemaking | OES (no rulemaking) | AG enforcement only |
| Maximum Penalty | $3M (subsequent) | $1M per violation | $20K per violation |
| Private Right of Action | No | No | No |

Compliance Preparation

For Frontier AI Developers

Assess Coverage: Determine whether your company meets the large developer threshold ($500M+ revenue) and whether any of your models qualify as frontier models (10²⁶+ FLOPs). Track the chapter amendments expected in early 2026 for final definitional clarity.

Develop Safety Protocols: Create comprehensive safety and security protocols addressing critical harm risks, cybersecurity protections, testing procedures, and compliance mechanisms. Document protocols in sufficient detail to permit third-party replication of testing procedures.

Establish Incident Response: Build internal workflows capable of identifying, evaluating, and reporting safety incidents within 72 hours. Distinguish between technical bugs and reportable safety incidents. Create clear escalation paths to senior personnel and legal counsel.

Prepare for Oversight: Anticipate registration requirements and fee assessments from the DFS oversight office. Monitor rulemaking proceedings for additional reporting obligations.

Implement Whistleblower Protections: Update employee handbooks, onboarding materials, and workplace notices to inform employees of their rights under the RAISE Act.

For Companies Using AI Hiring Tools in NYC

Inventory AEDTs: Identify all automated tools used in hiring or promotion decisions affecting NYC residents. Determine which tools meet the AEDT definition.

Conduct or Obtain Bias Audits: Ensure each AEDT has been independently audited within the past year. Coordinate with vendors to obtain audit reports and necessary data. Schedule annual re-audits.

Publish Required Disclosures: Post bias audit summaries on your employment website. Include data retention policies, data types collected, and data sources.

Update Candidate Communications: Modify job postings, application processes, and email templates to provide required 10-day advance notice to candidates.

For All Companies Using AI in New York

Vendor Diligence: Ask AI vendors whether their models fall within frontier definitions and how they manage safety risks. Request documentation of safety protocols and incident history.

Contract Review: Consider whether AI safety disclosures or incident-notification provisions should be added to procurement agreements. Note that the RAISE Act voids provisions by which developers attempt to shift liability.

Monitor Federal Developments: Track the Trump administration's efforts to preempt state AI laws and potential legal challenges to the RAISE Act. Maintain flexibility in compliance planning.

Important Dates

| Date | Event |
| --- | --- |
| July 5, 2023 | NYC Local Law 144 enforcement begins |
| June 2025 | New York Legislature passes RAISE Act |
| Dec. 19, 2025 | Gov. Hochul signs RAISE Act |
| Early 2026 | Chapter amendments expected to finalize RAISE Act text |
| Jan. 1, 2027 | RAISE Act takes effect |

Additional Resources

For the official RAISE Act text and legislative history, see the New York State Senate bill page for S6953B. Gov. Hochul's signing announcement provides additional context on the law's intent and agreed-upon amendments.

For NYC Local Law 144, the Department of Consumer and Worker Protection AEDT page provides the full law text, final rules, and frequently asked questions. The DCWP FAQ document offers detailed guidance on bias audit requirements and notice procedures.