The RAISE Act: New York Enters the AI Safety Debate


Key highlights this week:

  • We’re tracking 781 bills in 47 states related to AI during the 2025 legislative session.

  • Utah lawmakers sent a bill to the governor that would regulate mental health chatbots.

  • The Montana House passed a digital replica bill. 

  • And New York offers the first major AI safety bill of the year, which is the topic of this week’s deep dive below. 

After focusing on algorithmic discrimination proposals for the last few weeks, we’ve got our first major AI safety bill language of the year, and this time it comes from New York. As you’ll remember, most of the state AI policy attention late last year was squarely focused on Sen. Scott Wiener’s (D) AI safety bill (CA SB 1047) in California until Gov. Gavin Newsom (D) ultimately vetoed the proposal. Last week, Sen. Wiener added language to his placeholder bill (CA SB 53), believed to be this year’s version of the AI safety bill. The new language includes two of the less controversial provisions of SB 1047 — whistleblower protections and a state cloud computing cluster for AI — and so far leaves out the safety reporting mandates of the original bill. It’s widely expected that Sen. Wiener will add safety provisions to the bill once Gov. Newsom’s AI Task Force releases its recommendations. In the meantime, New York Assemblymember Alex Bores (D) has released a detailed AI safety bill (NY AB 6453) inspired by Sen. Wiener’s SB 1047. 

On Wednesday, Asm. Bores introduced the Responsible AI Safety and Education Act (RAISE Act), which aims to limit critical harm from powerful frontier AI models through safety reporting requirements. In a memo accompanying the bill, Asm. Bores wrote that “AI has potential for unimaginable benefits. However, this promise must be managed with care.” He then outlines what his bill would require of frontier model developers: “[W]hen dealing with one of the most promising and dangerous technologies humans have ever developed, labs need to do four things”:

  1. Have a safety plan;

  2. Have a third-party review that safety plan;

  3. Not fire employees that flag risks; and

  4. Disclose major security incidents.

“It is the bare minimum that New Yorkers expect,” Asm. Bores concludes. To Asm. Bores’ list I’ve added one more “thing” the bill requires of large developers: “don’t deploy unsafe models” (which seems pretty important). Now, let’s dig into the details.

Who needs to comply?

Unlike most of the AI-related bills introduced in the states, which target the outcomes of AI use and attempt to regulate the “deployers” of AI tools, an AI safety bill applies to the AI model itself and to the “developer” of that model. 

This bill attempts to apply only to “frontier” AI models (e.g., those from OpenAI, Google, Anthropic, Meta, and xAI) by using both a computational threshold and a training cost threshold. “Frontier models” are defined as models trained using greater than 10^26 computational operations with compute costs exceeding $100 million. (These are the same computational and dollar thresholds that the final version of CA SB 1047 used last year.)

A model could also qualify as a “frontier model” under the bill if a developer applies “knowledge distillation” to a frontier model. Knowledge distillation is a technique in which a smaller AI model is trained to mimic the performance of a larger, more powerful AI model; the smaller model learns from the outputs of the larger model rather than directly from raw data. So in this case, if a developer distills a smaller model from a frontier model, the smaller distilled model is also considered a “frontier model” under this bill. AI developers often use this technique to train the smaller, cheaper versions of their flagship models (e.g., o3‑mini or Gemini 2.0 Flash-Lite). Knowledge distillation is also the technique that OpenAI has accused Chinese developer DeepSeek of using to train its model on ChatGPT’s outputs.
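For readers less familiar with the technique, here is a minimal sketch of what knowledge distillation typically looks like in code. This is purely illustrative and not drawn from the bill or from any particular lab’s training pipeline; the function name, temperature, and blending weight are my own assumptions.

    # Illustrative only: a common knowledge-distillation loss, in which a small
    # "student" model is trained to match the softened output distribution of a
    # larger "teacher" model in addition to the ground-truth labels.
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
        # Soften both distributions with temperature T and compare them via KL divergence.
        student_log_probs = F.log_softmax(student_logits / T, dim=-1)
        teacher_probs = F.softmax(teacher_logits / T, dim=-1)
        kd_term = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (T * T)
        # Blend with the ordinary cross-entropy loss on the true labels.
        ce_term = F.cross_entropy(student_logits, labels)
        return alpha * kd_term + (1 - alpha) * ce_term

The point for the bill is that the distilled student inherits the “frontier model” designation from its teacher, even if the student’s own training run falls well below the compute and cost thresholds.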

A “large developer” is defined in the bill as someone who has trained a frontier model and has spent over $100 million in aggregate on compute costs to train frontier models. Notably, this definition exempts accredited colleges and universities conducting academic research.
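To make the coverage test concrete, here is a rough sketch of the two thresholds described above, expressed as code. The function and variable names are my own, and the bill’s actual definitions (including the treatment of distilled models and the academic-research exemption) are more nuanced than this toy check.

    # Illustrative only: a toy version of the bill's coverage thresholds as
    # summarized above. Real determinations would turn on the statutory text.
    FRONTIER_TRAINING_OPS = 1e26            # computational operations used in training
    COMPUTE_COST_FLOOR_USD = 100_000_000    # compute cost threshold in dollars

    def is_frontier_model(training_ops: float, training_cost_usd: float) -> bool:
        # Trained with more than 10^26 operations at a compute cost over $100 million
        # (distilled models inherit the designation from their teacher, not shown here).
        return training_ops > FRONTIER_TRAINING_OPS and training_cost_usd > COMPUTE_COST_FLOOR_USD

    def is_large_developer(trained_frontier_model: bool, aggregate_compute_cost_usd: float) -> bool:
        # Has trained a frontier model and spent over $100 million in aggregate on
        # compute to train frontier models (academic research is exempt).
        return trained_frontier_model and aggregate_compute_cost_usd > COMPUTE_COST_FLOOR_USD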

Ok, so only the big expensive models. What do those developers need to do?

Have a safety plan 

The RAISE Act would require large developers of frontier AI models to implement and publish a safety and security protocol, including protections against unauthorized access and misuse. A redacted version of the protocol would be available to the public, but the New York Attorney General could access the unredacted version. A review of the written safety and security protocol would be required annually to account for any changes in the capabilities of the model. 

Have a third party review the safety plan

The proposed bill would require large developers to retain third-party auditors to verify compliance and submit annual reports to the attorney general. As with the safety and security protocol, a redacted version of the independent third-party audits would be made available to the public, with an unredacted version (including an updated total of compute costs) available to the state attorney general. 

Not fire employees that flag risks

The RAISE Act would establish whistleblower protections for employees who report AI-related safety risks. The employee must have reasonable cause to believe that the large developer's activities pose an unreasonable or substantial risk of critical harm, regardless of the employer's compliance with applicable law. These whistleblower protections would also apply to contractors and subcontractors of large model developers. And the employees must be informed of these whistleblower protections with conspicuously posted notices. 

Disclose major security incidents [and don’t deploy unsafe models]

The bill would also require large AI developers to disclose safety incidents to the attorney general within 72 hours of discovery. The bill defines a “safety incident” as an incident providing “demonstrable evidence of an increased risk of critical harm” within four categories: 

  1. A frontier model autonomously engaging in behavior other than at the request of a user;

  2. Theft, misappropriation, malicious use, inadvertent release, unauthorized access, or escape of the model weights of a frontier model;

  3. The critical failure of any technical or administrative controls, including controls limiting the ability to modify a frontier model; or

  4. Unauthorized use of a frontier model.

Finally, under the proposed law, large developers would be prohibited from deploying frontier models if they pose an unreasonable risk of “critical harm,” such as enabling mass casualties (100+ people) or economic damage (over $1 billion). Additionally, the bill clarifies that “a harm inflicted by an intervening human actor shall not be deemed to result from a developer's activities unless such activities made it substantially easier or more likely for the actor to inflict such harm.”

Enforcement 

The state attorney general could enforce the RAISE Act by bringing civil actions for violations of the transparency requirements, including injunctive relief and potential penalties of up to 5% of total compute costs for a first violation and up to 15% for subsequent violations. Penalties for violations of the whistleblower protections would be limited to $10,000 per employee (awarded to the employee), and employees could seek injunctive relief.
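For a rough sense of scale (my own back-of-the-envelope math, not figures from the bill): because covered developers have by definition spent more than $100 million on compute, the percentage-based caps start in the millions of dollars.

    # Illustrative arithmetic only, assuming a developer right at the $100 million
    # compute-cost floor; actual exposure scales with total compute costs.
    total_compute_costs_usd = 100_000_000
    first_violation_cap = 0.05 * total_compute_costs_usd       # $5,000,000
    subsequent_violation_cap = 0.15 * total_compute_costs_usd  # $15,000,000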

While he acknowledged that other important concerns with AI exist and should be addressed with separate legislation, Asm. Bores is explicit about the purpose of AB 6453: “This bill just tries to reduce the chance that AI, intentionally or unintentionally, kills us all.”

Recent Developments

Major Policy Action 

  • Kentucky: The Senate passed a measure (KY SB 4) that would regulate the use of AI by state agencies, creating an Artificial Intelligence Governance Committee and requiring disclosures when AI is used to render a decision regarding citizens, used in any process to produce materials that inform a decision, or used to produce information accessible to citizens and businesses. The proposal also includes a provision prohibiting political deepfake communications without a disclaimer, despite efforts to strip those sections from the bill.

  • Montana: On Monday, the House overwhelmingly approved a digital replica bill (MT HB 513) that would create a property right in an individual's name, voice, and likeness. Bill sponsor Rep. Jill Cohenour (D) expressed concern about the monetization of a person’s likeness and how even state lawmakers could be subject to having their image used.

  • South Dakota: Gov. Larry Rhoden (R) became the latest governor to ban the Chinese-based AI app DeepSeek from being downloaded or used on state-issued devices, joining Iowa, New York, Texas, and Virginia. This week, U.S. Reps. Josh Gottheimer (D-NJ) and Darin LaHood (R-IL) sent a letter to governors and mayors across the country urging them to ban the app from government devices.

  • Utah: Lawmakers passed a bill (UT SB 332) and sent it to the governor for his signature; it would extend the sunset date of the Artificial Intelligence Policy Act, originally written to expire this May, until July 1, 2027. We wrote about the law last spring when it took effect, noting that it requires certain regulated industries or entities engaged in regulated acts to disclose that a consumer is interacting with artificial intelligence. The Legislature also passed a measure (UT HB 452) that would regulate mental health chatbots, requiring disclosures to patients and prohibiting the sale of health information or user input, as well as the advertising of specific products in conversation.

Notable Proposals 

  • Connecticut: On Thursday, the Labor and Public Employees Committee introduced a measure (CT SB 1484) that would regulate the way employers use high-risk artificial intelligence systems to make consequential decisions affecting employees. State agencies would also be prohibited from using high-risk AI to deliver public assistance benefits or to affect rights, civil liberties, safety, or welfare.

  • Iowa: On Monday, the House Economic Growth and Technology Committee introduced a comprehensive AI bill (IA HSB 294), which would require developers and deployers to use reasonable care to protect individuals from known or reasonably foreseeable risks of algorithmic discrimination, with required documentation and disclosures, and would also require disclosure when consumers interact with AI.

  • Minnesota: Lawmakers have introduced several bills (MN HF 1606, HF 1895, SF 1119, SF 1783, SF 2240) that would prohibit a website or application from allowing a user to access, download, or use it to “nudify” an image or video, turning it into sexual content. Bill sponsor Sen. Erin Maye Quade (D) says she plans to share her proposal with lawmakers in other states who may not be aware of the technology.

  • New York: Asm. Bores (D) also introduced a bill (NY AB 6540) on Wednesday requiring generative AI providers to apply provenance data to synthetic content. Asm. Bores has embraced a global provenance standard known as C2PA and is considering a bill for future sessions that would require cameras and smartphones to have C2PA built in.

  • Nevada: A bill introduced this week (NV AB 325) would prohibit a public utility from using AI to make a final decision on whether to reduce or shut off utility service in response to a disaster or emergency. A Montana bill (MT SB 212) also had a provision relating to public utilities and AI, requiring the capability of a full shutdown of any artificial intelligence system controlling critical infrastructure, but that was modified by amendment to require a risk management policy instead.
