California Narrows Its Model-Level AI Proposal

Key highlights this week:

  • We’re currently tracking 643 AI-related bills in 45 states this year (plus another 113 congressional bills); 54 of those state bills have been enacted into law.

  • Utah quietly moves forward to implement its AI Policy Act by issuing regulations. 

  • Pennsylvania lawmakers advance a sexual deepfake bill, potentially joining the 20 states that have already enacted similar legislation.

  • Lawmakers in California amended their landmark model-level regulation bill, which is the focus of this week’s analysis below.


Artificial intelligence legislation in California has been on the move recently. The Golden State has considered over 50 AI-related bills so far, and several have advanced in the last few weeks, including the bill the AI industry has watched most closely — SB 1047. Earlier this year, we highlighted SB 1047, which would establish a new state office to regulate large AI models and certify compliance. Having already passed the Senate, the measure was tweaked this week by its sponsor, Sen. Scott Wiener (D), to address concerns raised by industry groups.

The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (CA SB 1047) is the clearest legislative example of the model-level approach to AI regulation. Unlike the vast majority of the 600-plus bills state lawmakers have introduced this year, SB 1047 takes aim not at the use and output of AI tools but at the source of AI itself — the frontier models and the tech juggernauts developing them.

The bill seeks to establish a new government agency to regulate large AI models and certify compliance. It would require developers to determine whether their AI system qualifies for a limited duty exemption by providing certain safeguards, including security protections, protocols to prevent critical harms, and the ability to perform a full shutdown of the model. To obtain a certification of compliance from the newly created Frontier Model Division, developers would also need to perform capability testing and implement safeguards to prevent users from causing harm.

The amendment this week retains those provisions but makes some changes in response to industry feedback. Notably, the scope of the bill has changed in an attempt to avoid imposing compliance burdens on smaller model developers. The original bill applied to models trained using a quantity of computing power greater than 10^26 integer or floating-point operations, the same threshold used by the Biden Administration’s Executive Order on AI.

Critics worried that many models could surpass that threshold in the near future. The original bill language would have also applied to models that could be reasonably expected to have “similar or greater performance,” a provision that raised concerns that, over time, the regulations could ensnare smaller models with similar general capabilities even if they performed strictly worse on any given benchmark. To address these concerns, Sen. Wiener amended the bill to retain the original computing power threshold but add a requirement that the cost of the computing power used to train the model exceed $100 million (indexed to the Consumer Price Index starting in 2026). The amended bill also removed “similar or greater performance” models from the definition. While this should clarify that the legislation won’t apply to smaller models, supporters of AI regulation argue that limiting regulation to the largest and best-financed models waters down the original intent and longevity of the legislation.
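To make the amended two-part scope test concrete, here is a minimal sketch in Python, assuming a simplified reading of the definition (the constant and function names are our own illustration, not the bill’s language):

```python
# Minimal sketch of the amended SB 1047 scope test (simplified reading;
# names are illustrative, not statutory). Under the amendment, a model
# must exceed BOTH thresholds to be covered.

COMPUTE_THRESHOLD_OPS = 1e26        # 10^26 integer or floating-point operations
COST_THRESHOLD_USD = 100_000_000    # $100 million, CPI-indexed starting in 2026

def is_covered_model(training_ops: float, training_cost_usd: float) -> bool:
    """Return True if a model would fall within the amended bill's scope."""
    return training_ops > COMPUTE_THRESHOLD_OPS and training_cost_usd > COST_THRESHOLD_USD

# A frontier-scale run clearing both thresholds is covered; the same amount
# of compute obtained cheaply (e.g., as hardware costs fall) would not be.
print(is_covered_model(2e26, 150_000_000))  # True
print(is_covered_model(2e26, 60_000_000))   # False
```

Requiring both conditions is what insulates smaller developers: as compute becomes cheaper over time, crossing 10^26 operations alone would no longer trigger coverage.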

To receive a limited duty exemption, developers under the original bill had to “reasonably exclude the possibility” that the model had hazardous capabilities. That standard has been changed to require the developer to “provide reasonable assurance” that the model does not have hazardous capabilities. The definition of “hazardous capability” has also been amended to clarify that it does not apply to models with a limited duty exemption, to add bodily harm and theft of or harm to property (with a mental state requirement), and to index the damages threshold to inflation.

Another sticking point focused on “derivative models,” which are models that modify existing models or combine them with other models. The bill would absolve developers of many requirements for derivative models. The amended version clarifies that a model created by fine-tuning an existing model using computing power greater than 25 percent of the quantity of computing power used to train the original model will not constitute a “derivative model.” As AI commentator Zvi Mowshowitz explains the change, “If they change your model using more than 25% of the compute you spent, then it becomes their responsibility, not yours. If they use less than that, then you did most of the work, so you still bear the consequences.”
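That responsibility-shifting rule can likewise be expressed as a short sketch, again with illustrative names and a simplified reading of the amended text:

```python
# Minimal sketch of the amended "derivative model" carve-out (simplified
# reading; names are illustrative, not statutory). A fine-tune using more
# than 25% of the original training compute is NOT a derivative model,
# so responsibility shifts to the fine-tuner.

def remains_derivative(original_ops: float, fine_tune_ops: float) -> bool:
    """Return True if the fine-tuned model still counts as a derivative of
    the original, leaving the original developer responsible."""
    return fine_tune_ops <= 0.25 * original_ops

print(remains_derivative(1e26, 1e25))  # True: original developer still responsible
print(remains_derivative(1e26, 5e25))  # False: fine-tuner bears responsibility
```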

Proponents of open-source models worried that the provision requiring developers to retain the capability to perform a full shutdown would be feasible only for closed-source models. The amendment clarifies that the shutdown requirement does not apply to a “covered model to which access was granted pursuant to a license that was not granted by the licensor on a discretionary basis.” Finally, the amendment also clarified the civil penalty of 10 percent of the cost of the model. The bill has been re-referred to the Assembly Committee on Privacy and Consumer Protection, with a hearing scheduled for June 18.

The amendments attempt to balance the need for regulation against compliance costs, with Sen. Wiener arguing that “by focusing its requirements on the well-resourced developers of the largest and most powerful frontier models, SB 1047 puts sensible guardrails in place against risk while leaving startups free to innovate without any new burdens.” But he adds it is still a “work in progress” with opportunities for constructive input from stakeholders. Some have expressed doubts the bill will pass, particularly after Gov. Gavin Newsom warned about overregulating AI in a speech last week, although he has not taken a public stand on the specific bill.


Recent Developments

Major Policy Action 

  • Kentucky: Senate President Robert Stivers appointed the four Senate members of the Artificial Intelligence Task Force, which will consider legislative initiatives for consumer protection in AI implementation and develop recommendations on state government use of AI. The task force was established by resolution this year and will also include four House members; it will meet monthly and must submit its findings and recommendations by December 1.

  • New York: The Senate passed a bill (NY SB 9104) that would establish the position of chief artificial intelligence officer to develop statewide artificial intelligence policies and governance. The measure would also create an advisory committee for state artificial intelligence policy.

  • Pennsylvania: The Senate Judiciary Committee advanced a bill (PA SB 1213) that adds “artificially generated” depictions to criminal provisions against disseminating sexual images of individuals and minors. Under the proposed law, images that “authentically depict an individual” in sexual conduct that did not occur would be prohibited, allowing prosecutors to press charges even if the image does not technically portray the victim.

  • Utah: The newly created Office of Artificial Intelligence Policy issued rules implementing the Artificial Intelligence Learning Laboratory Program established by legislation passed earlier this year (UT SB 149). The program allows selected applicants to experiment with AI under regulatory mitigation agreements with the state.


Notable Proposals

  • New Jersey: This week, Sen. Kristin Corrado introduced a measure (NJ SB 3357) to establish an Artificial Intelligence Advisory Council to study the technology and weigh the costs and benefits of its use by state government. The council would be required to issue a report within one year of its first meeting.

  • New York: On Monday, Assemblyman Kenny Burgos introduced the Artificial Intelligence Literacy Act of 2024 (NY AB 10556). The measure adds artificial intelligence literacy to the digital equity competitive grant program for schools and organizations.
