California’s Newsom Signs 18 AI Bills But Vetoes SB 1047

Key highlights this week:

  • We’re currently tracking 694 bills in 45 states related to AI this year, 90 of which have been enacted into law.

  • California Gov. Newsom (D) signed into law 13 additional AI-related bills, joining the 7 he signed earlier in the month, bringing the total to 18 bills enacted this year. A federal judge has already blocked the enforcement of a political deepfake bill signed in the first batch of bills.

  • States are already beginning to pre-file bills for the 2025 session, and Connecticut Sen. Maroney (D) plans to unveil next year’s version of his comprehensive AI regulatory bill soon, after a veto threat stopped this year’s effort.

California solidified its position as the leading state in the volume of AI-related laws after Gov. Newsom signed 18 AI bills into law in September. However, Newsom vetoed the highest-profile AI bill of the year (CA SB 1047), which was aimed at the safety of frontier AI models. Let’s take a look at which bills were enacted, which were vetoed, and what it means for AI policy in 2025.

First, we must address the veto of Sen. Wiener’s SB 1047. As we’ve chronicled in this publication, the Safe and Secure Innovation for Frontier AI Models Act would have required developers of the most advanced AI models to implement safety and security protocols and to be able to shut down closed models in case of catastrophic safety concerns. Gov. Newsom’s veto did not come as a huge surprise after an intense lobbying campaign against the bill from the tech industry. Additional late-stage opposition came from the California delegation in Congress, led by Speaker Emerita Nancy Pelosi, whose congressional seat Sen. Wiener has his eye on when the former speaker retires. Pelosi would reportedly prefer that her daughter succeed her in Congress.

What was a surprise was Gov. Newsom’s stated reasoning for the veto. In his veto message, the governor wrote, “By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology. Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047.” In other words, the governor criticized the bill for not going far enough to regulate the safety of AI models, because it applies only to the largest, most expensive models and not to smaller ones. This was particularly puzzling because the high-profile bill went through several major rounds of amendments in negotiations with industry throughout the year, yet it was never reported that the governor participated in those negotiations or made his opinions known to those working on the bill.

Nonetheless, the governor signed into law nearly all of the other AI-related bills sent to his desk. In total, California enacted 18 AI-related laws this month. Unlike SB 1047, which took a model-based approach to regulating AI, the bills signed into law this year focus on the use and conduct of AI, either placing new requirements on those using AI or ensuring that activities that are already illegal remain covered when AI is involved. Last month, we published a detailed overview of the two dozen or so AI-related bills sent to the governor, but here’s a quick rundown of the 18 bills signed into law in California.

One of these new laws applies to the developers (as opposed to the deployers) of AI. The law (CA AB 2013) goes into effect in 2026 and will require generative AI developers to post documentation on their websites that provides, among other information, a summary of the datasets used in the development and training of the AI technology or service. Notably, the law applies to both original developers and those making “substantial modifications” to a generative AI technology or service.

Another new law (CA AB 2885) establishes a uniform definition for “artificial intelligence” for existing provisions in state law relating to studies of AI and inventories of AI use in state government. The law defines AI to mean an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.

As in most states, the largest share of California's new AI-related laws addresses deepfakes and synthetic media. Laws on sexual deepfakes include restrictions on non-consensual sexual deepfakes (CA SB 896, CA SB 926), provisions making California the 19th state to cover AI-generated content under its child sexual abuse material laws (CA AB 1831, CA SB 1381), and new requirements for social media companies to remove reported sexual deepfake media from their platforms (CA SB 981). Gov. Newsom also signed three bills into law that address political deepfakes during an election: mandating disclosures for political deepfakes (CA AB 2355), requiring online platforms to label and block deceptive political deepfakes (CA AB 2655), and prohibiting political deepfakes in a time period around an election (CA AB 2839; see the note below about a court injunction blocking this law). California was one of the first states to address sexual (CA AB 602) and political deepfakes (CA AB 730) in 2019.

New deepfake laws focused on transparency require generative AI providers to make AI detection tools available to users (CA SB 942) and auto-dialing services to announce whether a prerecorded phone message was generated with AI (CA AB 2905). Gov. Newsom also signed two bills into law to protect entertainers from misuse of their digital replicas, creating liability for the use of a deceased personality's likeness without consent (CA AB 1836) and making contracts for the use of a performer's digital replica unenforceable under certain conditions (CA AB 2602).

New laws address the health care industry by requiring health care providers that use generative AI to communicate with patients to disclose the AI use and give the patient the option to communicate directly with a human (CA AB 3030), and by requiring health insurers that use AI in decision-making to follow additional requirements (CA SB 1120). In the education space, new laws require AI literacy to be considered in the K-12 curriculum (CA AB 2876) and establish a working group to provide guidance on AI in public schools (CA SB 1288).

Besides SB 1047, Gov. Newsom wielded his veto pen to kill regulatory proposals in two distinct categories: bills making it harder for government agencies to use AI and bills restricting autonomous vehicles. Newsom vetoed a bill (CA SB 892) that would have instructed a state agency to create an “automated decision procurement standard” and prohibited any state agency from using an automated decision tool until those standards had been adopted. Gov. Newsom also vetoed a bill (CA SB 1220) that would have blocked state and local government agencies from contracting with call centers that use AI or automated decision systems to eliminate jobs or core job functions.

Finally, Gov. Newsom sided with the autonomous vehicle (AV) industry by vetoing a proposal (CA AB 2286) to significantly limit the testing of self-driving trucks and a bill (CA AB 3061) that would have created new data reporting requirements for AV companies.

By our count, California has now enacted 24 separate laws and an executive order related to AI since 2018. While California is certainly the leader in AI policy by volume, the state has yet to enact broad laws on automated decision-making as Colorado (CO SB 205) did this year. However, lawmakers in the Golden State might be leaving that issue up to regulators, as the California Privacy Protection Agency is in the early stages of promulgating rules related to businesses' use of “automated decision making technology.”

Lawmakers from other states will take time to study the dozens of AI-related laws now on the books in California, but will Sen. Wiener be back next year with a broad AI safety bill? “It’s too soon to say exactly what we’re going to do,” Wiener told the press. “We’re absolutely committed to promoting AI safety.”

Recent Developments

In the News

  • Doctored Image Appears to Violate Indiana Electoral Deepfake Law: On Monday, Republican gubernatorial candidate Mike Braun released a television ad containing a doctored image of his Democratic opponent Jennifer McCormick's supporters. The image appeared to violate the state’s new law (IN HB 1133), which prohibits the use of deepfakes in political advertising without a disclaimer and allows the person depicted to file a civil action. The Braun campaign later added a disclaimer to the ad.


Major Policy Action 

  • California: On Wednesday, a federal judge granted a preliminary injunction to block the state’s newly enacted law (CA AB 2839), which prohibits deceptive political deepfakes in a time period around an election. The judge found that the law likely violates the First Amendment, writing, “Most of AB 2839 acts as a hammer instead of a scalpel, serving as a blunt tool that hinders humorous expression and unconstitutionally stifles the free and unfettered exchange of ideas which is so vital to American democratic debate.”

  • Alabama: On Thursday, the AI Task Force reviewed preliminary policy recommendations ahead of a report due in November. The group is focused on policy frameworks for responsible AI use by state government, not necessarily statutory solutions. The task force is expected to finalize its recommendations in late October with a final report in November.


Notable Proposals
 

  • Connecticut: Senator James Maroney (D) says he has been working on a new comprehensive artificial intelligence regulation bill for next session that will be unveiled next month. His bill this session (CT SB 2) passed the Senate but stalled in the House due to a lack of support from Gov. Ned Lamont (D), who expressed concern about the state being an outlier.

  • New Jersey: Senator Troy Singleton (D) has filed a bill (NJ SB 3742) that would require artificial intelligence companies to conduct safety tests and report the results to the Office of Information Technology. Earlier this year, Singleton sponsored a bill (NJ SB 1438) that would develop guidelines for AI use in state government and create a task force to study artificial intelligence and draft an artificial intelligence bill of rights, but that bill has yet to move.
