California's Blueprint for Responsible AI Governance

Key highlights this week:

  • We’re tracking 937 AI-related bills in 49 states during the 2025 legislative session.

  • Gov. Youngkin vetoes Virginia’s algorithmic discrimination bill.

  • Utah puts new disclosure requirements on mental health organizations using AI.

  • And South Dakota signs a political deepfake bill into law. 


After vetoing Sen. Wiener’s (D) high-profile AI safety bill (CA SB 1047) last September, California Gov. Newsom (D) formed an AI policy working group to develop responsible guardrails for the deployment of generative AI. Last week, the group released a draft report that could indicate the ceiling of AI regulation the governor of the nation’s most populous state, and the home of many of the AI breakthroughs rocking the world, would be willing to sign into law.

The Joint California Policy Working Group on AI Frontier Models is headed by three academic AI researchers — including the “godmother of AI,” Dr. Fei-Fei Li, who lobbied against Sen. Wiener’s AI safety bill last year. Nonetheless, perhaps the most surprising aspect of the draft report released last week is how closely its policy principles echo the final version of SB 1047 vetoed by Gov. Newsom. 

Why the report is important

This is particularly important because Sen. Wiener introduced a placeholder AI safety bill this year, which is expected to be amended to include AI safety provisions derived from the governor’s working group report. Sen. Wiener himself sounded encouraged by the draft report’s content, saying in a statement, “The brilliant research and thoughtful recommendations laid out in this report build on the urgent conversations around AI governance we began in the legislature last year, providing valuable insight from our brightest minds on how policymakers should be thinking about the future of AI systems. . . . this report affirms with cutting edge research that the rapid pace of technological advancement in AI means policymakers must act with haste to impose reasonable guardrails to mitigate foreseeable risks, and that California has a vital role to play in the regulation of these systems.”

Act soon

The report cautions policymakers that the window to act will not remain open indefinitely. It’s not quite a call to act immediately, but it indicates that acting sooner rather than later would be wise given the pace of technological advancement. The report warns, “If those who speculate about the most extreme risks are right — and we are uncertain if they will be — then the stakes and costs for inaction on frontier AI at this current moment are extremely high.”

The report’s eight principles

The group’s draft report presents itself as an evidence-based framework for governing frontier AI. It primarily lays out eight core principles for AI governance that policymakers in the state should consider when crafting new laws and regulations. 

My key takeaways from these principles: Regulating AI will be difficult but immensely important. Transparency should be your top priority, pursued through whistleblower protections, third-party evaluations, and sharing information with the public. If a model developer has a major incident, it should be reported, and policymakers should respond. No current threshold for deciding which AI models to regulate is perfect, but that shouldn’t stop you.

Laid out one by one, here is a paraphrase of each of the eight principles the report recommends policymakers adhere to.

  1. Balance benefits and risks: Frontier AI offers transformative benefits in fields like agriculture, medicine, and education, but requires safeguards against potentially severe harms.

  2. Use diverse evidence sources: AI policy should draw on empirical research, technical methods, historical case studies, modeling, and simulations rather than waiting for observed harms.

  3. Recognize path dependency: Early design and governance choices create enduring technological trajectories, making early policy windows critical.

  4. Align incentives: Effective governance leverages industry expertise while establishing independent verification mechanisms.

  5. Prioritize transparency: Given current information deficits, greater transparency advances accountability, competition, and public trust.

  6. Protect information sharing: Whistleblower protections, third-party evaluations, and public information sharing are essential for transparency.

  7. Establish reporting systems: Adverse-event reporting enables monitoring post-deployment impacts and identifying necessary regulatory updates.

  8. Design appropriate thresholds: Policy interventions should be scoped based on clear governance goals, with mechanisms to adapt thresholds over time.

How they got there

Unsurprisingly for academics, the report’s authors ground their recommendations in research and historical case studies. For example, “The foundations of the internet demonstrate how initial technical decisions can persist despite emerging risks, emphasizing the need to anticipate interconnected sociotechnical systems.” The report draws a lesson from the tobacco industry, where “suppressing independent research resulted in limited consumer choice and suboptimal social outcomes.” It also examines the energy industry’s early documentation of the risks of climate change, highlighting “the importance of independent assessment to avoid conflicts of interest.”

Transparency is key

If there’s one key theme of this report, it’s the importance of transparency. As we’ve discussed, transparency is the low-hanging fruit of AI regulation, a policy lever that poses minimal harm to development while providing major benefits. The report states in its fifth principle that “greater transparency, given current information deficits, can advance accountability, competition, and public trust.” It follows this up with the sixth principle outlining specific policies to increase transparency: (1) whistleblower protections, (2) third-party evaluations, and (3) public-facing information sharing.

The seventh principle of the report builds on the need for transparency by recommending the establishment of systems for reporting harmful incidents after AI deployment. Notably, these transparency measures are all elements of Sen. Wiener’s SB 1047 as well as New York Assemblymember Bores’ (D) recently introduced RAISE Act (NY AB 6453). The report argues that implementing industry-wide transparency standards would create a “race to the top” dynamic for AI safety practices. 

Thresholds are tough

Finally, the report acknowledges that defining in statute which models to regulate is a difficult task: “Thresholds are often imperfect but necessary tools to implement policy.” This is likely an ongoing task for policymakers, as the report states, “Given the pace of technological and societal change, policymakers should ensure that mechanisms are in place to adapt thresholds over time — not only by updating specific threshold values but also by revising or replacing metrics if needed.”

Current legislation typically relies on computational cost (measured in FLOP) and company size (measured in dollars spent) as thresholds to define which AI models are covered by regulation, but the report also examines other factors, such as model capabilities or the number of users.

What’s next?

Technically, this is only a draft report. The working group is accepting outside feedback through April 8, 2025, which it will incorporate into a final report. But keep an eye on Sen. Wiener’s SB 53. After praising the draft report, Sen. Wiener’s statement concluded, “My office is considering which recommendations could be incorporated into SB 53, and I invite all relevant stakeholders to engage with us in that process.” But even if the governor’s working group approves of these measures, would Gov. Newsom sign such a bill?

Recent Developments

Major Policy Action 

  • Virginia: On Monday, Gov. Glenn Youngkin (R) vetoed an AI regulatory bill (VA HB 2094) that would have made Virginia just the second state, following Colorado, to pass comprehensive AI legislation. In his veto statement, Gov. Youngkin cited the bill’s “rigid framework,” which would put an “especially onerous burden on smaller firms and startups.” However, the governor did sign into law a synthetic media fraud and defamation bill (VA HB 2124) and a bill to study autonomous vehicle use in the state (VA HB 2627).

  • Kentucky: On Monday, Gov. Beshear (D) signed into law a bill (KY SB 4) to establish standards for government use of AI, require disclosures, and create an AI governance committee. 

  • Utah: On Tuesday, Gov. Cox (R) signed three AI bills into law. UT HB 452 puts disclosure requirements on mental health organizations using AI to communicate with customers. UT SB 180 requires police reports created by AI to include a disclaimer and human review. And UT SB 332 extends the sunset date of last year’s AI Policy Act until July 1, 2027. 

  • South Dakota: On Tuesday, Gov. Rhoden (R) signed a political deepfake bill (SD SB 164) into law, which prohibits the use of a deepfake with the intent to injure a candidate within 90 days of the election without a disclaimer.

  • Colorado: On Wednesday, the House approved a proposal (CO HB 1004) that would prohibit the use of rent-setting algorithms by landlords. A similar measure died in the Senate last year after an amendment was inserted that some claimed would undermine the law’s effectiveness. 

Notable Proposals 

  • Arkansas: Rep. R. Scott Richardson (R) has introduced a bill (AR HB 1876) clarifying that when an individual generates content using generative AI, both the generated content and the model training data belong to that individual.

  • California: A Senate bill (CA SB 503) was amended this week to include provisions establishing an advisory board related to the use of AI in health care services. The proposal would require developers of AI models to test for biased impacts in the outputs produced by the specified AI model or AI system based on the health facility’s patient population.  

  • New York: Last Friday, Sen. Lea Webb (D) introduced a bill (NY S 6748) that would require every newspaper, magazine, or other publication, whether printed or electronically published, to provide a conspicuous notice at the top of the page or webpage indicating that AI was used to create the article, photo, video, or other visual content. She introduced a similar measure last year that had a companion bill.

  • North Carolina: Senate Democrats introduced an AI safety bill this week (NC SB 735) that would require developers of AI systems to implement certain security protocols and be able to perform a complete shutdown of a model to protect against critical harms. The measure would also make a developer liable for foreseeable critical harm from misuse or unintended uses, irrespective of whether the misuse involves fine-tuning.

Next

Texas AI Bill 2.0: Private Sector Gets a Reprieve