Trendsetter Alert: California's 33 New AI Bills Explained
Key highlights this week:
We’re tracking 737 bills in 47 states related to AI during the 2025 legislative session.
A hearing over Connecticut’s algorithmic discrimination bill turned contentious with pushback from the governor’s office. We have a new resource page you should bookmark to track the 25 similar bills introduced in 15 states.
Lawmakers in Vermont introduced a package of bills on AI this week.
New York Governor Hochul announced the Empire AI consortium has received $165 million in new funding and gained three new university participants.
California lawmakers enacted nearly 20 AI-related bills last year, although Gov. Gavin Newsom ultimately vetoed the most high-profile bill — Sen. Scott Wiener’s AI safety proposal (CA SB 1047). This year lawmakers in California have returned with even more ideas on how to regulate AI, introducing a flurry of bills ahead of last Friday’s bill introduction deadline. Most of the proposed bills have a narrow scope but take novel approaches to addressing some of the concerns raised by the emerging technology.
Before we dig into some of the 33 AI-related bills lawmakers in California have introduced this session, note that California has a full-time legislature: the real action on many of these bills won’t get going until they approach the crossover deadline in June, with the session stretching into the fall. Most states have part-time legislatures, which is why you’re seeing such a flurry of activity elsewhere. Most of those states will be adjourned for the year by May or June (Virginia has already adjourned sine die), so they need to act faster than states like California or New York with their full-time legislatures. But California is also a trendsetter, and the bills introduced in the Golden State this year will likely be debated in those quick legislative sessions of other states in 2026. That’s why it’s worth taking a closer look at the various AI-related bills California lawmakers have introduced this year.
Artificial intelligence models are trained on vast reams of data, much of which comes from publicly available sources. In many cases, the data is purchased from brokers who have collected information from consumers who never consented to such use. Senator Josh Becker (D) introduced a bill (CA SB 468) that would impose a duty on an AI deployer to protect personal information. The measure outlines comprehensive information security program requirements a deployer must meet to avoid a violation under the Unfair Competition Law. Attorney General Rob Bonta (D) issued an advisory last month reminding AI developers and users that the collection and processing of consumer data could be subject to provisions of the California Consumer Privacy Act.
AI developers have also been accused of using copyrighted material to train models, which has led to lawsuits from major publications. Assemblymember Rebecca Bauer-Kahan (D) has introduced a measure (CA AB 412) that would require the developer of a generative AI system to respond to requests from copyright holders inquiring about any copyrighted material used to train models. Last session she introduced a proposal (CA AB 3204) that would have protected personal information used by AI models by requiring “data digesters” to register with the California Privacy Protection Agency.
Children’s online protection has been a trending topic over the last two sessions, extending to artificial intelligence usage. Assemblymember Bauer-Kahan has introduced the Leading Ethical AI Development (LEAD) for Kids Act (CA AB 1064), which would require the developer of an AI system likely to be used by kids to register the product with a LEAD for Kids Standards Board. Developers would have to create an AI information label for the product and take steps to ensure children cannot access an AI system that poses a prohibited risk. The measure also requires consent to train an AI system with the personal data of a child.
Assemblymember Bauer-Kahan also has a bill (CA AB 1018) that would regulate the use of automated decision systems used to make consequential decisions similar to a measure she introduced but failed to pass last session (CA AB 2930). The proposal would require deployers to conduct performance evaluations, subject the system to third-party audits, and provide certain disclosures to the subject of a consequential decision with an opportunity to opt out.
Chatbots specifically have drawn increased scrutiny as their use has become more pervasive. California has a law, enacted in 2018, that requires disclosure of interactions with bots, but only if the content is intended to incentivize a purchase or influence a vote in an election. A new bill (CA AB 410) would require disclosure of any interaction with a bot whatsoever. Another bill (CA AB 489) would prohibit a bot from implying it holds the licensure required to practice health care.
The use of rent-setting algorithms has become a popular target for lawmakers this session, and those concerns have extended to other price-setting algorithms. Two bills (CA SB 259 and CA SB 295) would prohibit pricing algorithms that use competitor data or engage in “affinity-based algorithmic pricing,” a method that differentiates pricing for a group of consumers based on their personal data.
Other bills introduced in February would address AI used to write police reports (CA SB 524), prohibit AI from being used to plan critical infrastructure (CA SB 833), and create a “mental health and artificial intelligence working group” (CA SB 579). There are also a number of “shell bills” to address artificial intelligence that will likely be gutted and amended to provide substantive text.
One of those shell bills is widely expected to become this year’s version of Sen. Wiener’s AI safety legislation, which he has pledged to revive this session after his proposal was vetoed last year. Gov. Newsom formed an AI working group to make recommendations this summer, and Sen. Wiener may wait until then to gain a better understanding of how to craft a bill that can win approval, telling reporters back in January, “Once those recommendations come out, we’re going to be taking a close look at them, and we could potentially include some or all of those recommendations in our bill.” We expect this bill to once again be the most high-profile AI legislative proposal in the states later this year.
Recent Developments
In the News
Claude 3.7: That’s right, another week and another AI model release. On Monday, Anthropic released its new AI model, Claude 3.7 Sonnet. Interestingly, the release is both an upgrade to the underlying model and the introduction of a “reasoning” capability, which has been all the rage these past few months. Depending on the question asked, the model decides whether to give a straightforward answer or to spend a little extra time “thinking” for a more elaborate response.
Major Policy Action
Connecticut: A hearing over the comprehensive AI proposal (CT SB 2) pushed by Sen. James Maroney (D) turned contentious with pushback from Gov. Ned Lamont’s administration and House Speaker Matt Ritter (D). Dan O’Keefe, commissioner of the state Department of Economic and Community Development, argued the bill would make the state an outlier and that the “focus for now should be on the economic development elements of this,” adding, “Let’s be thoughtful as we understand what the impact of AI is before we attempt to regulate it.”
Georgia: A proposed bill (GA SB 37) to guide state government use of AI did not receive a committee vote this week after Georgia Republican Congressman Rick McCormick asked the panel not to take action and to instead wait on federal action. “I’m not saying we’re not going to pass it because of what Congressman McCormick said,” said Sen. Brandon Beach (R), “but I do want to talk to you offline, because I do have some concerns that if each state does something different, do we have any continuity, if you will, from the overall United States?”
New York: Last Friday, Gov. Kathy Hochul (D) announced the Empire AI consortium has received $165 million in new funding and gained three new university participants. More than $400 million in funding has been contributed by the state and other consortium members since the project was announced last fall.
Utah: The House passed a measure (UT HB 452) that would require disclosure to a consumer that a mental health chatbot is not a human and would prohibit the chatbot from selling or sharing individually identifiable health information or user input with a third party. The bill would also prohibit advertising a specific product in a conversation unless the ad is disclosed as an advertisement.
Notable Proposals
Missouri: Rep. Phil Amato (R) introduced a bill this week (MO HB 1462) that would clarify AI is not sentient and that harm caused by an AI system's operation, output, or recommendation is the responsibility of the owner or user. The measure would also allow courts to pierce the corporate veil to hold a company accountable for harm caused by AI.
Nevada: A bill (NV AB 271) introduced this week would prohibit any equipment used for voting, ballot processing, or ballot counting from using artificial intelligence. Secretary of State Cisco Aguilar (D) has raised concerns about the use of AI in elections, although primarily regarding political deepfakes.
Pennsylvania: A bipartisan group of senators introduced a measure (PA SB 355) to prohibit using bots to purchase tickets for resale on the secondary market. The bill would align state law with a federal law outlawing the use of bots for ticket resale and would create a civil action for damages.
Vermont: House members introduced a package of bills on AI this week, many of which are still being drafted, including measures to protect against bias in automated decision systems (VT HB 340), regulate inherently dangerous artificial intelligence systems (VT HB 341), require registration of AI systems (VT HB 365), prohibit dynamic pricing (VT HB 371), protect against digital replicas (VT HB 387), and regulate rent-setting algorithms (VT HB 389).