State AI Policy: 2025 Preview
Key highlights this week:
We’re currently tracking 692 bills in 45 states related to AI this year, 111 of which have been enacted into law.
The AI policy community speculates what a post-election world looks like for AI.
Meanwhile, study committees hear testimony on AI in Georgia and release a report with policy recommendations in Kentucky.
When President-elect Trump returns to the White House in January, he has vowed to repeal President Biden’s executive order on artificial intelligence and many expect him to take a more laissez-faire attitude toward AI regulation. Congressional leaders have been negotiating a potential AI regulation bill for the lame-duck session, but with less than two months remaining and a government funding bill looming, the odds of any comprehensive regulation of the technology passing are slim. That will continue to leave policymaking to states, which already saw a four-fold increase in AI-related legislation in 2024.
This year lawmakers predominantly took a conduct-based approach to AI. If distributing nonconsensual sexual images of someone is already illegal, then lawmakers ensured that it’s also illegal to do so using AI-generated images. California’s Sen. Wiener focused instead on regulating the AI model itself, to pressure model developers to adopt broader safety precautions to avoid catastrophic disasters. But after CA SB 1047’s failure, we expect state lawmakers to double down on regulating the uses of AI rather than the models behind it. Mercatus’ Dean Ball describes a use-based approach as creating “regulations for each anticipated downstream use of AI.”
Colorado’s law, which won’t go into effect until 2026, focuses on “reasonably foreseeable” risks of algorithmic discrimination and requires deployers of AI technology to implement a risk management policy and program and complete an impact assessment of the system. Colorado lawmakers have already acknowledged the need for changes to their first-of-its-kind AI law. While bigger, established tech companies have suggested minor tweaks, smaller startups have expressed frustration over compliance provisions, particularly proactive disclosure to consumers. Both will push lawmakers to narrow the law so that restrictions fall on higher-risk uses while general-purpose uses of AI face fewer requirements.
Connecticut Sen. James Maroney (D) has vowed to return next year with stronger legislation after his bill stalled in the House when Gov. Ned Lamont (D) threatened a veto. Sen. Maroney is committed to a “risk-based approach” to mitigating the harms from AI that makes consequential decisions. Gov. Lamont’s reservation was that the bill would make the state an outlier on AI regulation and hostile to startups; he argued for a multi-state approach instead. Meanwhile, Sen. Maroney has led a group of lawmakers from around the country to share ideas on artificial intelligence policy. Those members are expected to spearhead a multistate effort to regulate the technology using the Colorado law (CO SB 205), Connecticut’s proposal (CT SB 2), and the recent Texas draft bill as models for new bill introductions across the country come January. These will be the primary vehicles for our anticipated spread of the use-based approach to AI regulation.
However, regulating the model directly will still be a hot topic in 2025. Notably, the protection of personal data used as inputs into models has become an increasing concern. California enacted a bill (CA AB 2013) that requires disclosure of datasets used to train models. And another try at the model safety approach is not out of the question either. After CA SB 1047 fell short this year, Sen. Scott Wiener (D) has indicated he is working with the bill’s opponents to craft something that can pass next session. But he’ll need to get Gov. Gavin Newsom (D) on board after the governor vetoed this year’s proposal. California lawmakers could come back next year even more determined to pass AI legislation as a counterweight to the Trump Administration, with Gov. Newsom angling for the role of national opposition leader with an eye on a 2028 presidential run. And other states could pick up the mantle — the Transparency Coalition is pushing model legislation based on the California bill in half a dozen states, including Washington.
And don’t expect all those deepfake bills to fade away in 2025. Deepfakes continue to be a pressing concern, particularly with the proliferation of sexual deepfake content in high schools. Texas lawmakers have already prefiled several deepfake bills for next session, including one proposal to require age verification for deepfake generators (TX HB 421), and another (TX HB 1121) that would create liability for a software or app developer that fails to take reasonable precautions to prevent nonconsensual sexual deepfakes that depict an actual person.
New York Assemblymember Alex Bores (D) plans to introduce legislation requiring labeling of deepfake content using a global provenance standard known as C2PA. His proposals would require social media companies, political campaigns, generative AI content creators, and government agencies to embed C2PA in content to help determine authenticity. He is also working on a bill that would hold AI companies strictly liable for certain harms.
Other newer issues that could crop up include regulation of the use of AI in insurance and health care, prohibitions on using algorithms and competitor data to set rent prices, consumer protections against the use of AI in fraud, and requirements to disclose the use of AI in media.
This year’s state-level legislative efforts mark just the beginning of a multi-year journey to establish a balanced regulatory framework for artificial intelligence, one that can adapt to evolving technologies and shifting political landscapes.
Recent Developments
Major Policy Action
California: The California Privacy Protection Agency voted to formalize rulemaking for draft regulations for automated decisionmaking technology last week. The regulations would require businesses to notify consumers about the use of the technology, grant consumers a right to access information about that use, and provide a limited right to opt out.
Georgia: Last week, the House and Senate committees on artificial intelligence heard testimony on the use of AI for public safety at a joint meeting. The committees are scheduled to present recommendations on AI-based legislation early next month.
Kentucky: On Wednesday, the Artificial Intelligence Task Force, a special legislative committee, released its findings with eleven recommendations, including encouraging the responsible use of AI in elections and requesting the attorney general to review laws related to using someone’s likeness without permission. The recommendations were sent to legislative leaders and the Legislative Research Commission.
Notable Proposals
Pennsylvania: Last week, Rep. Johanny Cepeda-Freytiz (D) proposed legislation (PA HB 2660) that would require content created using artificial intelligence to have a watermark on 30 percent of the content with 50 percent opacity. The measure has 14 co-sponsors, all Democrats, and was requested by students at Wilson West Middle School in Berks County.
Ohio: On Tuesday, Sen. Louis Blessing (R) introduced a bill (OH SB 328) that would prohibit the use of a pricing algorithm that relies on nonpublic competitor data. New Jersey lawmakers are considering similar legislation regarding rent-setting algorithms, as are lawmakers in Illinois and New York.