The AI Balancing Act: States Torn Between Regulation and Innovation
Key highlights this week:
We’re tracking 835 bills in 48 states related to AI during the 2025 legislative session.
South Dakota sends a political deepfake bill to the governor.
A study committee bill is heading to the Mississippi governor’s desk.
Lawmakers in Texas will debate a bill to use AI to filter out objectionable content in school libraries.
When we previewed the 2024 legislative session, we noted the challenge lawmakers face in regulating artificial intelligence without stifling a nascent industry with tremendous potential. Fifteen months later, those tensions have become even more apparent. Efforts to rein in AI have been hampered by concerns about its effect on the economy, national security, and geopolitics.
When state lawmakers initially studied AI, many used the same refrain to describe their preferred approach — set broad guardrails but use a light touch. However, Colorado remains the only state to have enacted a comprehensive AI law (CO SB 205), and when he signed it into law, Gov. Jared Polis (D) expressed strong reservations over the effect it would have on small AI startups, writing that “regulation that is applied at the state level in a patchwork across the country can have the effect to tamper innovation and deter competition in an open market.” Importantly, the Colorado law is not yet in effect, and won’t be until at least February 2026.
That tension played out last year in California as well, when Sen. Scott Wiener (D) pushed an AI safety bill through the legislature only to have it vetoed by Gov. Gavin Newsom (D). The governor had warned about overregulating an industry where 32 of the top 50 firms are based in California. Sen. Wiener has filed another AI bill this session (CA SB 53), but the initial draft pulls back significantly on transparency mandates for leading AI developers, providing only whistleblower protections for AI employees. Sen. Wiener seems likely to wait for guidance from Gov. Newsom’s AI working group, which is expected to release recommendations this summer.
The threat of a veto ultimately derailed efforts by Connecticut Sen. James Maroney (D) last year to enact comprehensive AI regulation. After his measure passed the Senate, it never gained the support of Gov. Ned Lamont (D) and stalled out in the House. This year Sen. Maroney is back with a new bill (CT SB 2) meant to address the concerns raised about last year’s bill. But at a hearing earlier this month, his attempts to rein in AI were met with pushback from Dan O'Keefe, Lamont's commissioner of the state Department of Economic and Community Development.
"I think we're too early here," said O’Keefe in front of the General Law Committee. "It's super unclear to me why we want to be that early. I think our focus for now should be on the economic development elements of this. The workforce training elements of this. Let's be thoughtful as we understand what the impact of AI is before we attempt to regulate it."
Gov. Lamont has expressed his own reservations about AI regulation, particularly when it comes to vague obligations that are difficult to meet. The governor has his own bill (CT SB 1249) that takes an economic development approach to AI. The bill would create an AI regulatory sandbox program and an AI and Quantum Technology Investment Fund.
It is not just blue states wrestling with this tension. In Texas, Rep. Giovanni Capriglione (R) hoped to set a red-state template for AI regulation with TRAIGA (TX HB 1709). Despite attempts to make the bill more industry-friendly, it has faced severe criticism from a coalition of free-market tech groups, and we’re already hearing the bill may not be going anywhere. Rep. Brian Harrison (R) has filed his own bill (TX HB 3808) that would create an AI learning laboratory similar to the regulatory sandbox set up in Utah, and we’re hearing the House will instead focus on narrower bills targeting specific AI use cases.
In Virginia, the AI industry is waiting to see if Gov. Glenn Youngkin (R) will sign or veto an algorithmic discrimination bill (VA HB 2094) that the General Assembly passed ahead of a March 24 deadline. New York could also be a battleground for regulation versus economic development as Assemblyman Alex Bores (D) has introduced an AI safety bill (NY AB 6453) in a package of AI legislation. But he could face pushback from Gov. Kathy Hochul (D), who has touted a state investment of $90 million into an AI consortium and faces a tough re-election campaign in 2026.
Lawmakers advocating for comprehensive AI bills are facing similar headwinds in Arkansas (AR SB 258), Georgia (GA SB 167), Hawaii (HI SB 59), Iowa (IA HSB 294), Illinois (IL HB 3506 and IL SB 2203), Massachusetts (MA HB 94 and MA HB 97), Maryland (MD HB 1331 and MD SB 936), Nebraska (NE LB 642), New Mexico (NM HB 60), Nevada (NV SB 199), Oklahoma (OK HB 1916), and Vermont (VT HB 340 and VT HB 341) as well.
Looming over all of these states is the uncertain approach the White House and Congress will take toward AI. The Trump Administration has favored a laissez-faire attitude toward the technology, one likely reinforced by the emergence of Chinese competition in the industry. As more states introduce AI-related bills, the divide between regulation and innovation will continue to shape legislative efforts, with potential federal preemption or industry self-regulation playing a crucial role in the future of AI governance.
Recent Developments
In the News
Tech Groups Decry NIST Cuts: On Monday, a group of tech trade associations sent a letter to Commerce Secretary Howard Lutnick regarding job cuts at the National Institute of Standards and Technology (NIST), which houses the U.S. AI Safety Institute (US AISI). The group warned that “downsizing NIST or eliminating these initiatives will have ramifications for the ability of the American AI industry to continue to lead globally.”
Major Policy Action
South Dakota: On Tuesday, lawmakers sent a political deepfake bill (SD SB 164) to the governor for his signature. The bill would prohibit the use of a deepfake with the intent to injure a candidate within 90 days of an election unless it includes a disclaimer.
Arkansas: On Thursday, sponsor Sen. Clint Penzo (R) stripped AI regulation provisions from his Arkansas Digital Responsibility, Safety, and Trust Act (AR SB 258), although those provisions could emerge as an amendment to another bill. The original version of the bill attempted to combine AI measures with comprehensive privacy protections but received pushback from Senate Transportation, Technology & Legislative Affairs Committee members earlier this month over concerns the bill was too unwieldy.
Colorado: A bill (CO HB 1212) to provide whistleblower protections for AI employees was amended to narrow the scope of the law to “large artificial intelligence developers.” It defines those as developers that employ at least five independent contractors annually, have trained a foundation model in the past five years at a computational cost of at least $20 million, and have trained one or more foundation models within any 12-month period in the past five years at a total computational cost of at least $100 million.
Mississippi: On Monday, lawmakers sent an AI study bill (MS SB 2426) to the governor. The bill would establish the Artificial Intelligence Regulation (AIR) Task Force, with annual reports due until the task force is dissolved in 2028.
Notable Proposals
Texas: On Tuesday, Rep. Hillary Hickland (R) introduced a bill (TX HB 4448) that would allow schools to use AI to filter objectionable content out of school libraries. The Senate is considering a separate measure that would give school boards more say over which books are in school libraries and create new ways for parents to have books removed.
North Carolina: On Tuesday, Rep. Harry Warren (R) introduced a bill (NC HB 375) that would prohibit political deepfakes without a disclaimer within 90 days of an election and ban AI-generated child pornography. The Artificial Intelligence and Synthetic Media Act would also prohibit disclosing a fabricated image of another person without consent, knowing it would cause harm to the depicted person, and would require certain disclosures for chatbot interactions in certain regulated industries and occupations, similar to Utah’s law.