Dozens of AI Laws Go Into Effect
Key highlights this week:
We’re currently tracking 673 AI-related bills in 45 states this year, 72 of which have been enacted into law.
Louisiana’s governor vetoed two bills lawmakers had passed to limit deepfakes, citing the First Amendment. Wyoming lawmakers expressed similar misgivings during a committee hearing.
Lawmakers in Georgia and Kentucky held AI study meetings.
And dozens of AI laws enacted earlier this year went into effect on July 1; those laws are the focus of this week’s deep dive below.
Earlier this spring, state lawmakers began addressing artificial intelligence in public policy, passing legislation on deepfakes, setting AI policies for state government and schools, and even enacting a few comprehensive regulatory bills. This summer, over two dozen of those bills went into effect, and we will see how some of these initial attempts at imposing guardrails on the new technology play out, and what unintended consequences might arise.
Legislatures in all but seven states have adjourned their regular sessions for the year, having passed over 70 AI-related bills so far. While Utah’s major AI regulation law went into effect back in May, and Colorado’s comprehensive bill won’t take effect until 2026, many of the narrowly focused bills passed earlier this year went into effect on July 1.
Unsurprisingly, deepfakes were the most popular AI-related issue lawmakers addressed this year. With election season upon us, lawmakers sought to address the potential proliferation of deepfake content in political ads meant to mislead voters. Some states already had political deepfake restrictions on the books, but starting July 1, new laws in Colorado (CO HB 1147), Florida (FL HB 919), and Mississippi (MS SB 2577) require disclaimers on deepfake political content to avoid civil liability. The Colorado law applies to content 60 days before a primary or 90 days before a general election, Mississippi’s applies to content 90 days before an election, and Florida’s law applies to political content at any time. Minnesota (MN HF 4772) tweaked its political deepfake law so that it applies during the absentee voting period, uses a “reckless disregard” standard on those disseminating content, and exempts broadcasters and cable television systems.
Lawmakers also sought to remove any ambiguity in prosecutions over sexual deepfake content.
New laws in Idaho (ID HB 465), Mississippi (MS HB 1126), South Dakota (SD 79), and Tennessee (TN HB 2163) add synthetic content to laws prohibiting sexual depictions of children. Starting on July 1, it is now a crime in Idaho (ID HB 575) and Iowa (IA HF 2240) to use explicit synthetic media to harass, humiliate, or blackmail someone.
Tennessee’s ELVIS Act (TN HB 2091) also went into effect this month. The law is the first of its kind to protect music performers’ voices from the misuse of artificial intelligence, giving every individual a property right in the use of their name, photograph, voice, or likeness.
While attempting to limit the potential downsides of AI, lawmakers also want to capture the technology’s promise of new innovation and don’t want their states to be left behind. This month, new laws went into effect that integrate AI into education and provide incentives for small business innovation. In Connecticut, a new law (CT HB 5524) creates an AI pilot program to award grants to school boards to help educators and students use AI in the classroom and offers professional development for educators. A Florida law (FL HB 1361) creates grants to school districts to implement AI in support of students and teachers.
Maryland lawmakers (MD HB 582/SB 473) established the Pava LaPere Innovation Acceleration Grant Program and the Baltimore Innovation Initiative Pilot Program to incentivize small business innovation in technology with a preference for products that integrate AI into health care and biotechnology.
Finally, states have looked inward to make sure they are using AI properly and effectively. Hawaii lawmakers created a two-year program (HI SB 2284) to develop a wildfire forecast system using artificial intelligence after the devastating Maui wildfires in 2023. And a Maryland law effective this month (MD HB 1271/SB 818) requires agencies to conduct inventories and assessments of AI systems and requires AI policies at universities and colleges.
As states navigate the complexities of regulating artificial intelligence, the implementation of these new laws will serve as a critical litmus test for future legislative efforts. Policymakers and the public alike will closely monitor their effectiveness and unintended consequences, setting the stage for a more informed and adaptive approach to AI governance in the coming years. The experiences and lessons learned from these pioneering states will undoubtedly shape the next wave of AI policy, striving to balance innovation with responsible oversight.
Recent Developments
In the News
Microsoft & Apple Drop OpenAI Board Seats: Both Microsoft and Apple have reportedly decided against adding participants to OpenAI’s Board as non-voting observers. Media speculation ties this decision to anti-trust pressure from European regulators.
Major Policy Action
Louisiana: Last week, Gov. Landry (R) vetoed two deepfake bills — a political deepfakes bill (LA HB 154) and another that would have required watermarks to identify deepfakes (LA SB 97). “While I applaud the efforts to prevent false political attacks, I believe this bill creates serious First Amendment concerns as it relates to emerging technologies,” the governor wrote in his veto message. “The law is far from settled on this issue, and I believe more information is needed before such regulations are enshrined into law.”
Georgia: Lawmakers on the Senate Study Committee on Artificial Intelligence held their first meeting in late June with plans for another meeting on July 17 at Georgia Tech. The panel plans to hold seven or eight hearings this summer and fall, including one in Augusta, home to the Georgia Cyber Innovation & Training Center, with a deadline for legislative recommendations set for December 1.
Kentucky: On Tuesday, lawmakers on the General Assembly’s Artificial Intelligence Task Force held their first meeting, listening to presentations on the history of AI, the ways state governments use the technology, and legislation in other states. The next meeting is scheduled for August 13.
Wyoming: The Select Committee on Blockchain, Financial Technology and Digital Innovation Technology met last week to address a bill (SF 51) on the dissemination of misleading synthetic media, with lawmakers on the panel expressing concerns that the bill is too broad and could infringe on free speech protections. The committee has suggested creating an exemption for satire and parody and changing the penalties from criminal to civil, amendments that could be taken up at the next meeting on September 16-17 at the University of Wyoming.
Utah: The Commerce Department officially opened its Office of Artificial Intelligence Policy, although the office had been in operation since May. It was created by legislation passed this spring, which also established an AI learning lab, opened this week, that gives businesses a regulatory sandbox to innovate without fear of heavy-handed restrictions.