States Forge Ahead: An Update on Comprehensive AI Bills

Key highlights this week:

  • We’re currently tracking 581 state bills (plus another 101 congressional bills) related to AI this year. 

  • Alabama lawmakers sent two sexual deepfake bills to the governor for her signature. 

  • Colorado and Minnesota both advanced political deepfake legislation. 

  • The AG’s office in Massachusetts issued a legal advisory indicating that consumer protection laws apply to the developers, suppliers, and users of AI technology.

  • Finally, legislative committees in California approved a handful of AI-related bills, including a comprehensive AI bill, which is the subject of this week’s deep dive. 


Congressional inaction on artificial intelligence has left a vacuum for states to fill. State lawmakers have been reluctant to regulate the nascent industry so heavily that they stifle innovation, but they have also expressed a desire to act quickly and put broad guardrails in place to keep the technology from harming consumers. A few states have proposed landmark comprehensive AI legislation that could provide an early template for others to follow. With sessions winding down in many states, some of those bills are slowly working their way through the legislative process, already much amended as industry provides feedback. Today, we’ll provide an update on where things stand with a few key comprehensive AI bills. 

California lawmakers have introduced a spate of AI legislation, but one bill from February (CA SB 1047) attempts to regulate all large AI models under a new government agency, the Frontier Model Division. This week, the Senate Committee on Government Oversight amended the bill and scheduled a hearing for April 23. Now known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (the title was amended to replace “Systems” with “Models”), the bill changes how a developer certifies that a model is safe for use. Instead of asking developers to make a “positive safety determination” about a covered model, the bill now provides for a “limited duty exemption” when the developer can reasonably exclude the possibility that a covered model has a hazardous capability, accounting for a reasonable margin for safety and the possibility of post-training modifications. 

The bill also adds a requirement that developers detail how their testing procedures address the possibility that a covered model could be used or altered in a dangerous way. Lawmakers amended the bill to allow the attorney general to pursue punitive damages and to subject certification submissions to the penalty of perjury. The bill now includes a process for whistleblowers employed by a developer to disclose that a model is out of compliance. Finally, it tasks the Frontier Model Division with offering technical guidance to developers. California has a full-time legislature that typically sits for most of the year, so lawmakers in the Golden State have more time to move comprehensive AI legislation through the process than some of their peers in part-time legislatures. 

For several months, Connecticut Sen. James Maroney (D) chaired a task force on AI that resulted in much-anticipated legislation (CT SB 2) introduced in February. After considering feedback from stakeholders, lawmakers amended the bill in March to clarify the roles of certain developers and deployers of AI systems and to add the Commissioner of Consumer Protection to the oversight process. This week, lawmakers walked back some of those changes with a floor amendment adopted by voice vote, sending the bill to the Judiciary Committee. The amendment removes the Commissioner from oversight and leaves enforcement to the Attorney General’s office. The amended bill also further clarifies obligations for developers and deployers of AI systems and requires that only developers of certain AI systems that produce synthetic digital content, not deployers, disclose that the content has been manipulated. 

The amended bill adds a provision allowing consumers to appeal adverse consequential decisions made by AI and modifies the definition of “consequential decision” to cover criminal sentencing and plea analysis as well as decisions about utilities. The definition of “high-risk artificial intelligence system” was amended to exclude certain systems intended to perform narrow procedural tasks, improve the results of an activity performed by an individual, or detect decision-making patterns, as well as certain specified technologies such as anti-malware, anti-virus software, calculators, databases, data storage, firewalls, Internet domain registration, website loading, networking, robocall filtering, spam filtering, spell-checking, spreadsheets, web caching, and web hosting. 

Senate leaders sound confident they can get the bill through their chamber intact, despite objections from industry. However, the bill’s support in the House and with Gov. Ned Lamont (D) is less clear. And proponents of comprehensive AI legislation are running up against the Connecticut legislature’s scheduled May 8 adjournment. 

Colorado Sen. Robert Rodriguez (D), who sat on the multi-state AI study group with Connecticut Sen. Maroney, introduced an AI bill (CO SB 205) last week that is similar to Connecticut’s proposal, with obligations on developers and deployers of certain AI systems. Developers would have to ensure that synthetic digital content created by general purpose models is detectable as synthetic, while deployers would be required to disclose to consumers that such content has been artificially generated or manipulated. The law would be enforceable by the attorney general and district attorneys, with a 60-day right to cure in the first year of implementation. The Senate Judiciary Committee will hold a hearing on the bill on April 24. But as in Connecticut, supporters of comprehensive AI legislation in Colorado will need to move quickly, with the legislature set to adjourn for the year on May 8. 

Utah is the only state so far to enact legislation that could be classified as a comprehensive AI law (UT SB 149, signed into law last month), although it is much narrower in scope than the proposals outlined above. The Utah law creates a new government agency to administer an artificial intelligence learning laboratory program, establishes liability for AI that violates consumer protection laws, and requires disclosures to consumers interacting with generative AI in certain regulated industries.

Will we see a comprehensive AI law passed this year that’s broader in scope than Utah’s? We may soon find out. Eyes will be on Connecticut to see whether lawmakers can move their proposal forward before adjournment in a few short weeks. After that, the proposals in California are likely to draw much of the attention into the summer. 

Recent Developments

In the News

  • Meta’s Llama 3: On Thursday, Meta released Llama 3, the latest version of its open AI model. The model is expected to be on par with GPT-3.5 and aims to improve performance on multi-step tasks and to refuse fewer requests. It will power the newly announced Meta AI assistant, now available in Instagram, Facebook, WhatsApp, and Messenger.

  • State Solidarity on Comprehensive AI Legislation: On Thursday, lawmakers in several states held a press conference to discuss their efforts to pass AI legislation. They highlighted the battle between industry groups and their opponents, including civil rights groups, labor unions, and consumer advocacy groups. 


Major Policy Action 

  • Alabama: On Tuesday, lawmakers sent two sexual deepfake bills to the governor for her signature. The first bill (AL HB 161) would make it unlawful to create or alter artificially generated sexual images without consent if a reasonable person would believe they actually depict an identifiable individual, and the second (AL HB 168) would add digitally created or altered visual depictions to the state’s child pornography laws. Both bills exempt from liability Internet service providers, search engines, and cloud service providers that merely provide access to such content over the internet. 

  • Massachusetts: On Tuesday, the Attorney General’s office issued a legal advisory indicating that consumer protection laws apply to the developers, suppliers, and users of AI technology. The advisory stresses that it is unfair or deceptive to falsely advertise the quality of an AI system, to supply an AI system that is defective, unusable, or impractical for its advertised purpose, or to misrepresent the system’s performance. The advisory also warns that using deepfake audio or video content to deceive another person can constitute fraud.

  • Colorado: On Thursday, a Senate committee amended a political deepfake bill (CO HB 1147) to change the standard for someone using deepfakes in election materials from “actual malice” to “reckless disregard.” The amended measure also limits the prohibition on political deepfakes without a disclaimer to within 60 days of a primary election or 90 days of a general election. The House approved the bill last month. 

  • Minnesota: On Thursday, the Senate passed an elections bill (MN HF 4772) that includes amendments to the state’s laws on the use of deepfakes in elections. The bill replaces the current law’s “reasonable knowledge” standard with a “reckless disregard” standard for whether the media was created by AI, extends the law to campaigning before primaries and nominating conventions and during absentee voting, and requires a candidate convicted of violating the law to forfeit their nomination or office. The House still needs to review the Senate changes before the bill heads to the desk of Gov. Tim Walz (D). 


Notable Proposals

  • Louisiana: On Monday, Rep. Michael Melerine (R) introduced a resolution (LA HCR 66) that would create a joint legislative committee to study how to regulate artificial intelligence in a way that takes advantage of the technology while limiting its risks. Last year, the legislature created the Joint Legislative Committee on Technology and Cybersecurity to study the impact of artificial intelligence on operations, procurement, and policy.

  • New York: On Monday, Gov. Kathy Hochul (D) announced a budget agreement with lawmakers that includes the establishment of “Empire AI,” a consortium that will create a state-of-the-art artificial intelligence computing center in Buffalo for use by New York colleges and universities. The budget would allocate $275 million for Empire AI, with another $125 million in artificial intelligence development funding to come from private and university sources. The budget is also expected to include provisions on political deepfakes.
