Three Approaches to Regulating Artificial Intelligence

Key highlights this week:

  • We’re currently tracking 633 AI-related bills in 45 states this year (plus another 111 congressional bills), 51 of which have been enacted into law.

  • Gov. Polis signed Colorado’s landmark AI bill into law, although the governor included some reservations in his signing statement. 

  • California’s Senate approved legislation that would create a new government agency to regulate large AI models and certify compliance.

  • Governors in Alabama, Minnesota, and Arizona enacted political deepfake bills into law, bringing the number of states with political deepfake laws to 16. 

  • South Carolina and Arizona enacted sexual deepfake bills into law, joining 18 states with similar laws.

State lawmakers have tried different approaches to regulate AI, hoping to balance “broad guardrails” with a “soft touch.” In a recent publication, Dean Ball, a Research Fellow at the Mercatus Center specializing in technology and innovation, introduced a framework for approaching AI regulation focused on (1) conduct, (2) use, and (3) the model. I found this approach helpful because I’ve struggled to differentiate between use and conduct regulations. So, as a useful exercise for myself, and hopefully for you too, I’ll explain each of these three regulatory approaches and flesh them out with examples of recent state legislation. 

We’ll start with what Dean calls conduct-level regulation. This is a “broadly technology-neutral approach” that utilizes or expands existing laws to enforce against undesirable conduct involving AI. Essentially, “we recognize that murder is murder, theft is theft, and fraud is fraud, regardless of the technologies used.” The most obvious example at the state level is the category of sexual deepfakes. Producing and distributing a nonconsensual, sexual deepfake of an individual might already be considered a crime under state revenge porn laws or child pornography laws. However, lawmakers in 18 states have sought to remove any ambiguity by enacting new laws clarifying that AI-generated content is included under those crimes.

Similarly, many anti-fraud AI bills fall under the conduct-level regulation approach. The first substantive provision of Utah’s AI Policy Act clarifies that the use of an AI system is not a defense for violating the state’s current consumer protection laws. The technology to photoshop a person’s face or imitate a voice has been around for a while, and history is rife with complex fraud schemes designed to steal innocent victims’ money and property. But AI makes all of that substantially easier to replicate on a vast scale, which means our current consumer protection laws will need to be enforced and updated to reflect this change. The conduct-level approach to AI regulation is focused on the outcome of AI use.

The second approach is use-level regulation, which “creates regulations for each anticipated downstream use of AI.” This is where we see many of the bills that attempt to regulate AI in specific industries: hiring, insurance, legal, medical, or classroom settings. The European Union has favored this approach, and many state lawmakers have latched onto it as well.

Utah’s AI Policy Act takes this approach as well by requiring specific categories of professions, including architects, certified public accounting firms, and a variety of medical professionals such as psychologists, nurses, and substance abuse counselors, to disclose if a person is interacting with generative AI when they are providing regulated services. The regulation is not designed to prohibit the actual conduct or result of these services but instead requires disclosure of the use of AI. Lawmakers are not necessarily trying to prevent a medical professional from, for example, using AI to identify tumors, but they do want to regulate (e.g., disclosure, opt-outs, bias audits) when AI is involved. 

Other examples of use-level regulation include proposals narrowly tailored for certain acts. Lawmakers have raised concerns about the use of AI in hiring, so a bill in New Jersey (NJ AB 3854) would regulate the use of automated employment decision tools during the hiring process. Other bills target the health care industry, like legislation in Illinois (IL HB 1002) that would require the disclosure of the use of algorithms to diagnose a patient with the opportunity to opt out. A pair of New York bills (NY AB 9473/SB 9434) would prohibit the use of an algorithmic device by a landlord to determine the rent amounts. Dean argues that the downside of the use-level approach is that “policymakers have to fret about every potential use case of a general-purpose technology,” which is a particularly difficult task for a technology changing as rapidly as AI. 

Finally, the most straightforward approach is model-level regulation, which “creates formal oversight and regulatory approval for frontier AI models.” The most robust state legislative proposal to regulate at the model level is California’s SB 1047, which passed the Senate this week. That bill would create a new government agency to regulate large AI models and certify compliance. Colorado’s law also includes provisions that apply to AI developers — “a person doing business in the state that develops or intentionally and substantially modifies an AI system” — requiring them to make certain disclosures and documentation available to deployers, the attorney general, and the public. Developers must also use “reasonable care” to protect consumers from any known or “reasonably foreseeable” risks of algorithmic discrimination. The Colorado law stops short of establishing a new government agency to provide formal oversight of AI developers in the state, but it charges the state attorney general with enforcing the new requirements on developers.

Regulatory skeptics like Mercatus’ Dean Ball favor the conduct-level approach and argue that if a model-level approach is necessary, it should take place on the federal level instead of a patchwork of state laws (like how data privacy laws have developed over the past few years). A handful of state proposals, which we’ve dubbed “comprehensive” AI bills, encompass more than one of these approaches — Connecticut’s proposal (CT SB 2) took all three approaches, regulating the model, use, and conduct of AI. But as lawmakers react to anecdotes and hypotheticals of the outcomes of AI use, a growing number of state-level bills are approaching AI regulation from the use level. 


Recent Developments

In the News

  • AI Seoul Summit: On Tuesday, leading AI companies made voluntary safety commitments, including pulling the plug on their cutting-edge systems if they can’t rein in the most extreme risks. And world leaders agreed to build a network of publicly backed safety institutes to advance research and testing of the technology.

Major Policy Action 

  • Alabama: Last Thursday, Gov. Ivey (R) signed a political deepfake bill (AL HB 172) into law, which prohibits the distribution, within 90 days of an election and without a disclaimer, of materially deceptive AI-generated media likely to harm a candidate or affect the voting patterns of the electorate.

  • Colorado: Last Friday, Gov. Polis (D) signed a comprehensive AI bill (CO SB 205) into law. The law places certain notification and reasonable care requirements on AI developers and deployers. Many of the law’s major provisions won’t go into effect until 2026. 

  • Minnesota: Last Friday, Gov. Walz (D) signed a political deepfake bill (MN HF 4772) into law, which amends the state’s laws on the use of deepfakes in elections. The bill replaces the current law’s “reasonable knowledge” standard with a “reckless disregard” standard for whether the media is created by AI, extends the law to campaigning before primaries and nominating conventions and during absentee voting, and requires a candidate convicted of violating the law to forfeit their nomination or office.

  • Arizona: On Tuesday, Gov. Hobbs (D) signed a deepfake bill (AZ HB 2394) into law, which gives political candidates a cause of action for digital impersonation absent a disclaimer. The measure also allows a person depicted in a sexual deepfake to sue for injunctive relief and damages.

  • South Carolina: On Tuesday, Gov. McMaster (R) signed a sexual deepfake bill (SC HB 3424) into law, which adds images morphed by electronic means to provisions relating to child pornography.

  • California: On Tuesday, the Senate approved a model-level AI bill (CA SB 1047), which now goes to the Assembly for consideration. The bill would, among other things, create a new government agency to regulate large AI models and certify compliance. The vote in the Senate was 32 to 1, with nearly all Democrats and two Republicans voting in favor of the bill, one Republican voting against it, and the remaining Republicans not voting. 

  • Rhode Island: On Tuesday, the House passed a sexual deepfake bill (RI HB 7101) to criminalize the dissemination of sexual deepfake content, sending it to the Senate for consideration. Last week, the House passed a political deepfake bill (RI HB 7387), which would prohibit the distribution of deceptive political deepfakes within 90 days of an election without a disclaimer. 

Notable Proposals

  • New York: On Tuesday, Assemblyman Alex Bores (D) introduced a bill (NY AB 10364) that would require companies whose primary business purpose is related to AI to register with the Secretary of State for a biannual fee of $200. Failure to register could lead to fines of up to $10,000 and an injunction from selling products or services in the state for up to ten years.

  • North Carolina: On Wednesday, Rep. George Cleveland (R) filed a bill (NC HB 1072) that would require AI-generated political advertisements to disclose that they were created using AI, among other disclosure requirements. The Senate is currently considering a similar proposal (NC SB 880).
