Beyond Deepfakes: Utah's AI Law Tackles Broader Issues

Key highlights this week:

  • We’re currently tracking 595 bills in 45 states (plus another 103 congressional bills) related to AI this year. 

  • Governors in Florida and Mississippi signed political deepfake bills into law. 

  • Mississippi also enacted a law to protect children from sexual deepfakes. 

  • After the Connecticut Senate passed a comprehensive AI bill last week, the Colorado Senate followed suit today by approving its own comprehensive AI bill, modeled closely on the Connecticut measure. 

  • And Utah’s AI Policy Act, the focus of this week’s deep dive, went into effect on Wednesday. 


This week, Utah became the first state to have a comprehensive AI law go into effect, mandating disclosure requirements for professionals using AI to interact with consumers. Lawmakers enacted Utah’s AI Policy Act as part of a package of AI bills this year that we think marks a middle ground for policymakers in other states to follow. While many states have taken action on deepfakes this year, this is the first law establishing requirements for businesses that may have already incorporated generative AI into their daily operations. 

In March, Utah Governor Spencer Cox (R) signed the Artificial Intelligence Policy Act (UT HB 149) into law. The act imposes new requirements that will affect a substantial number of businesses and professions in Utah. The new law, which took effect on May 1, focuses on consumer protection and transparency measures. First, the act’s language clarifies that the use of an AI system is not a defense for violating the state’s existing consumer protection laws. Going a step further, the law mandates proactive disclosures for one group of professionals licensed with the state and reactive disclosures for another group of professionals. 

More specifically, the law requires individuals who are in a profession regulated by the Utah Department of Commerce to prominently disclose if a person is interacting with generative AI when they are providing regulated services. These occupations include architects, certified public accounting firms, and a variety of medical professionals such as psychologists, nurses, and substance abuse counselors. Additionally, individuals in these professions can only provide regulated services using AI if the rules and regulations for their profession allow them to do so. 

For the second group, the law requires any person who uses, prompts, or otherwise causes generative AI to interact with a person in connection with any activities regulated by the Division of Consumer Protection to clearly and conspicuously disclose that an individual is interacting with AI if asked. If a person fails to make that disclosure when asked, it is a violation of Utah’s consumer protection laws; the Division of Consumer Protection may impose a fine of up to $2,500 per violation and may also file an action in court to order enforcement. 

In addition to these new disclosure requirements, the second part of the law establishes the Office of Artificial Intelligence Policy. The Office is tasked with consulting stakeholders about potential regulatory proposals and creating and administering an AI Learning Laboratory Program. The AI Learning Laboratory Program is tasked with analyzing and researching the risks, benefits, and impacts of AI to help inform the state’s evolving AI regulatory framework.

Individuals who participate in the Learning Laboratory and want to utilize AI technology in Utah can apply to the Office of AI Policy for a regulatory mitigation agreement. Regulatory mitigation agreements must specify limitations on the scope and use of the participant’s AI technology, including the number and types of users, geographic limitations, and the safeguards that will be implemented. To be eligible to participate in a mitigation agreement, applicants must demonstrate to the office that they:

  • have the technical capability and responsibility to develop and test the proposed AI technology;

  • have sufficient financial resources to meet their obligations during testing;

  • can show that the AI technology offers potential benefits to consumers that may outweigh any identified risks;

  • have a plan to monitor and minimize any identified risks; and

  • can show that the scale, scope, and duration of the testing are appropriately limited based on risk assessments. 

Utah’s AI Policy Act offers broader consumer protections than the more narrowly focused deepfake bills we’ve seen enacted in dozens of states this year. And while the Utah law concentrates on regulating the use of AI tools, and is limited to transparency requirements, it’s the first bill that comprehensively addresses AI policy to make it across the finish line and take effect in a state. More aggressive legislation aimed at regulating the development and deployment of frontier models is still making its way through the legislative process in Connecticut, Colorado, and California (although the legislative clock is running out in Connecticut), but business leaders and policymakers should keep a close eye on how enforcement of the AI Policy Act unfolds in Utah. 

Recent Developments

In the News

  • NIST Launches GenAI: On Monday, the National Institute of Standards and Technology (NIST) launched a challenge series that will support the development of methods to distinguish between content produced by humans and content produced by AI. NIST GenAI, spurred by President Biden’s AI Executive Order, invites teams from academia, industry, and research labs to submit either “generators” — AI systems to generate content — or “discriminators,” which are systems designed to identify AI-generated content.

  • DHS Announces Tech-Heavy AI Board: Last Friday, the U.S. Department of Homeland Security announced a 22-member board, including the CEOs of OpenAI, Microsoft, Alphabet, and Nvidia, that will advise the government on the role of artificial intelligence in critical infrastructure. The board will develop recommendations for the transportation sector, pipeline and power grid operators, internet service providers, and others to "prevent and prepare for AI-related disruptions to critical services that impact national or economic security, public health, or safety." The board will meet for the first time next month, with quarterly meetings planned thereafter.

Major Policy Action 

  • Florida: Last Friday, Gov. DeSantis (R) signed bills addressing political deepfakes and studying AI into law. The political deepfake law (FL HB 919) requires political advertising to include a disclaimer if it uses generative AI that appears to depict a real person performing an action that did not occur in reality and was created with intent to injure a candidate or to deceive regarding a ballot issue; the law provides for civil and criminal penalties. The study bill (FL SB 1680) creates the Florida Government Technology Modernization Council to study and monitor the development and deployment of AI systems. 

  • Mississippi: On Tuesday, Gov. Reeves (R) signed both a sexual deepfake bill and a political deepfake bill into law. The political deepfake bill (MS SB 2577) creates the crime of wrongful dissemination of digitizations, known as deepfakes, when the dissemination takes place within 90 days of an election, without consent, and with the intent of affecting the election. The new law provides that including a disclaimer is a defense to prosecution. The sexual deepfake bill (MS HB 1126) adds computer-generated images depicting minors in a sexually explicit manner to the state’s child exploitation provisions.

  • Hawaii: On Wednesday, the legislature sent a political deepfake bill (HI SB 2687) to the governor for his signature. The bill would prohibit the distribution of political deepfake content without a disclaimer from February through election day in even-numbered years. It would also provide a civil cause of action for individuals depicted in such deepfake content and, if signed by the governor, would take effect immediately. 

  • Colorado: On Friday, the Senate passed a comprehensive AI bill (CO SB 205), sending it to the House for consideration. This came shortly after Senators amended the bill to clarify definitions, remove provisions requiring labeling of synthetic content, and postpone some effective dates. More than 100 business owners signed a letter expressing concerns about the bill, arguing it would “severely stifle innovation and impose untenable burdens on Colorado's businesses, particularly startups.”

  • Alaska: Lawmakers amended a political deepfake bill (AK HB 358) to strip provisions relating to child sexual content and instead insert provisions for a civil action for an individual harmed by a political deepfake without a disclosure. The proposal still includes a section to provide a right to sue for defamation for the use of a deepfake.

Notable Proposals

  • Alaska: On Thursday, lawmakers amended a bill (AK HB 107) to clarify that a “person” under criminal law statutes does not include artificial intelligence or inanimate objects. The bill remains in the House, its chamber of origin. 

  • New York: On Wednesday, Assemblywoman Linda B. Rosenthal (D) introduced a bill (AB 10020) that would prohibit landlords from using algorithmic devices to set the amount of rent to charge a residential tenant. She introduced a similar measure last year in response to a ProPublica report on the use of such devices to set rents around the country.

  • North Carolina: Lawmakers filed a number of AI-related bills this week for the session that began in late April. Among the proposals are a political deepfake bill (NC SB 880), a bill amending child pornography laws to include deepfakes (NC SB 828), a bill to establish artificial intelligence hubs at educational institutions in the state (NC HB 986), and two bills creating AI study committees (NC HB 1004 and HB 1036).
