Can State Laws Actually Stop Political Deepfakes?
A bit of a mixed bag of AI-related legislation moving through the states this week. Some key highlights:
Virginia’s governor signs into law an AI study bill and legislation limiting police use of facial recognition technology. Maryland moves a facial recognition bill as well.
Deepfake bills pass their house of origin in Maryland, Louisiana, Mississippi, and Minnesota.
Lawmakers in Colorado introduce another comprehensive AI regulation bill, this one aimed at developers.
We’re currently tracking 575 state bills (plus another 99 congressional bills) related to AI this year.
It’s been five months since we first dove into the topic of political deepfakes. Since then, seven additional states have enacted laws to limit the use of deepfake media in electoral campaigns, bringing the total to eleven states so far. With only seven short months until this year’s major elections, lawmakers’ interest in this topic has only increased.
Laws to limit political deepfakes were first enacted in California and Texas in 2019, followed by Minnesota, Washington, and Michigan in 2023. So far, six states have enacted similar laws in 2024. The key feature of most political deepfake laws is a requirement that political communications created with AI be disclosed as such. So even in these states, if your AI-generated political communication includes a clear disclaimer that it’s AI-generated, you should be in the clear.
Texas (TX SB 751) and Minnesota (MN HF 1370/SF 1394) enacted laws that go one step further and criminalize deepfakes intended to influence elections. Importantly, these two laws do not include an exception to the prohibition if the deepfake includes a disclosure. In Texas, it’s a criminal offense to create and publish a deepfake video within 30 days of an election with intent to injure a candidate or influence the result of the election. Similarly, it’s a crime in Minnesota to disseminate a deepfake within 90 days of an election if the person depicted does not consent and the deepfake is made with the intent to injure a candidate or influence the result of an election. The Minnesota law includes a private right of action in addition to the criminal penalty. This week, the Minnesota House passed a bill (MN HF 4772) to amend the state’s political deepfake law, replacing the current standard of “reasonable knowledge” with a “reckless disregard” standard for whether the media is created by AI and expanding the type of elections it applies to beyond only a general election.
So far this year, we’re tracking 113 pieces of legislation related to political deepfakes. The Federal Election Commission (FEC) has prohibited deepfake robocalls used to influence elections, but the FEC has yet to decide whether existing federal law against “fraudulent misrepresentation” in campaign communications applies to AI-generated deepfakes. Commissioners voted to move forward on a petition to clarify that question back in August. But with elections fast approaching, the hope of a federal solution in time to make a difference this year is fading — which leaves the regulation of political deepfakes in the hands of a dozen or so states.
The real questions arise once we contemplate how these laws will be enforced and what consequences they’ll have on actual election campaigns. New Mexico’s law (NM HB 182), which goes into effect next month, tasks the state’s Ethics Commission with enforcement. However, after signing the bill, Gov. Lujan Grisham (D) said that the Ethics Commission had “expressed concerns to my Office about this legislation’s enforceability.” States that have enacted political deepfake laws either rely on agency enforcement or leave it to the campaigns targeted by such deceptive media to file civil lawsuits themselves. The exception is the Texas law, which lacks civil enforcement and would require prosecutors to initiate a criminal case against violators. But if serious violations arise, will current penalties be enough of a deterrent, or be enforced quickly enough, to actually stop deceptive AI-generated media from affecting elections?
From a practical standpoint, we’re already seeing AI-generated materials aimed at electoral candidates circulate online. But it’s unclear what effect these deepfakes will have on actual voters. At the very least, the proliferation of deepfake media will muddy the waters and either require voters to investigate what is real and what is fake or else relinquish any faith in the veracity of most images and videos they come across. With a patchwork of laws to limit deepfake use in elections, questionable enforcement mechanisms, and no federal solution, the 2024 election cycle could become the wild west for political deepfakes.
Finally, these laws will face a skeptical judiciary that places a high bar on First Amendment rights. These constitutional considerations are likely why lawmakers have largely stuck to disclosure requirements for political deepfakes instead of outright bans. As for the states with stricter prohibitions, will the consequences of violating those laws be enough to stop a flood of deceptive media in the final days before votes are cast? State lawmakers are taking these issues seriously, but we might not have a full understanding of the effectiveness of these laws until after this year’s elections have concluded.
Recent Developments
In the News
Frontier Models Get Upgrades: This week, the three frontier AI companies all announced upgrades to their AI models. Anthropic’s Claude model can now use tools. Google unveiled new features for its Gemini model at its “Cloud Next” conference. And OpenAI wasn’t left out, announcing a “majorly improved” version of its GPT-4 Turbo model.
Major Policy Action
Virginia: On Monday, Gov. Youngkin (R) signed two AI-related bills into law. A new study law (VA SB 487) directs the Joint Commission on Technology and Science to analyze the use of AI by public bodies in the Commonwealth and to create a Commission on Artificial Intelligence, with a report due to legislative leaders by Dec. 1, 2024. A facial recognition law (VA HB 1496) requires all localities to provide the Department of Criminal Justice Services with a list of all surveillance technologies used by their law-enforcement agencies and directs the Joint Commission on Technology and Science to study the use of such surveillance technology, the implications of its use, its susceptibility to misuse or cyberattack, and its cost.
Maryland: On Monday, a conference committee approved a bill (MD SB 182) that limits the use of facial recognition technology by a law enforcement agency and preempts local regulations on the subject. The bill will now be sent to the governor for his signature to become law. The House also unanimously approved a sexual deepfake bill (MD SB 858), sending it back to the Senate to approve amendments.
Louisiana: This week, the House passed a political deepfake bill (LA SB 97) and the Senate passed a sexual deepfake bill (LA SB 6), sending both bills to the opposite chambers for consideration.
Mississippi: On Monday, the Senate passed a sexual deepfake bill (MS HB 1126), but added amendments that must be approved by the House before the bill can reach the governor’s desk. The bill would add computer-generated explicit images depicting minor children to the state’s child exploitation provisions.
Minnesota: On Monday, the House passed a bill (MN HF 4772) to amend the state’s political deepfake law, sending it to the Senate for consideration. The bill would replace the current law’s standard of “reasonable knowledge” with a “reckless disregard” standard for whether the media is created by AI. The bill also expands the elections to which the law applies beyond only a general election.
Pennsylvania: On Wednesday, the House passed a measure (PA HB 1598) that requires clear and conspicuous disclosure for any content generated by artificial intelligence. The bill also amends child pornography laws to provide that it is not a defense that the material was generated through artificial intelligence.
Notable Proposals
Colorado: On Wednesday, Senator Rodriguez (D) introduced a comprehensive AI bill (CO SB 205), which would require disclosures for the deployment of a high-risk AI system that makes a consequential decision concerning a consumer.