Tennessee's AI Deepfake Defense: The ELVIS Act
Lawmakers continued their focus on deepfakes this week as the election nears. Some key highlights:
Political deepfake bills are signed into law in Wisconsin and Oregon. There are now 10 states that have enacted laws to limit deepfake use during elections.
Gov. Little signs another sexual deepfake bill into law in Idaho after signing the state’s first sexual deepfake bill earlier this month.
Oregon also enacted an AI study bill, while California released guidance for state agencies to follow when buying generative AI tools for government use.
Deepfakes have been an easy target for state lawmakers throughout the past year. These AI-generated media are a perfect storm for state legislation because they combine headline-grabbing controversies, a use of AI that everyone can easily and quickly observe, and a straightforward legislative target that can be addressed with a relatively simple bill. At first, these bills focused on deepfake uses in political campaigns or nonconsensual sexual exploitation. But within this narrowly focused phase of AI regulation, lawmakers are now expanding to new areas where deepfakes may wreak havoc on society. The latest is Tennessee’s ELVIS Act.
Including this week’s batch of bill signings, 14 states have now enacted laws addressing nonconsensual sexual deepfakes and 10 states have enacted laws to limit the use of deepfakes in political campaigns. Last week, Tennessee became the first state to enact a deepfake law outside these two narrow categories when Governor Lee (R) signed the ELVIS Act (TN HB 2091) into law.
The Ensuring Likeness Voice and Image Security (ELVIS) Act’s primary objective is to protect musicians in Tennessee from AI-generated audio mimicking their singing voices. The law does this by amending the Personal Rights Protection Act (PRPA) of 1984 — which itself was originally enacted to protect the rights to Elvis Presley’s music for a period after the singer’s death. The original PRPA prohibited the unauthorized commercial exploitation of a person's name, image, and likeness.
The new ELVIS Act amends the PRPA in several ways with an eye toward AI-generated media. Notably, the bill itself never specifically mentions artificial intelligence. Instead, the law adds “voice” to the protected property rights that every individual in Tennessee holds. Voice is defined as “a sound in a medium that is readily identifiable and attributable to a particular individual, regardless of whether the sound contains the actual voice or a simulation of the voice of the individual.” Under both the original PRPA and the PRPA as amended by the ELVIS Act, individuals (or their heirs) can file a civil lawsuit against anyone who uses a person’s likeness, including voice, without that individual's permission. This covers actual photos, videos, and recordings, as well as AI-generated media that depict that individual (even if it’s not actually them).
This isn’t just a hypothetical concern. Last year, an AI-generated track mimicking the voices of superstar artists Drake and The Weeknd went viral, drawing millions of listens before studios forced its removal from popular online platforms. Recognizing the threat generative AI posed to the established music business, an industry with proud roots in Tennessee, Governor Lee touted the ELVIS Act even before its official introduction, and the bill passed both legislative chambers unanimously.
This isn’t the only state entertainment industry-focused bill introduced this session. We highlighted a few of them earlier this year when Governor Lee introduced the ELVIS Act (January’s “Lawmakers Respond to AI-Induced Job Displacement”). Lawmakers in California are looking to protect Hollywood actors from generative AI (CA AB 459), and New York wants to protect fashion models from nonconsensual deepfakes (NY SB 2477). States are also broadening their concerns about deepfake impersonations beyond those who make their living from their image, to cover intentional fraud and harassment (ID HB 575).
Tennessee might be the first state to specifically protect the rights individuals hold in their likeness, but it won’t be the last. During the bill-signing ceremony at a Nashville honky-tonk, Governor Lee told the crowd, “There are certainly many things that are positive about what AI does. It also, when fallen into the hands of bad actors, it can destroy this industry. . . . Tennessee should lead on this issue and we are."
Recent Developments
In the News
Federal Lab to Test AI: On Monday, the nonprofit MITRE opened a new facility in Virginia to test government uses of AI technology. The AI Assurance and Discovery Lab is designed to assess the risks of AI systems through simulated environments, red-teaming, and “human-in-the-loop experimentation,” and to test systems for bias.
Gladstone Report: Earlier this month, a report commissioned by the US State Department and written by Gladstone AI revealed some startling safety and security concerns after interviewing hundreds of employees at top AI companies. “One of the big themes we’ve heard from individuals right at the frontier, on the stuff being developed under wraps right now, is that it’s a bit of a Russian roulette game to some extent,” says the report’s co-author. “Look, we pulled the trigger, and hey, we’re fine, so let’s pull the trigger again.”
Major Policy Action
Wisconsin: Last Thursday, Gov. Evers (D) signed a political deepfake bill (WI AB 664) into law, which requires disclosures for certain political audio or video communication if the communication contains synthetic media. The law is effective immediately.
Idaho: On Monday, Gov. Little (R) signed a sexual deepfake bill (ID HB 465) into law. The law expands provisions relating to sexual abuse of children to prohibit visual depictions, including videos or images, generated using generative AI or machine learning. Only last week, the governor signed another sexual deepfake bill (ID HB 575) into law, which makes it unlawful to disclose explicit synthetic media without consent if the disclosure would cause the identifiable person substantial emotional distress, or is intended to harass, intimidate, or humiliate a person or to obtain money through fraud.
Oregon: On Wednesday, Gov. Kotek (D) signed a political deepfake bill and an AI study bill into law. The deepfake bill (OR SB 1571) requires a disclosure of the use of synthetic media in campaign communications and exempts content carriers and broadcasters. The AI study bill (OR HB 4153) establishes a Task Force on AI tasked with examining and identifying terms and definitions related to AI that may be used in legislation. Both bills go into effect immediately.
California: The state released formal guidelines, pursuant to Gov. Newsom’s 2023 executive order, for state agencies to follow when buying generative AI tools for government use.
Tennessee: On Monday, the Senate passed a sexual deepfake bill (TN HB 2163), after the bill previously passed the House, sending the bill to the governor for his signature to become law. The bill specifies that for the purposes of sexual exploitation of children offenses, the term "material" includes computer-generated images created, adapted, or modified by AI.
Kentucky: Last Friday, the Senate passed an AI chatbot disclosure bill (KY SB 266), sending it to the House for consideration. The bill would prohibit a bot from communicating or interacting with another person with the intent to mislead about its artificial identity, in order to knowingly deceive the person about the content of a commercial transaction.
Political Deepfakes: In addition to the political deepfake bills signed into law in Wisconsin and Oregon this week, legislative chambers of origin passed political deepfake bills in Alabama, Georgia (GA HB 986), and Missouri (MO HB 2628), sending the bills to the opposite chamber for consideration.
Notable Proposals
Nebraska: Lawmakers proposed a number of resolutions providing for interim studies on AI issues, including studies on the dangers artificial intelligence poses to elections (NE LR 362) and political campaigns (NE LR 412), and a study on AI’s impact on the private and public sectors, including the technology and insurance sectors (NE LR 430). The interim studies would take place sometime after the legislature adjourns next month.
New Jersey: Senator Zwicker (D) introduced a bill (NJ SB 2964) that would create standards for independent bias auditing of automated employment decision tools. The Assembly has a companion bill (NJ AB 3855).