Combating Sexual Deepfakes

States Address the Alarming Proliferation of Nonconsensual Sexual Deepfakes

Despite the promise and benefits of AI technology, we're already seeing real-world harms as AI is used to produce nonconsensual sexual deepfakes: images and videos that depict real individuals in a sexually explicit manner. These images often place the face of an actual person on a naked or partially clothed body that is not their own, and they disproportionately target women. As with deepfakes aimed at electoral candidates, states are moving quickly to combat the alarming proliferation of nonconsensual sexual deepfakes.

While generative AI chatbots are relatively new, nonconsensual sexual deepfakes date back to at least 2017. As AI technology accelerates, the realistic quality of AI-generated images and videos continues to improve, producing content that can easily be mistaken for a real person. The volume of deepfakes is also accelerating. Independent researchers found that in the first nine months of this year, 113,000 deepfake videos were uploaded to 35 different websites that either exclusively or partially host sexual deepfake content, a 54% increase over all of 2022. And because this research only counts content found on public websites, there are likely far more images exchanged via text messages and messaging apps that researchers are unable to account for.

Unlike many AI-related issues, state policymakers have moved quickly to address the rise of nonconsensual sexual deepfakes. Most states already have statutes prohibiting the sale or distribution of nonconsensual pornographic images. Some argue those existing laws are broad enough to cover deepfakes in many states, but lawmakers have amended the statutes to explicitly include AI-generated content. To date, more than a dozen states have enacted legislation directly targeting sexual deepfakes.

In 2019, Virginia became the first state to do so (VA HB 2678) by adding nonconsensual sexual deepfakes to an existing "revenge porn" law. California (CA AB 602) also enacted a sexual deepfake law in 2019, and lawmakers in Hawaii (HI SB 309) and Georgia (GA SB 78) followed suit in 2021. The trend continued in 2023. Illinois enacted legislation (IL HB 2123) establishing a cause of action for individuals whose image was used in a sexual deepfake without their consent, and later that year the governor signed another bill (IL SB 382) into law, adding the term "digitally altered sexual image" to the Illinois Remedies for Nonconsensual Dissemination of Private Sexual Images. Notably, the laws in California and Illinois give victims the ability to file lawsuits against perpetrators but do not carry criminal penalties. In contrast, Texas (TX SB 1361), New York (NY SB 1042A), and Minnesota (MN HF 1370) enacted legislation in 2023 adding criminal offenses to their deepfake laws. Additionally, Louisiana (LA SB 175) and Texas (TX HB 2700) enacted laws specifically prohibiting minors from being depicted in any sexually explicit deepfake image. In early 2024, South Dakota enacted a bill (SD SB 79) adding computer-generated content to its child pornography laws. Also in 2024, Indiana became the first state that year to pass legislation protecting its residents from nonconsensual sexual deepfakes (IN HB 1047), followed by Utah (UT HB 148, HB 238, SB 66), Washington (WA HB 1999), Idaho (ID HB 575), Iowa (IA HF 2240), and New York (NY AB 8808).

To keep up with this issue, see the map and table below for real-time tracking of state and federal legislation related to sexual deepfakes, sourced from MultiState's industry-leading legislative tracking service.