Deepfakes & Synthetic Media
States Address the Alarming Proliferation of Deepfakes and Synthetic Media
As generative AI technology advances, creating realistic-looking fake images, videos, and voices becomes easier and easier. This proliferation of “deepfake” media is a major concern for policymakers. Initially, states addressed the issue in a targeted manner, taking aim at deepfakes in political election campaigns and nonconsensual deepfakes depicting someone in a sexual manner. But as the technology has progressed, lawmakers are exploring ways to regulate AI-generated media more generally, often defining the product as “synthetic media” and requiring disclosures or watermarks so that consumers understand that the content was created by AI. Some laws go as far as banning the creation or dissemination of such media, especially in the case of sexual content.
We offer specific resources to keep track of legislation addressing both sexual deepfakes and electoral deepfakes.
We see the legislative trend of regulating deepfakes and synthetic media as a top priority for states this year. To keep up with this issue, see the map and table below for real-time tracking of state and federal deepfake legislation, sourced from MultiState’s industry-leading legislative tracking service.