Lawmakers Address a Direct Threat from AI: Their Campaigns
Last week, important odd-year elections were held in a handful of states, which means we’re officially in the 2024 presidential election cycle. As lawmakers contemplate their reelection campaigns, one direct concern, shared by industry leaders, is the use of AI to influence the 2024 election results. AI-generated “deepfake” images, audio, and video could be used to manipulate recordings so a candidate appears to say something they never said, to alter a candidate’s movements in an embarrassing manner, or to doctor images to perpetuate false narratives. And in today’s viral social media environment, far more voters will view and share these deepfakes than will ever see the fact-check posts revealing the truth.
Deceptive media produced by AI to influence elections may originate from a variety of sources: individual campaigns, outside groups seeking to oppose or support a candidate or issue, or foreign adversaries seeking to sway the outcome of an election. It’s unclear at this point how much impact deceptive AI deepfakes could have on elections, but the threat is being taken especially seriously given social media’s influence on recent contests. A repeated theme we’ve heard from policymakers looking to regulate AI is regret that lawmakers failed to act on social media, coupled with a vow not to make the same mistake with AI.
And we’re already seeing the early effects of media distortions on elections. In 2020, a manipulated video of then-House Speaker Nancy Pelosi went viral; it artificially slowed her speech to make it sound slurred, falsely suggesting she was intoxicated during a press conference. This summer, a PAC supporting Florida Governor DeSantis released an ad using AI-generated audio of former President Donald Trump attacking Iowa Governor Reynolds. While the audio was based on posts Trump had made on social media, it did not reflect words actually spoken by the former president. And ahead of this month’s presidential runoff in Argentina, AI-generated images of the candidates, both flattering and unflattering, are circulating, along with manipulated videos of candidates making statements they never uttered.
State lawmakers’ concerns over deepfake use in elections date back to 2019, when lawmakers in California (CA AB 730) and Texas (TX SB 751) enacted laws prohibiting the use of deepfakes to influence political campaigns. Lawmakers in seven additional states introduced legislation this year aiming to reduce the harms that AI could pose to elections. Two of these bills were signed into law and took effect earlier this year: Minnesota enacted legislation (MN HF 1370/SF 1394) criminalizing the use of deepfake technology to influence an election, and Washington enacted a bill (WA SB 5152) requiring disclosure when any manipulated audio or visual media is used in an electioneering communication.
This week, the Michigan Legislature passed a package of bills aimed at reducing the potential harms caused by AI in elections. The bills require disclosures (MI HB 5141) for pre-recorded phone messages and political advertisements created with AI, prohibit (MI HB 5144) distributing media that manipulates an individual’s speech or conduct within 90 days of an election unless it carries a disclaimer, and establish (MI HB 5145) sentencing guidelines for election law offenses involving deceptive AI-created media. The package now goes to Governor Whitmer (D) for her signature.
Many of these bills look to adapt existing regulations and policies to the emerging AI-enabled world. For example, bills requiring the labeling of AI-generated material resemble existing rules requiring political advertisements to identify the group responsible for them. Likewise, bills requiring a disclaimer on media depicting a candidate’s manipulated speech within 90 days of an election mirror federal policies that prohibit certain campaign activities during the same pre-election window.
The Federal Election Commission is seeking to amend its current rules to cover AI-generated content, and Congress has introduced legislation to require labeling of such content in political ads. Nonetheless, we expect more states to take on this issue in 2024 as critical elections approach and elected officials watch high-quality AI-generated media of themselves spread.
Recent Policy Developments
Michigan: Last Thursday, lawmakers passed a package of bills requiring a disclaimer for political advertisements that falsely represent an individual through manipulated speech or conduct intended to deceive voters within 90 days of an election. The bills now go to Gov. Whitmer (D) for her signature.
South Carolina: On Monday, House Speaker Murrell Smith announced the creation of a standing committee to examine the impact of artificial intelligence, as well as cybersecurity and cybercrime. The 19-member committee won’t officially start work until the 2024 legislative session. Rep. Jeff Bradley (R) will serve as chair, and he will ask the full House to vote next session on making the committee permanent.
California: On Tuesday, while attending the Council of State Governments (CSG) West meeting in Los Angeles, Assemblymember Evan Low (D) announced plans to introduce legislation next year that would require a watermark on AI-created images and videos to combat misinformation.
Federal: On Wednesday, U.S. Senators Thune (R) and Klobuchar (D) introduced draft legislation that would direct federal agencies to create standards for transparency and accountability in AI tools. The bipartisan bill, which follows President Biden’s recent AI Executive Order, would establish a framework requiring “critical-impact” AI organizations to self-certify their compliance with those standards.
Pennsylvania: On Thursday, the Senate Democratic Policy Committee held a hearing on growing societal concerns about generative AI’s effects on the workforce, technology, and healthcare, and on policy solutions to mitigate risks such as fraud and misuse associated with the technology.