AI Policy 101: Key Trends in State Legislation

The purpose of multistate.ai is to provide timely updates and deep dives on how state and local governments are regulating artificial intelligence (AI) technologies. But first, it’s useful to step back and look at the big picture of the AI-related legislation state lawmakers have introduced so far this year. Despite President Biden’s recent AI Executive Order, we anticipate state legislatures will be the primary battleground for substantive AI regulation in 2024 and beyond.

States have only begun to dip their toes into regulating AI, and the effort is shaping up to be a bipartisan affair. State Senator Albers (R), while co-chairing a hearing on AI in Georgia last week, said, “We do not want to stifle innovation here. But we want to establish guardrails to protect Georgians.”

In total, state lawmakers have introduced over 160 bills related to AI this year. Notably, the vast majority of this legislation failed to move past the committee stage of the legislative process. While a few of these bills focused on generative AI models, many were intended simply to draw attention to the emerging issue, like Rep. Jake Auchincloss’s bill in Massachusetts that was drafted using ChatGPT. Most of the bills considered this year fall into six categories: Facial Recognition Technology, Protections Against Biases, Deepfakes, Social Media Regulation, AI Use in Government, and Study Groups.

Facial Recognition Technology

Facial recognition technology takes a captured facial image and uses AI to match it against others in a database to identify a person. The technology has proved useful for unlocking devices, verifying identities at banks, and identifying criminal suspects. But civil liberties advocates have raised concerns over the infringement of privacy, issues of informed consent, the protection of collected data, and potential bias, after studies showed the technology is less accurate for women and people of color.

In 2019, San Francisco became the first major city to ban the use of facial recognition technology by police and government agencies, and many cities followed suit, as did the states of Vermont and Virginia. But as crime rose, many cities began to claw back those prohibitions. Virginia lifted its ban, instead regulating law enforcement’s use of the technology, and Vermont made an exception for cases involving children.

Nonetheless, concerns persist. Montana passed a law this year prohibiting continuous facial surveillance by law enforcement and limiting the technology’s use to certain crimes where a warrant has been obtained. California lawmakers are considering a bill that would prohibit the use of facial recognition technology on video captured by police body cameras.

There have even been a few bills introduced that would apply to the private sector, particularly retailers. Bills in Connecticut and Texas would require disclosure to customers of the use of facial recognition technology, while a bill in New Jersey would have banned retailers from using the technology except for a “legitimate safety purpose.”

Protections Against Biases

Facial recognition technology has had accuracy issues with women and people of color in part because the systems were trained on datasets derived largely from white males. But systemic biases can affect other artificial intelligence tools as well. Algorithms and AI tools are being used to determine eligibility for housing, employment, health insurance coverage, and credit, and some of these tools may have a disparate impact on certain communities.

Policymakers have sought to protect these communities against AI biases. New York City recently passed a law regulating the use of AI in hiring and requiring regular bias audits. Other cities and states have contemplated similar proposals. Illinois lawmakers considered a bill that would prohibit predictive data analytics used in employment decisions from considering an applicant’s race or using zip code as a proxy for race. Bills in California, Massachusetts, New York, Rhode Island, and the District of Columbia would prohibit algorithms from discriminating based on protected classes and, in some cases, require an impact assessment to gauge the risk of discrimination.

Deepfakes

New AI tools have enabled users to prompt remarkable works of art from simple lines of text. Unfortunately, they also allow anyone to create entirely fake images, and even videos, of events that never happened. Lawmakers introduced 36 bills this year to regulate what are known as “deepfakes,” which fall into three categories.

  1. First, states considered legislation providing civil and criminal penalties for the dissemination of nonconsensual, digitally altered sexual images. Minnesota created a cause of action for victims of digitally altered sexual content, with criminal punishments of up to three years in prison. Illinois passed a bill amending its existing “revenge porn” law to include digitally altered images. Texas passed a package of child online safety bills that included measures prohibiting AI-altered digital images of minors, and Louisiana also made it a crime to disseminate sexually explicit deepfake content involving minors.

  2. States are also addressing deepfakes in the political arena. Political campaigns have begun to use AI, and while the Federal Election Commission may move toward regulating its use, states will still be left to determine what regulation, if any, applies to AI in state campaigns. Washington has already taken action, providing injunctive and equitable relief for a political candidate whose appearance, actions, or speech have been synthetically altered in an electioneering communication. Minnesota’s deepfake bill also includes criminal penalties for using deepfake content to influence an election, with a prison term of up to five years. However, it is unclear whether these laws would pass constitutional muster under the First Amendment. Michigan lawmakers have introduced a package of bills requiring disclosures for the use of AI in political advertising.

  3. Finally, lawmakers may seek to protect against other fraudulent uses of deepfake technology. A bill in Pennsylvania would make it a first-degree misdemeanor for a person to disseminate an AI-generated impersonation of an individual without consent. Another bill in the Keystone State would require all AI-generated content to be disclosed as such. As AI-generated content continues to proliferate, lawmakers will feel more pressure to regulate its use to ensure the public is protected.

Social Media Regulation

Many lawmakers targeted social media platforms this session, with conservatives in particular accusing platforms of censoring, deplatforming, or banning users based on political views. Platforms often use algorithms to prioritize certain content for different users, and conservatives have accused liberal-leaning developers of deprioritizing conservative content, despite studies showing little evidence of such bias. Bills in Hawaii, Mississippi, North Carolina, Oklahoma, South Carolina, Tennessee, and West Virginia would have required disclosures about how algorithms prioritize content and, in some cases, allowed users to opt out of algorithmic recommendations.

Social media algorithms have also been targeted over the content they promote to minors. A North Carolina bill would have required platforms to obtain informed consent from users for algorithmic recommendations and prohibited minors’ data from being used in such recommendations. A California bill would prohibit algorithmic features designed to increase children’s addiction to the platform.

AI Use in Government

States have also looked inward in dealing with AI, with many bills calling for a full accounting of the state’s own use of AI. Connecticut passed a bill requiring a full inventory of AI use by state agencies, along with procedures and policies governing the use and procurement of AI systems. Texas passed a similar bill, creating an Artificial Intelligence Advisory Council to study the AI systems used by state agencies and requiring agencies to produce an automated decision systems inventory report by July 1, 2024.

Study Groups

Finally, many lawmakers are still trying to fully understand AI, its benefits, and its potential harms. To get up to speed, many have created working groups to study the issue. Illinois lawmakers established the Generative AI and Natural Language Processing Task Force to report on generative artificial intelligence software and natural language processing software. Wisconsin Governor Tony Evers (D) issued an executive order creating a Task Force on Workforce and Artificial Intelligence. Connecticut passed a measure creating a working group on the use of AI in state government, and Senator James Maroney (D) has organized a multi-state working group of lawmakers to study the issue. Several interim legislative committees are also studying the issue in anticipation of next year’s session. For a full listing of these study committees, task forces, and working groups, all dedicated to hearing from experts and stakeholders and developing recommendations on how best to regulate AI at this stage in the technology’s development, view our tracker here.

Recent Policy Developments

  • Federal: On Oct. 30, President Biden signed a sweeping, 111-page executive order on “Safe, Secure, and Trustworthy Artificial Intelligence.” The order touches on many aspects of AI but still leaves the states plenty of room to maneuver on the issue. For smart viewpoints on this federal action, we recommend reading EY’s broad overview of the order, as well as commentary from Timothy Lee’s Understanding AI and from Zvi Mowshowitz on the Executive Order.

  • California: In an interview with Politico, Assemblymember Rebecca Bauer-Kahan (D) confirmed that her AI consumer notification bill (CA AB 331) will be back in next year’s session. “What I had hoped for when I introduced AB 331 was that we could set the stage for a national standard, that we wouldn’t have patchwork regulation, but we would have other states follow our lead. But I think it is dangerous for us to let other states go first and set a standard that is not up to California standards — or that is not as nimble in allowing innovation.”

  • New York: Assembly Member Burke (D) introduced legislation (NY AB 8179) on Oct. 27 that would tax companies for every employee displaced as a result of “technology,” which the bill defines to include “machinery, artificial intelligence, or computer applications.” 

  • Illinois: Last week, the House held a joint committee hearing on AI between the Judiciary-Civil Committee and the Cybersecurity, Data Analytics, and IT Committee. Lawmakers warned that taking a hands-off approach to AI regulation, as they did with social media, would be a mistake.

  • Georgia: Last week, the Senate held a joint hearing on AI between the Senate Committee on Public Safety and the Committee on Science and Technology. Senator Albers (R), Chair of the Public Safety Committee, said, “We’re the number one state to do business. We have to be number one in AI as well.”
