States Shield Consumers from AI: Is Your Industry Next?
This session, beyond their focus on combating deepfakes or comprehensively regulating the development of AI models, state lawmakers have introduced dozens of bills aimed specifically at AI use in particular industries. Last week we discussed how lawmakers are looking to protect certain industries from AI competition (e.g., music, movies, fashion), but this week we’ll examine legislation aimed at protecting the customers of specific industries — such as insurance, health care, legal, and housing — from AI use.
Tennessee's AI Deepfake Defense: The ELVIS Act
Deepfakes have been an easy target for state lawmakers throughout the past year. At first, these bills focused on deepfake use in political campaigns or nonconsensual sexual exploitation. But we’ve reached the point in the narrowly focused phase of AI regulation where lawmakers are expanding to new areas where deepfakes may wreak havoc on society. The latest example is Tennessee’s ELVIS Act.
AI Industry Weighs In: Amended Connecticut SB 2
After months of work by a legislative work group, Connecticut lawmakers proposed a landmark comprehensive AI bill last month that could set a template for other states to follow. Not surprisingly, lawmakers have already amended the original bill to address numerous concerns raised after weeks of testimony from stakeholders. The proposal reflects one of the first major attempts to set broad guardrails for the emerging AI industry.
Utah’s Moderate Approach to AI Regulation
Utah enacted a package of AI bills into law this week. While not revolutionary, Utah’s legislative package represents a middle ground that most state policymakers are taking on AI regulation. These bills do not establish a comprehensive framework for the development and deployment of AI (they’ll let California and Connecticut take the lead on that), but the Utah bills address pressing issues (deepfakes and consumer protections) and align current laws with the new realities that AI tools represent, with particular emphasis on protecting vulnerable populations and mandating transparency.
Data Digesters Beware: An Analysis of CA AB 3204
So far, much of the state-level focus of AI regulation has targeted the outputs the systems produce — how AI models impact individuals. Far less attention has been given to the inputs fed into the systems — the terabytes of data developers use to train AI models. But the California Legislature is currently considering a bill that would do just that.
The Three Phases of State AI Regulation
We’re tracking over 500 AI-related bills in the states, and I thought it might be a good time to take a step back and evaluate the landscape. The way I look at the state legislative landscape for AI right now is as a progression of bills through three phases: study bills, narrowly focused bills, and comprehensive legislation. Right now, we’re moving out of the study phase, are deep in the narrowly focused phase, and are just beginning the comprehensive phase.
Connecticut’s AI Vision: An Analysis of SB 2
This week, lawmakers in Connecticut released the text of their much-anticipated AI bill (CT SB 2), which provides a comprehensive framework for the development and deployment of AI models while offering a longer enforcement timeline through rolling effective dates and safe harbor provisions. It also compiles many of the narrowly focused measures we’ve seen enacted in a handful of other states, addressing issues like deepfakes and government use.
California’s Focus on AI Development: An Analysis of SB 1047
A sexual deepfake bill was signed into law in South Dakota, and additional deepfake bills passed the full legislature in New Mexico and made it through their chambers of origin in South Dakota, Utah, and Wisconsin. Plus, we saw new AI executive orders in Massachusetts and Washington, D.C. But this week we’re going to focus our analysis on an important bill recently introduced in California, which might be the most comprehensive piece of state AI legislation to date.
The Role of Impact Assessments in Combating AI Biases
“Impact assessments” have been a key reporting requirement of many comprehensive AI bills that lawmakers have introduced this year. They are one of the safeguards being put in place to help combat disparate treatment and discriminatory outcomes from the use of AI tools in hiring, education, and other settings. Like mandatory disclosures, impact assessments are another early tool policymakers are using to regulate AI as the technology becomes an increasingly common aspect of our daily lives.
States Pursue AI for Economic and Workforce Development
Much of our AI coverage has focused on potential regulation of the new technology, but policymakers are also seeking to harness the power of AI to improve the lives of their residents. No state wants to be left behind in what could be a revolutionary change to workforce, education, and government. Accordingly, governors and lawmakers have proposed ways to attract AI business and ready their state’s workforce for a new AI world.
States Broaden the Scope of AI Regulation
States have largely focused on regulating specific instances of AI use (e.g., sexual or political deepfakes) and held off on more comprehensive regulatory action last year. The closest we’ve seen is California’s draft automated decision-making regulations, which were sent back to the drawing board. But after studying the issue closely, lawmakers are prepared to expand the legislative debate in 2024 and take aim at AI more broadly, taking inspiration from President Biden’s Blueprint for an AI Bill of Rights.
Transparency in the Age of AI: The Role of Mandatory Disclosures
Policymakers face a tough challenge when it comes to regulating the fast-moving AI industry. However, disclosure requirements are one tool policymakers have embraced. By mandating disclosure when AI has been used to generate media or when a consumer is interacting with an AI tool, policymakers thread the needle of providing useful information to consumers without placing a substantial roadblock in front of the nascent industry’s development. Recent reports of low-quality, AI-generated books and articles flooding the market highlight the importance of disclosures.
Lawmakers Respond to AI-Induced Job Displacement
The flip side of AI technology’s economy-boosting potential is the assumption that if AI succeeds as currently imagined, it’ll replace much of today’s workforce. Unsurprisingly, this is a major concern for state lawmakers and their constituents. Policymakers are again placed in the difficult position of balancing protections for individual workers against the economic productivity gains promised by AI. So far, state lawmakers have proposed several strategies to address this issue.
Balancing Act: What to Expect on State AI Policy in 2024
The arrival of generative AI tools last year brought incredible possibilities but it also raised concerns about the potential impact of the new technology. The public and experts think regulations will be necessary. And policymakers have vowed to take quick regulatory action to ensure they will not fall behind the rapidly accelerating technology as they did with data privacy regulation. What can we expect from state AI policy in 2024?
States Address the Alarming Proliferation of Nonconsensual Sexual Deepfakes
Despite all the promise and benefits of AI technology, we’re already seeing some of the real-world, negative impacts as AI is used to produce nonconsensual, sexual deepfake images and videos depicting real individuals in a sexually explicit manner. These images often place the face of an actual person on a naked or partially clothed body that is not their own, and the practice disproportionately targets women. Along with deepfakes aimed at electoral candidates, states are moving quickly to combat the alarming proliferation of nonconsensual sexual deepfakes.
Lessons from Regulating Facial Recognition Technology
Even with recent and highly anticipated releases, AI itself is not all that new, and policymakers were addressing AI use cases long before ChatGPT burst onto the scene a year ago. One example is facial recognition technology — something that policymakers quickly became skeptical of, especially when used on the public without consent. States initially moved forward with broad bans on facial recognition use by law enforcement but eventually backtracked. However, the leader in regulating how technology like facial recognition is used is Illinois.
Understanding California’s Proposed AI Rules: Notifications and Opt-Outs
On November 27, 2023, the California Privacy Protection Agency (CPPA) released draft regulations governing businesses’ use of “automated decision-making technology.” The draft regulations, if adopted, would establish a framework for how businesses can implement automated decision-making technology that uses personal information to make a decision or acts as a replacement for human decision-making. A key issue the CPPA draft regulations address is when a consumer can and cannot opt out of a business’s use of this technology.
Lawmakers Address A Direct Threat from AI: Their Campaigns
Last week, important odd-year elections were held in a handful of states, which means we’re officially in the 2024 presidential election cycle. As lawmakers contemplate their reelection campaigns, a direct concern of theirs, one shared by industry leaders, is the use of AI to influence 2024 election results. AI, in the form of “deepfake” images, audio, and video, could be used to manipulate recordings so a candidate appears to say something they never said, alter a candidate’s movements in an embarrassing manner, or doctor images to perpetuate false narratives.
AI Policy 101: Key Trends in State Legislation
In total, state lawmakers have introduced over 160 bills related to AI this year. Notably, the vast majority of this legislation failed to move past the committee stage of the legislative process. While a few of these bills focused on generative AI models, many were intended simply to bring more attention to the emerging issue. Most of the bills considered this year can be organized into six categories: Facial Recognition Technology, Protections Against Biases, Deepfakes, Social Media Regulation, AI Use in Government, and Study Groups.
How to Define AI?
We’ve established that artificial intelligence is the issue du jour in state capitols and that state lawmakers are solidly in the education stage of the policymaking process around AI regulation. But it’s helpful to take a step back and ask: what are we even talking about? What is “artificial intelligence”? Policymakers and industry insiders have yet to settle on a universal definition, yet lawmakers need a precise, narrow definition of AI in order to regulate the technology in legislation. So far, states have proposed various definitions.