The New Wave of Comprehensive Consumer Protection AI Bills
Key highlights this week:
After state lawmakers introduced nearly 700 AI-related bills last year, we’re already tracking 304 bills as states gavel in their 2025 legislative sessions.
A gubernatorial initiative and proposed legislation in New York take aim at protecting children from AI-powered chatbots.
California’s AG advises businesses on AI use, while Arizona and Rhode Island look to staff new AI study committees.
Connecticut Gov. Lamont (D) pumps the brakes on Sen. Maroney’s second try at enacting a comprehensive consumer protection AI bill, but that legislation has inspired colleagues around the country, which is the topic of this week’s deep dive.
The AI policy trend that I’m keeping a keen eye on this year is what I’ll refer to as the “comprehensive consumer protection” AI bills. These originated last year with the introduction of Sen. Maroney’s (D) SB 2 in Connecticut. While that particular bill failed to cross the finish line due to a gubernatorial veto threat, legislation originally modeled on CT SB 2 passed the legislature in Colorado and was (reluctantly?) signed into law by Gov. Polis (D). Notably, that law (CO SB 205) won’t go into effect until February 2026, giving lawmakers time to make necessary amendments.
In the aftermath of his bill failing last year, Connecticut Sen. Maroney has led a bipartisan group of lawmakers representing nearly every state in regular video calls to learn about AI issues and develop policy options for states to take on. Those discussions have built a policy consensus around the core requirements of last year’s Colorado and Connecticut legislation, and members of this group plan to introduce these bills in anywhere from a dozen to two dozen states in 2025.
The first hint of this next wave of comprehensive AI consumer protection legislation came late last year, when Rep. Giovanni Capriglione (R) released a draft of his ambitious version of these bills, now formally introduced in Texas as TX HB 1709. In the first few weeks of the year, we’ve identified 11 such bills introduced across seven states. These include the Texas bill referenced above, Connecticut’s 2025 version of SB 2 (a placeholder for now), and bills in Virginia (VA HB 2094), New York (NY AB 768/SB 1962), Massachusetts (MA HD 396), New Mexico (NM HB 60), and Hawaii (HI SB 59). We’re hearing similar bills will drop in Georgia and Florida soon. But what exactly do these bills do, and what features do they have in common?
Overall, these bills aim to protect consumers by requiring transparency and thorough evaluation of potential bias whenever an AI tool is used to make a consequential decision affecting consumers. Their key common features are to (1) establish a “reasonable care” standard to protect consumers from potential bias in “high-risk” AI systems, (2) mandate the drafting and publishing of “impact assessments,” and (3) require notification when an AI system is a “substantial factor” in making a “consequential decision” that affects a consumer.
That’s a lot of quoted legislative language, so let’s break it down one requirement at a time. First, these bills impose a duty of “reasonable care” on deployers of “high-risk” AI systems to protect consumers from foreseeable risks of algorithmic discrimination. The Colorado law defines a “high-risk” AI system as any system that makes, or is a “substantial factor” in making, a “consequential decision.” But what’s a “consequential decision”? Under the Colorado law, a “consequential decision” is one with a material legal or similarly significant effect on the procurement, cost, or denial of education or employment opportunities; financial or lending, essential government, legal, or health care services; housing; or insurance. In other words, essentially any major consumer-facing industry. These definitions are so broad that any company using AI tools will have plenty of questions about which client or consumer interactions might trigger these laws, a concern small businesses raised at a Colorado task force hearing held to clarify implementation of that state’s AI law.
Second, to ensure compliance with this reasonable care standard, the bills require deployers of these AI tools to evaluate their systems for potential bias and publish the results in an “impact assessment” report (or simply write one up and keep it on file). As we’ve written previously, impact assessments are meant to safeguard against disparate treatment and discriminatory outcomes of AI use, with periodic reporting to ensure AI tools do not inadvertently produce such effects. The bills vary in how frequently and how deeply these assessments must be conducted, and in what exactly triggers an update to the report. Regardless, impact assessments could mean mountains of paperwork for anyone looking to deploy AI tools in major industries.
Finally, many of these bills would require the deployer of “high-risk” AI systems to proactively notify any customer or client when that system is a “substantial factor” in making a “consequential decision” that affects the consumer. This transparency feature is common in even the less comprehensive AI laws, such as Utah’s AI Policy Act enacted last year. Transparency is the low-hanging fruit of the AI policy world and an important foundation to build on.
As with all the AI laws state policymakers are developing, enforcement will be key. Most of 2025’s comprehensive consumer protection AI bills leave enforcement to state attorneys general, but Hawaii’s bill includes a private right of action for “any person aggrieved by a violation” of the law to bring a civil action seeking an award of up to $10,000 per violation in damages (plus punitive damages and other relief “that the court determines appropriate”).
These bills are trying to accomplish a lot all at once. Another option is to address individual concerns independently with narrower bills, like the rental-industry AI measures we discussed last week. But there’s a decent chance that at least one of these comprehensive consumer protection AI bills both makes it into law and goes into effect soon.
Recent Developments
In the News
Google: This week, Google rolled out AI features in Gmail, Docs, Sheets, and Meet to all Workspace customers, giving all users access to AI tools that previously required an additional charge. The move is seen as a competitive response to Microsoft, which has rolled out AI features across its own workplace tools.
Major Policy Action
Federal: In one of his final actions, President Joe Biden signed an executive order this week to bolster the construction of AI data centers. The order directs the Department of Energy and the Department of Defense to select sites that will be leased to private entities to construct data centers and clean power facilities. Entities selected to build at these sites will be required to bring online sufficient clean energy resources to match the electricity needs of the data center being constructed.
New York: Gov. Kathy Hochul (D) announced a new “Unplug and Play” initiative designed to protect children from the harms of social media and the internet. The initiative includes proposals to treat artificial-intelligence-generated child sexual abuse material as child pornography and to regulate AI companionship companies, requiring them to protect users from self-harm and to remind users they are interacting with machines. Assemblyman Clyde Vanel (D) has already introduced legislation (NY AB 222) aimed at regulating chatbot interactions and is drafting another bill aimed at protecting children from chatbots.
Arizona: Gov. Katie Hobbs (D) is seeking applicants to serve on an AI Steering Committee to shape artificial intelligence policy in the state. The state passed a political deepfake bill (AZ SB 1359) last year but has yet to take up any broader legislation.
California: Attorney General Rob Bonta (D) issued two advisories informing businesses that use AI of their legal obligations under current law. One advisory reminds businesses of obligations under unfair competition and false advertising laws, including prohibitions against falsely advertising AI accuracy or quality, using AI to foster deception, using a person’s likeness without consent, and using AI to violate competition or civil rights laws. The other advisory applies to health care providers, warning against using automated decision systems in ways that lead to discrimination or without certain checks and human review.
Connecticut: On Tuesday, Gov. Ned Lamont (D) expressed reservations about a broad comprehensive regulatory bill on AI, instead favoring narrow legislation to address deepfakes. He told reporters, “We’ll see if there is anything else we’ve got to do in terms of guardrails, but I also don’t want to do anything that slows up innovation and makes that smart, young programmer think that maybe it’s a little safer to do this in Georgia than it is in Connecticut.”
Nevada: Secretary of State Cisco Aguilar (D) announced his legislative priorities this week, proposing that his office review political ads containing AI-created images, with disclosures to the public.
Rhode Island: Senate President Dominick Ruggerio (D) appointed Sen. Victoria Gu (D) to chair a new Senate Committee on Artificial Intelligence & Emerging Technologies. The committee, which has yet to schedule its first meeting, will study “legislation and matters relating to emerging technologies, including artificial intelligence, and their societal, ethical and policy implications.”
Notable Proposals
Illinois: Insurer use of AI would be regulated under a proposed bill (IL HB 5918) from Rep. Bob Morgan (D). The measure would require meaningful review of any decision to deny, reduce, or terminate insurance plans or benefits that results from the use of AI systems or predictive models.
Indiana: Rep. Joanna King (R) has introduced a bill (IN HB 1620) that would require health care providers to disclose when AI technology is used to make or inform any decision involved in the provision of health care, or to generate any part of a communication to a patient regarding the patient’s care. The bill would also regulate health insurers’ use of AI in coverage decisions.
Massachusetts: A draft bill from Rep. Jay Livingstone (D) (MA HD 1861) would require generative AI providers to apply provenance data to synthetic content so users know the content was digitally created. The bill would also require social media platforms to carry that provenance data and require cameras and recording devices to let users embed provenance data in the content they capture.
Virginia: Among the large number of AI bills introduced this session is a bill (VA SB 1053) to include synthetic digital content in defamation, slander, and libel laws. The proposal would make it a Class 1 misdemeanor for any person to use any synthetic digital content for the purpose of committing any criminal offense involving fraud.
Washington: Rep. Clyde Shavers (D) introduced two AI-related measures this week. One bill (WA HB 1168) would require developers of generative AI systems to publicly disclose the datasets used to develop their systems. Another (WA HB 1170) would require generative AI systems to make an AI detection tool available to users so they can determine whether content was digitally created.