Consumer Protection Bills Addressing Algorithmic Discrimination
After Colorado enacted the first law protecting consumers from AI tools that make high-risk decisions, other states have introduced dozens of similar bills in 2025.
The trending legislation we’re keeping a close eye on in 2025 is the wave of “comprehensive consumer protection” or “high-risk decision-making” AI bills that state lawmakers have introduced. These bills are modeled on 2024’s Colorado SB 205 and Connecticut SB 2 and aim to protect consumers by requiring transparency and a thorough evaluation of potential bias whenever an AI tool is used to make a consequential decision that affects consumers. Colorado was the first, and so far only, state to enact this type of law when Gov. Polis (D) signed SB 205. However, the governor’s signing statement expressed his “reservations” about the scope of the legislation: “I am concerned about the impact this law may have on an industry that is fueling critical technological advancements across our state for consumers and enterprises alike.” Colorado’s SB 205 won’t go into effect until February 2026, and an AI task force in Colorado released a report in February 2025 outlining areas where the law could be clarified, refined, and otherwise improved.
After his bill failed last year, Connecticut Sen. Maroney has been leading a bipartisan group of lawmakers representing nearly every state in regular video calls to study AI issues and develop policy options for states to take up. Those discussions have built consensus around the core requirements of last year’s Colorado and Connecticut legislation, and members of the group plan to introduce these bills in anywhere from a dozen to two dozen states in 2025. We’ve done in-depth analysis of several of these proposals:
The New Wave of Comprehensive Consumer Protection AI Bills (Jan. 17, 2025).
The Return of Connecticut’s SB 2: Algorithmic Discrimination (Feb. 21, 2025).
Virginia Moves Legislative Framework for High-Risk AI Systems (Feb. 7, 2025).
California's Proposed Rules on Automated Decision-Making Technology (Dec. 6, 2024).
Texas Proposal Targets AI Developers, Deployers, and Distributors (Nov. 1, 2024).
Colorado Governor Receives Landmark AI Bill (May 10, 2024).
These bills share three key features: they (1) establish a “reasonable care” standard to protect consumers from potential bias in “high-risk” AI systems, (2) mandate the drafting and publishing of “impact assessments,” and (3) require notification when an AI system is a “substantial factor” in making a “consequential decision” that affects a consumer.
That’s a lot of quoted legislative language, so let’s break it down one requirement at a time. First, these bills impose a duty of “reasonable care” on deployers of “high-risk” AI systems to protect consumers from foreseeable risks of algorithmic discrimination. The Colorado law defines a “high-risk” AI system as any system that makes, or is a “substantial factor” in making, a “consequential decision.” But what’s a “consequential decision”? Under the Colorado law, a “consequential decision” is one with a material legal or similarly significant effect on the procurement, cost, or denial of education or employment opportunities; financial or lending, essential government, legal, or health care services; housing; or insurance. In other words, essentially any major industry that deals with consumers. These definitions are so broad that any company using AI tools will have plenty of questions about which client or consumer interactions might trigger these laws, a concern small businesses raised at a Colorado task force hearing on clarifying implementation of the AI law.
Second, to ensure compliance with this reasonable care standard, the bills require a deployer of these AI tools to evaluate its AI systems for potential bias and publish the results (or simply write them up and keep them on file) in an “impact assessment” report. As we’ve written previously, impact assessments are meant to safeguard against disparate treatment and discriminatory outcomes of AI use; the bills mandate periodic reporting on AI tools to ensure they do not inadvertently produce disparate or discriminatory effects. The bills vary in how frequently and how deeply these impact assessments must be conducted and in what exactly triggers an update to the report. Regardless, impact assessments could mean mountains of paperwork for anyone looking to deploy AI tools in major industries.
Finally, many of these bills would require the deployer of a “high-risk” AI system to proactively notify any customer or client when that system is a “substantial factor” in making a “consequential decision” that affects the consumer. This transparency feature is common even in the less comprehensive AI laws, such as Utah’s AI Policy Act, enacted last year. Transparency is the low-hanging fruit of the AI policy world and an important foundation to build on.