The Return of Connecticut’s SB 2: Algorithmic Discrimination
Key highlights this week:
We’re tracking 683 bills in 47 states related to AI during the 2025 legislative session.
Lawmakers in Virginia sent their high-risk consumer protection AI bill to the governor for his signature or veto. The governor will have 30 days to decide the bill’s fate.
Sen. Maroney in Connecticut has released the full language of his high-risk consumer protection AI bill, which is the focus of this week’s deep dive below.
Last year, many expected Connecticut to become the first state to pass artificial intelligence regulation due to the leadership of bill sponsor Sen. James Maroney (D). While his bill easily passed the Senate, it stalled in the House when Gov. Ned Lamont (D) threatened to veto it. After consulting with stakeholders, Sen. Maroney has returned with a new bill he hopes to shepherd to the finish line this year. But it remains to be seen whether Gov. Lamont will again play spoiler.
After a legislative defeat last year, Sen. Maroney hopes to use the landmark Colorado law, enacted last year and modeled on Maroney’s own Connecticut bill, as a framework, vowing to “stay as close as possible to the Colorado bill.” The senator introduced a placeholder bill in January, but the full draft language (conveniently numbered CT SB 2 again this year) was finally released this week.
Like many of the high-risk consumer protection AI bills we’re watching this year, the Connecticut bill would impose obligations on developers and deployers of high-risk AI systems. We’re now watching similar bills in 10 states, and Virginia’s version (VA HB 2094, which we analyzed here) passed the legislature this week, sending it to Gov. Youngkin (R), who has 30 days to decide whether to sign it into law or veto it.
A new feature of Connecticut’s bill this year is a set of requirements on “integrators,” defined as entities that integrate an AI system into a product or service. This distinction was not present in last year’s Connecticut bill, but the concept appeared in the original version of Virginia’s bill this year, although those provisions were removed in a later amendment.
This year’s version of SB 2 also departs from Colorado’s law by imposing obligations not only on developers of “high-risk” AI systems but also on developers of “general-purpose” AI systems. The bill would require general-purpose AI developers to maintain technical documentation of training and testing processes, intended tasks, acceptable use policies, and the data used to train the model. Systems used only for internal purposes and not intended to interact with consumers would be exempt, although even then the developer would need a risk management policy. The Connecticut proposal would also narrow Colorado’s definition of “substantial factor” for AI involved in decision-making, defining it as a factor that “alters the outcome of a consequential decision,” not merely one that “assists in making a consequential decision.” By focusing on the outcome rather than the mere use of AI in the process, this narrower version of “substantial factor” should cover fewer uses of AI tools.
Sen. Maroney did depart from last year’s bill in some ways to adhere more closely to the standard set in Colorado. Drawing from Colorado’s law, this year’s bill spells out exemptions for AI uses not intended to be regulated as “high-risk” systems, such as anti-virus software, calculators, spreadsheets, and spellcheck. What constitutes a “consequential decision” no longer includes a criminal justice remedy. This year’s bill also has a stricter definition of what constitutes an “intentional and substantial modification” to a system requiring an update of documentation. And while last year’s Connecticut bill did not specify that the obligation on developers and deployers to use reasonable care to protect consumers from risks of algorithmic discrimination applies only to the “intended and contracted uses” of the system, both the Colorado law and this year’s Connecticut bill state this explicitly.
Narrowing the applicability of the proposal, this year’s Connecticut bill requires notice of algorithmic discrimination to the Attorney General only if it affects at least 1,000 consumers. It also exempts deployers from certain obligations if they are under contract with a developer who has assumed deployer duties, the system is not trained exclusively on the deployer’s own data, the system is used for its intended uses, and impact assessments are still provided.
Continuing the theme of transparency, the Connecticut bill requires disclosure to consumers when they interact with an AI system. Disclosure is also required when a consequential decision is made, but only if the AI is a “substantial” factor (as Colorado’s law requires), not a “controlling” factor (the standard set by last year’s SB 2). This year’s bill also gives consumers an opportunity to correct their data. However, unlike Colorado’s law, the opportunity to appeal an adverse decision is available only if the decision was based on inaccurate data.
Finally, the bill addresses a myriad of other AI-related policy concerns that the Colorado law did not. The Connecticut bill would require synthetic content to be marked in a way that is “detectable,” rather than machine-readable as last year’s bill required. The proposal also makes it unlawful to distribute nonconsensual sexual deepfakes, a prohibition that 21 other states have put in place but Connecticut has yet to adopt.
The bill also includes several economic development and training provisions, perhaps a nod to Gov. Lamont, who has a bill of his own recommendations (CT SB 1249) focused on developing the industry in the state. Both bills propose an AI regulatory sandbox program similar to the one implemented in Utah. Sen. Maroney’s bill also includes provisions to use AI to improve government efficiency, direct training and education programs, and establish a computing cluster to provide resources to entrepreneurs.
Gov. Lamont has continued to express reservations that regulation could stifle innovation and drive AI startups elsewhere. His veto pen could loom large, but the passage of a law in Colorado and efforts to pass similar legislation in states like Maryland, Texas, and Virginia could ease qualms about Connecticut being such an outlier.
Recent Developments
In the News
Grok 3: Another week, another AI model release. On Monday, xAI (Elon Musk’s AI venture) released a beta version of its flagship Grok 3 family of AI models. Included is Grok 3 Reasoning, a reasoning model that uses techniques similar to those behind OpenAI’s o-series models and DeepSeek’s R1 model.
Major Policy Action
Virginia: On Thursday, lawmakers passed their version of a high-risk consumer protection AI bill (VA HB 2094, which we analyzed here), sending it to the governor for his signature or veto. Because the legislature is scheduled to adjourn on Saturday, the governor will have 30 days to decide whether to sign the AI bill into law. If signed, Virginia would likely become the second state to enact such a bill, following Colorado last year, and the law would go into effect in July 2026.
Arkansas: On Wednesday, lawmakers passed a digital replica bill (AR HB 1071), sending it to the governor for her signature. The bill would amend the Publicity Rights Protection Act to cover images and voices generated through artificial intelligence.
Notable Proposals
Arkansas: Senator Clint Penzo (R) and Rep. Stephen Meeks (R) have sponsored a bill (AR SB 258) that pairs comprehensive privacy provisions giving consumers rights over their data with comprehensive regulation of AI, including obligations on developers and deployers. The Digital Responsibility, Safety, and Trust Act would also require notice to consumers subject to a decision with significant effects in which AI was a substantial factor.
California: Senator Josh Becker (D) has introduced a bill (CA SB 468) that would impose a duty on deployers of high-risk AI systems to protect personal information. He is also expected to propose legislation amending the California AI Transparency Act (CA SB 942), which he sponsored last year and which requires generative AI providers to make an AI detection tool available to consumers.
Colorado: On Tuesday, a group of lawmakers from both chambers sponsored a bill (CO HB 1264) to prohibit surveillance-based price discrimination and surveillance-based wage discrimination. There are similar proposals in California (CA AB 325, AB 446, and SB 295), Georgia (GA SB 164), Illinois (IL SB 2255), and Ohio (OH SB 79).
Kentucky: Lawmakers in both chambers introduced legislation (KY HB 672/SB 4) that would guide state use of AI through the creation of an Artificial Intelligence Governance Committee. The measures would also prohibit political deepfake communications without a disclaimer.
Vermont: Rep. Monique Priestley (D) plans to introduce AI legislation with a private right of action. Priestley sponsored a consumer data privacy bill last year that nearly became the first in the nation to include a private right of action before it was vetoed by Gov. Phil Scott (R).