Colorado Governor Receives Landmark AI Bill
Key highlights this week:
We’re currently tracking 603 bills in 45 states (plus another 103 congressional bills) related to AI this year, 35 of which have been enacted into law.
On the last day of session, Colorado’s comprehensive AI bill passed the legislature and was sent to the governor for his signature to become law. Meanwhile, Connecticut’s similar landmark AI bill failed to pass the House after a veto threat.
Tennessee’s governor signed a sexual deepfake bill into law.
While all eyes were on Connecticut to pass a comprehensive AI bill, Colorado lawmakers pushed a similar measure through both chambers of the legislature on the last day of session, sending what could be a landmark AI law to the governor's desk. Meanwhile, the Connecticut bill fizzled out in the House after a gubernatorial veto threat. If Governor Polis (D) signs Colorado’s bill into law, it would be the broadest effort yet to impose obligations on AI developers to protect consumers, with one important caveat: these provisions won’t go into effect until 2026.
Senator Robert Rodriguez (D) introduced Colorado’s comprehensive AI bill (CO SB 205) last month, mirroring the language of Senator Maroney’s (D) proposal in Connecticut (CT SB 2). Senator Rodriguez indicated that AI regulation was “badly needed,” adding that his proposal would “establish foundational guardrails for developers utilizing high-risk AI systems with the goal of reducing algorithmic discrimination and creating a safer user experience for consumers.”
Notably, Colorado’s Senator Rodriguez co-chaired a multi-state working group on AI policy last year with Connecticut Sen. James Maroney (D). Colorado lawmakers amended SB 205 earlier this month to narrow its scope, removing provisions regulating general-purpose AI models and requiring marking and disclosing synthetic content. The version that passed the legislature this week is limited to regulating “high-risk artificial intelligence systems” that make “consequential decisions” of “significant effect” on a list of consumer-facing industries.
Importantly, obligations on developers and deployers would not begin until February 1, 2026, and would require the use of “reasonable care” to protect consumers from any known or “reasonably foreseeable” risks of algorithmic discrimination. Developers would be required to make certain disclosures and documentation available to deployers, the attorney general, and the public. Deployers would be required to implement a risk management policy and program and complete an impact assessment of the system. By placing an effective date over a year and a half away, lawmakers will have time to amend the statute to adjust for technological changes between now and when the AI mandates go into effect.
Under the bill, deployers must notify consumers when “high-risk” AI systems are a substantial factor in consequential decisions, make certain disclosures, provide an opportunity to opt out of having personal data processed for profiling, provide an opportunity to correct data, and provide an opportunity to appeal an adverse decision with human review. Deployers with fewer than 50 employees are exempt from these requirements. AI systems that interact with consumers must disclose that the interaction is with an AI system. The law would be enforceable only by the attorney general (i.e., no private right of action), and companies may be given an opportunity to cure violations.
The themes in this legislation (transparency, bias reporting, opt-outs, and an emphasis on consumer protections) are similar to what we’ve seen in the hundreds of bills introduced by state lawmakers this year. But the Colorado bill would be the broadest such proposal to become law if signed by Governor Polis (D). The governor, a former tech executive, has not yet commented on the current version of the bill but issued a non-committal statement on the measure back in April.
Industry groups have criticized the Colorado bill, arguing it would burden companies with potential liability and stifle a technology that is still getting off the ground. Consumer groups, meanwhile, criticized the proposal for not going far enough, arguing that it contains a loophole that exempts trade secrets from disclosure requirements.
Lack of gubernatorial support ultimately doomed a similar measure in Connecticut. The Senate easily passed SB 2, but House leaders refused to bring the bill up without support from Governor Ned Lamont (D). The bill was scaled back to address some industry concerns, but negotiations this week were unsuccessful as the legislative session adjourned for the year and Lamont refused to lift the threat of a veto. Connecticut Senator Maroney has vowed to return with a “better bill” and “bigger coalition” next year.
Recent Developments
In the News
Google’s AlphaFold 3: On Thursday, Google DeepMind announced AlphaFold 3, an AI model that can predict the structure and interactions of all life’s molecules with unprecedented accuracy. The team hopes that AlphaFold 3 will help transform our understanding of the biological world and drug discovery.
Major Policy Action
Tennessee: Last Friday, Gov. Lee (R) signed a sexual deepfake bill (TN HB 2163) into law. The new law specifies that for the purposes of sexual exploitation of children offenses, the term "material" includes computer-generated images created, adapted, or modified by artificial intelligence. The law goes into effect on July 1, 2024.
Connecticut: On Wednesday, the legislative session ended with the Senate’s signature AI bill failing to gain approval in the House after easily passing the Senate. The bill was essentially stopped once Gov. Lamont (D) issued a veto threat on Tuesday.
Colorado: On Wednesday, the House passed and the Senate concurred with amendments to lawmakers’ comprehensive AI bill (CO SB 205), sending it to Gov. Polis (D) for his signature to become law. The bill would place certain notification and reasonable care requirements on AI developers and deployers; however, if signed into law, the major provisions will not go into effect until 2026.
Delaware: On Tuesday, the House passed a sexual deepfake bill (DE HB 353), sending it to the Senate for consideration. The bill would create civil and criminal remedies for the wrongful disclosure of deepfakes that depict individuals in the nude or engaging in sexual conduct.
Notable Proposals
New York: Last Friday, Assembly Member Clyde Vanel (D) introduced a bill (NY AB 10103) that would require the owner, licensee, or operator of a generative AI system to conspicuously display a warning on the system's user interface, reasonably calculated to consistently apprise the user that the system's outputs may be inaccurate. The bill is a response to an AI system falsely claiming that certain New York lawmakers had been accused of sexual harassment.