Texas Proposal Targets AI Developers, Deployers, and Distributors

Key highlights this week:

  • We’re currently tracking 692 bills in 45 states related to AI this year, 111 of which have been enacted into law.

  • Gov. Shapiro signed a sexual deepfake bill into law this week, making Pennsylvania the 31st state to enact such a law.

  • Policymakers in Colorado heard directly from the businesses that their first-in-the-nation comprehensive AI law will affect when it takes effect in 2026.

  • New Jersey is the latest state to debate legislation to prohibit landlords from using algorithmic software to set rent prices.

This week, Texas Rep. Giovanni Capriglione (R) unveiled a draft of his anticipated artificial intelligence legislative proposal for next year’s legislative session, as reported by Austin Jenkins at Pluribus News. Capriglione has positioned the bill as a more business-friendly model than the law enacted by Colorado and the legislation proposed in Connecticut. In fact, this bill is likely a preview of the type of model legislation we’ll see introduced across a dozen states next year, developed by a bipartisan group of 200 state lawmakers from 45 states. The Texas draft and similar measures could impose novel obligations on the AI industry, and on anyone looking to use or distribute AI tools, that go beyond prior proposals.

The proposed Texas Responsible AI Governance Act focuses on “high-risk” artificial intelligence systems, defined as systems that are a contributing factor in making a consequential decision. Like the law in Colorado (CO SB 205), and the proposal in Connecticut (CT SB 2), the Texas draft would impose obligations on developers and deployers of AI systems that require “reasonable care” to protect consumers against foreseeable risks of algorithmic discrimination. It also requires developers to provide certain information to deployers and complete regular impact assessments.

Notably, the draft bill defines “consequential decisions” that would trigger the bill’s requirements to include any decision that has a material legal, or similarly significant, effect on a consumer’s access to, cost of, or terms of education enrollment or an education opportunity; employment or an employment opportunity; a financial service; an essential government service; electricity services; food; a health-care service; housing; insurance; a legal service; a transportation service; surveillance or monitoring systems; water; or elections.

The Texas proposal would also impose obligations on distributors — those that make an AI system available in the market. Distributors must take reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination and must withdraw the system from the market upon detection of such discrimination. Distributors, or any third party, would have the same obligations as developers if they put their name or trademark on the system, modify the system, or modify the system’s intended purpose. There would also be obligations on digital service providers to make a commercially reasonable effort to prevent advertisers on the service from deploying a high-risk AI system that could expose users to discrimination.

The bill would require disclosures to consumers before any interaction with a high-risk artificial intelligence system that is a contributing factor in making a consequential decision. Consumers would have the right to appeal an adverse consequential decision made by such a system, regardless of whether there was human oversight, along with the right to a "clear and meaningful explanation" of the system's role in the decision-making process.

Rep. Capriglione’s proposal would prohibit certain uses by AI systems, including:

  • Using subliminal techniques with the objective of materially distorting the behavior of a person or group by impairing their ability to make an informed decision;

  • Social scoring — evaluating or classifying people based on social or predictive behavior;

  • Capturing biometric identifiers from a consumer;

  • Inferring or interpreting sensitive personal attributes using biometric identifiers;

  • Utilizing personal attributes for harm;

  • Emotion recognition of consumers without consent; or

  • Producing child or nonconsensual sexual content.

A developer who discovers or is made aware that the system is engaging in any prohibited acts must cease operation of the system as soon as technically feasible.

Specifically prohibiting certain acts provides a narrower approach that focuses on perceived harms, rather than imposing a broad regulatory regime like the one California lawmakers considered (CA SB 1047). However, the bill could become quite broad in application by imposing obligations on parties beyond just developers and deployers. California Gov. Gavin Newsom (D) vetoed SB 1047, citing concerns it could negatively affect smaller technology startups, and the Texas measure could face similar criticisms.

The Texas proposal would provide consumer data protections, amending the state’s comprehensive privacy law to give consumers a right to know if their personal data is or will be used in any AI system and for what purposes. AI systems would have to meet certain data security standards for the collection and storage of personal data. 

Rep. Capriglione's proposal would establish a grant program to develop educational and worker training programs in AI, create a Texas AI Council to advise state agencies on AI use, and create a regulatory sandbox for AI startups. The measure, which would go into effect on September 1, 2025, would be enforceable by the attorney general with a 30-day right-to-cure period but also allow private lawsuits for declaratory and injunctive relief.

Business-friendly Texas may not seem like a likely candidate for a landmark regulatory bill. But Lone Star State lawmakers have been aggressive in addressing concerns with the tech industry, passing a law protecting biometric information as well as a comprehensive consumer privacy law. Attorney General Ken Paxton (R) has also targeted tech firms, recently settling a landmark case against an AI company selling a generative AI healthcare product. 

On the other hand, Texas is one of the top states for AI jobs with Austin as one of the top markets. Rep. Capriglione has served on several multi-state study groups on AI along with Connecticut Sen. James Maroney (D), whose AI legislation (CT SB 2) stalled out last session when Gov. Ned Lamont (D) expressed reservations. Like Sen. Maroney, Rep. Capriglione will likely face resistance from lawmakers concerned that AI regulations could inhibit innovation and scare away business from the state. 


Recent Developments

In the News

Major Policy Action 

  • Pennsylvania: On Tuesday, Gov. Josh Shapiro signed a sexual deepfake bill (PA SB 1213) into law. The law adds deepfake content to existing laws on child pornography and nonconsensual sexual content. Passage of the measure was motivated by an investigation this summer into the dissemination of sexual deepfake images involving students at a local high school.

  • Colorado: The Artificial Intelligence Impact Task Force met last week to consider changes to the landmark AI regulation bill enacted earlier this year (CO SB 205). Bigger tech companies largely supported the current law, suggesting targeted revisions for high-risk use cases, while smaller companies expressed frustration about unclear provisions.

  • New Jersey: Last week, the Assembly Housing Committee advanced a bill (NJ AB 4872) that would prohibit landlords from using algorithmic software to set rent prices and reduce competition. Lawmakers in Colorado, New York, and Rhode Island considered similar legislation earlier this year. The Federal Trade Commission issued guidance earlier this year clarifying that certain algorithm use in the rental market could constitute illegal price fixing, and the U.S. Department of Justice is currently suing a major algorithmic rental pricing software provider for violating antitrust laws. 

  • New York: Assemblymember Alex Bores (D) plans to introduce a package of bills next session to require labeling of deepfake content, using the global provenance standard known as C2PA. His proposals would require social media companies to preserve C2PA metadata for uploaded content, require government agencies to use C2PA authentication on media, require political communications to include such authentication, and require generative AI content generators to tag images with C2PA. 
