Texas AI Bill 2.0: Private Sector Gets a Reprieve
Key highlights this week:
We’re tracking 902 AI-related bills in 48 states during the 2025 legislative session.
California Gov. Newsom’s Joint California Policy Working Group on AI issued its draft report.
Lawmakers in Kentucky sent a bill to the governor that would regulate the use of AI by state government.
And there’s a new AI Task Force in Mississippi.
Last Friday, Texas Rep. Giovanni Capriglione (R) filed a new version of his high-profile algorithmic discrimination legislation. The new version removes many of the requirements placed on private sector developers, deployers, and distributors of AI that were a major focus of the original bill.
Rep. Capriglione’s new bill (TX HB 149) shares the same name — the Texas Responsible Artificial Intelligence Governance Act (TRAIGA) — but it differs substantially from his original TRAIGA (TX HB 1709). The original TRAIGA placed several requirements on developers, deployers, and distributors of “high-risk” AI systems. The new TRAIGA does not make any reference to “high-risk” AI systems and imposes very few requirements on private sector development and deployment of AI.
Restricting Many Mandates to the Public Sector
The new TRAIGA does not include any of the substantive provisions establishing requirements for developers, deployers, and distributors of AI found in the original bill. However, it does place requirements on the government’s use of AI. The new TRAIGA requires government agencies to disclose to consumers that they are interacting with an AI before any interaction takes place. This provision is similar to one in the original TRAIGA, but the new disclosure requirement applies only to government use of AI, whereas the original TRAIGA required all developers and deployers of high-risk AI systems to inform consumers when they were interacting with an AI.
The original TRAIGA prohibited all AI that engaged in “social scoring,” a practice that uses observations about an individual to assign them a social valuation or “score.” The new TRAIGA prohibits only the government from using AI for social scoring. This provision addresses concerns about the potential creation of a social credit score similar to China’s, which can reportedly affect many aspects of a person’s life. Social scoring is a relatively new topic in consumer protection AI bills this year, with only New York (NY SB 1169) and Oklahoma (OK HB 1916) considering legislation with similar provisions.
Another major narrowing of scope in the new bill concerns the use of AI systems with biometric identifiers to uniquely identify individuals. The original bill prohibited any entity from deploying AI that could identify specific people if the AI was developed using biometric identifiers. The new version prohibits only government entities from using AI developed with biometric identifiers to identify specific individuals.
Dropping Impact Assessments
Another notable omission in the new TRAIGA is the impact assessment mandate for the private sector. Impact assessments for developers and deployers of “high-risk” AI systems have been a key feature of the two dozen or so algorithmic discrimination bills introduced across the country and developed within a multi-state AI working group led by Connecticut Sen. Maroney (D). Texas Rep. Capriglione has worked closely with this group, and his original TRAIGA proposal followed many of the key features of Sen. Maroney’s own CT SB 2.
Instead of relying on impact assessments for “high-risk” AI use in a broad array of industries, the new TRAIGA prohibits developing or deploying any AI system with the intent to unlawfully discriminate against a protected class. The intent requirement is important: under the legislation, a disparate impact alone is not sufficient to show an intent to discriminate. The omission of impact assessments presents an alternative framework for AI regulation, one that relies on existing laws to address discriminatory impacts by AI. While the new TRAIGA addresses concerns about AI’s potential to discriminate, it does not require any party to actively ensure that AI is not producing discriminatory effects.
Targeting Specific Use Cases
Instead of a broad regulatory approach that places the onus on those developing and deploying AI across “high-risk” use cases, the new TRAIGA focuses on specific use cases of potential harm. For example, the bill would prohibit creating AI that encourages or incites someone to harm themselves or others, would prohibit using AI to create unauthorized sexual deepfakes, and would allow consumers to appeal decisions made by AI that adversely affect their health, welfare, safety, or fundamental rights.
The new version of TRAIGA also would prohibit any entity from developing or deploying AI that engages in “political viewpoint discrimination” by blocking, banning, demonetizing, or taking other similar actions to censor an individual’s political speech. This has been a popular rallying cry for conservatives, who have argued that their voices are censored on social media.
A Reduced AI Council
Similar to the original, the new TRAIGA creates an AI Council to evaluate AI systems, provide guidance, and monitor a regulatory sandbox program. However, the Council in the new bill cannot issue rules, binding guidance, or anything that could be construed as regulatory guidance to any entity or agency. In the original bill, the Council was given rulemaking authority allowing it to request advisory opinions, create standards for ethical AI development and deployment, and establish guidelines for evaluating the safety of AI systems.
While the new TRAIGA contains substantial changes, some provisions of the original remain virtually untouched. Enforcement still rests solely with the attorney general’s office, and the bill does not create a private right of action for consumers, even for AI deployed and used by the government. The provisions for the AI regulatory sandbox are also virtually identical to the original.
With the bulk of the provisions imposing requirements on developers, deployers, and distributors of AI removed, the new version of TRAIGA is likely to draw more support from the business community than the original. Whether these changes will win enough support for TRAIGA to become law remains to be seen.
Recent Developments
Major Policy Action
California: Gov. Gavin Newsom’s Joint California Policy Working Group on AI issued its draft report on Tuesday, emphasizing transparency backed by independent third-party verification, adverse incident reporting, and whistleblower protections. The Working Group is soliciting further feedback until April 8, with a final report expected in June. In a statement, Sen. Scott Wiener (D) said he is considering incorporating the recommendations into his AI legislation.
Kentucky: The General Assembly passed a bill (KY SB 4) and sent it to the governor for his signature. The measure would regulate the use of AI by state government, requiring disclosure to those affected by a decision made with AI, requiring risk management policies, and creating an AI Governance Committee. The original bill also included a ban on political deepfakes, which was stripped out by a committee amendment.
Mississippi: Gov. Tate Reeves (R) signed a bill (MS SB 2426) into law creating the Artificial Intelligence Regulation (AIR) Task Force. The task force is directed to report its findings and recommendations by December 1 each year until it dissolves on December 31, 2027.
New York: In an interview this week, Asm. Alex Bores (D) suggested that an AI bill introduced in January by Sen. Kristen Gonzalez (D) (NY S 1169) “will probably move quicker this session” than his New York Artificial Intelligence Consumer Protection Act (NY A 768). The Senate bill would require independent audits of high-risk AI systems, require disclosures to consumers about decisions made by AI, and make developers legally responsible for the quality and accuracy of decisions made by AI.
Notable Proposals
Minnesota: House Democrats introduced a bill on Monday (MN HF 2452) to address the use of AI in dynamic pricing. The proposal prohibits a person from using AI to adjust, fix, or control product prices in real time based on market demand, competitor prices, inventory levels, customer behavior, or other factors.
New York: Last week Asm. Clyde Vanel (D) introduced a measure (NY A 6767) that would regulate AI companion operators, requiring warnings at the outset of interactions and every three hours thereafter, as well as protocols for addressing suicidal ideation or expressions of self-harm or harm to others by the user. The proposal follows a lawsuit by a Florida mother against an AI company over the suicide of her 14-year-old son after interactions with character chatbots.
Oregon: Rep. Darcey Edwards (R) introduced a bill (OR HB 3936) to prohibit AI software developed in a foreign country from being downloaded or used on state-issued devices, in response to the release of the China-based AI app DeepSeek. Iowa, New York, South Dakota, Texas, and Virginia have already banned the app through executive orders, and a group of 21 attorneys general has urged a nationwide ban on the app on government devices.
Texas: Sen. Nathan Johnson (D) introduced two bills this week targeting the use of bots in misleading interactions. TX SB 2637 would require a social media platform to label content posted by bot accounts with a warning that it may contain misinformation. TX SB 2638 would prohibit using a bot account to mislead someone into entering a commercial transaction by pretending to be someone else.