Virginia Moves Legislative Framework for High-Risk AI Systems
Key highlights this week:
We’re tracking 575 bills in 45 states for the 2025 legislative sessions.
State governments are beginning to restrict the use of Chinese AI models from DeepSeek by state employees.
Lawmakers in North Dakota’s House passed two sexual deepfake bills.
Finally, Virginia lawmakers passed a handful of bills out of their chambers of origin before the crossover deadline this week, including a high-risk consumer protection bill that is the subject of this week’s deep dive.
Unlike legislatures in 48 other states, the Virginia General Assembly has entered the second year of its two-year legislative session and will wrap up business in a few short weeks as lawmakers prepare for elections this fall. Unsurprisingly, Virginia lawmakers have wasted no time moving major AI legislation, with two major bills passing their chambers of origin this week.
Among the trending legislation we’re keeping a close eye on this year are the “comprehensive consumer protection” AI bills, or “high-risk decision-making” bills, which have been introduced in about a dozen states so far. These bills are based on last year’s Colorado SB 205 and Connecticut SB 2 and aim to protect consumers by requiring transparency and thorough evaluation of potential bias when an AI tool is used to make a consequential decision affecting consumers. We’re currently tracking 17 bills across 10 states that fit this high-risk consumer protection model.
In Virginia, two of these bills are working their way through the legislative process, sponsored by Del. Michelle Maldonado (D) in the House and Sen. Lashrecse Aird (D) in the Senate. Del. Maldonado’s bill (VA HB 2094) would require the same key features we’ve seen in other versions of these bills across the country, including:
Requiring that developers and deployers of a high-risk artificial intelligence system use a reasonable duty of care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination;
Prohibiting a deployer from using a high-risk artificial intelligence system to make a consequential decision without implementing a risk management policy and program and conducting an impact assessment; and
Mandating that a deployer disclose to a consumer that the consumer is interacting with a high-risk artificial intelligence system.
The legislation includes a rebuttable presumption that the reasonable duty of care standard has been met if the developer complies with the documentation requirements outlined in the bill. Additional provisions of Del. Maldonado’s bill would:
Require certain disclosures and documentation to be provided to a deployer, including known limitations, reasonably foreseeable risks, and measures to mitigate risks of discrimination;
Require outputs of a high-risk artificial intelligence system to be marked in a manner that is detectable by consumers; and
Require deployers to make certain disclosures when a consequential decision is made, allow consumers to correct their data, and provide an opportunity to appeal adverse consequential decisions.
Notably, the bill grants the state attorney general the sole authority to enforce its provisions, which means there will be no private right of action allowing citizens of the state to file a lawsuit when their rights under the bill are violated. The original version of the bill addressed responsibilities of “integrators” of high-risk AI systems, which were defined as “a person that knowingly integrates an artificial intelligence system into a software application and places such software application on the market.” However, references to “integrators” were removed in subsequent versions of the bill.
On Tuesday, the Virginia House passed Del. Maldonado’s bill on a narrow 51-47 vote, sending the legislation to the Senate for its consideration. Tuesday was also the crossover deadline for a bill to pass its chamber of origin, so the bill got out of the House with no time to spare. The Senate will need to work quickly — especially if senators plan to make amendments to the bill — because the Virginia Legislature is scheduled to adjourn for the year on Feb. 22.
The Senate sent its own AI bill to the House on Tuesday as well. But the key difference is that while Del. Maldonado’s bill applies to private businesses and entities doing business in Virginia, Sen. Aird’s legislation (VA SB 1214) focuses specifically on the deployment of AI tools by public bodies and government agencies. So for a majority of our audience, the House bill is the one to keep close tabs on.
Virginia Del. Maldonado serves on the steering committee of the multi-state AI working group organized by Connecticut Sen. James Maroney (D) and the Future of Privacy Forum. “I think it’s really critical for us to create guardrails and frameworks that are flexible and breathable, so that we don’t stifle innovation and creativity,” Del. Maldonado said. “But we also (should) keep in mind what it means when we have this kind of technology that can take so much data and put it into training datasets for large language models and other things without us even knowing. We should understand how our data is being used.”
The big political question is whether this legislation, moved by slim Democratic majorities in the legislature, could see a signature from Republican Governor Glenn Youngkin. When announcing an AI Task Force via executive order last year, Gov. Youngkin said that “while there are amazing opportunities with AI, there are also inherent risks that we must tackle head-on.” But the governor has stopped short of publicly supporting regulations on AI as extensive as what lawmakers have moved through the legislative process.
Recent Developments
In the News
OpenAI’s o3-mini: In the endless stream of new model releases, OpenAI released its latest reasoning model, “o3-mini,” in a slimmer and speedier design. OpenAI also released an impressive new agent model called “Deep Research” that uses reasoning to synthesize large amounts of online information and complete multi-step research tasks.
Major Policy Action
Federal: The White House Office of Science and Technology Policy is seeking input on a new AI Action Plan to replace President Biden’s executive order on AI. Responses can address any relevant AI policy topic and are due by March 15.
Texas: Gov. Greg Abbott (R) issued a proclamation banning the use of Chinese-backed artificial intelligence and social media apps on Texas government-issued devices. The order adds DeepSeek, the Chinese AI model unveiled last month, to the state’s prohibited technologies list.
North Dakota: On Monday, lawmakers in the House passed two sexual deepfake bills, one criminalizing non-consensual sexual deepfakes (ND HB 1351) and another criminalizing AI-produced child sexual abuse material (ND HB 1386), sending both bills to the Senate for its approval.
Virginia: Lawmakers passed a handful of bills out of their chambers of origin before the crossover deadline this week. In addition to the two bills described above, these include bills punishing threats, slander, and fraud committed using synthetic media (VA SB 2124), creating property rights in authorized digital replicas (VA HB 2462), and regulating electoral deepfakes (VA HB 2479).
Notable Proposals
Connecticut: A new Senate bill (CT SB 1249) includes recommendations from the governor’s office on artificial intelligence. The measure would create an artificial intelligence regulatory sandbox and an investment fund to develop and recruit AI businesses to the state. The proposal also clarifies that use of AI is not a defense against legal claims arising from an unfair or deceptive act. Gov. Ned Lamont (D) threatened to veto an AI regulatory bill last year over concerns it would deter business.
Kentucky: The legislative session began on Tuesday, and Republican leaders expect to introduce legislation on artificial intelligence to protect people from falsified information. Sen. Reggie Thomas (D) also expects to address the energy demands of artificial intelligence systems.
Maryland: Sen. Katie Hester introduced an AI regulation bill (MD SB 936) modeled after Colorado’s law, which includes a private right of action if an individual has been subject to discrimination as a result of a consequential decision made by a high-risk AI system. Sen. Hester served on the steering committee of the multi-state AI working group of state lawmakers and has already introduced several AI bills this session.
Minnesota: A proposed bill (MN SF 1117) from Sen. Erin K. Maye Quade (D) would require a report on the environmental impacts of AI. Last year she co-sponsored deepfake legislation, and this year the senator has expressed support for a proposal to prohibit the use of AI in hiring decisions.
Pennsylvania: Republican lawmaker Craig Williams introduced a bill (PA HB 518) that would make business entities liable for representations made by AI to consumers. Williams says he was motivated to introduce the bill by a story about a chatbot deployed by a Canadian airline that gave a customer incorrect advice, causing him to buy a full-price ticket.