Taking the Pulse of State-Level AI Health Care Regulation

Key highlights this week:

  • We’re tracking 629 bills in 46 states related to AI during the 2025 legislative session. 

  • Democrats in Georgia and Nevada introduced broad consumer protection AI legislation requiring regulatory impact assessments. 

  • Virginia joins Texas and New York in banning the Chinese AI app DeepSeek on state-issued computers and devices.

  • AI study committees in Colorado and Arkansas have published their recommendations in new reports. 

When proponents talk about the benefits of AI, health care is one of the most prominent areas of focus. Over time, it is hoped that AI can lead to breakthroughs in diagnosing diseases, discovering new drugs, and detecting cancer. Before that promise comes to fruition, the messaging from lawmakers has been clear: consumer privacy and quality of care must be protected, and the technology must be adopted deliberately to mitigate disruption to health care delivery. This year, lawmakers in several states have introduced legislation addressing AI's use in the health care industry, particularly by health insurers.

Lawmakers in many states have already introduced legislation that specifically addresses the use of AI in health care. Of the 629 AI-related bills we're tracking in 2025, 35 directly relate to health care. While few such bills made it across the finish line last year, we expect that to change in 2025. Broader consumer protection bills, such as Colorado's SB 205, enacted last year, include health care decisions in their list of "consequential decisions" that trigger a series of requirements for developers and deployers using AI tools in health settings. But lawmakers have also zeroed in on health care as a specific industry, addressing it with more targeted AI legislation.

Last year, California was the most active state on health-related AI laws, enacting two of them. The first law (CA SB 1120) requires a health insurer that uses AI tools for utilization review or management decisions to comply with requirements pertaining to the approval, modification, or denial of services. The second law (CA AB 3030) requires a health facility that uses generative AI to generate patient communications to ensure that those communications include a disclaimer that the communication was generated with AI, along with instructions on how a patient may contact a human provider.

This year, lawmakers are considering legislation to have state insurance departments regulate how health insurers use AI. Illinois is debating a proposal called the AI Use in Health Insurance Act (IL SB 1425), which would grant the Department of Insurance oversight of an insurer's use of AI to make or support determinations that negatively impact consumers. In Maryland, lawmakers are considering a measure (MD HB 697) that would require insurers to submit quarterly reports on their use of AI to the Maryland Insurance Commissioner. Another Maryland measure (MD SB 987) would require anyone distributing or operating AI health software to register with the Maryland Health Care Commission.

Many bills this year have focused on how AI is used in the utilization review process. In New York, lawmakers are considering a bill (NY AB 3991) that would require health insurers that use AI for utilization review to use only AI that bases its decisions on an individual's medical history and individual clinical circumstances. Tennessee (TN SB 1261/HB 1382) and Massachusetts (MA SD 268) are considering similar legislation.

Some states are considering legislation that goes further by prohibiting the use of AI to deny claims or coverage. In Texas, lawmakers are considering a bill (TX SB 815) that would prohibit using AI as the sole basis to deny, delay, or modify care. Lawmakers are debating a similar bill in Arizona (AZ HB 2175), which would prohibit the use of AI to deny claims or prior authorization. However, unlike the Texas bill, the Arizona proposal does not address whether or how AI can be used to modify claims.

Lawmakers in New York (NY AB 1456) and Massachusetts (MA HD 3750) have introduced legislation that would require health insurers to disclose that AI is being used in the utilization review process. New York Assemblymember Pamela Hunter (D), the lead sponsor of the New York bill, is also the current President of the National Conference of Insurance Legislators (NCOIL) and has prioritized the development of model legislation regulating AI in health care during her tenure leading the influential legislator group. In Arkansas, legislators are considering a bill (AR HB 1297) that would require health insurers to disclose the strengths and limitations of AI used in the utilization review process and would also prohibit using an algorithm as the sole basis to determine whether care should be denied, delayed, or modified.

Transparency is the low-hanging fruit of AI regulation, so it's no surprise that disclosures are a big part of health care-related AI bills. In Illinois, lawmakers are debating a bill (IL SB 2259) that would require disclosure when generative AI is used for written or verbal patient communications and require instructions on how patients can reach a human. Indiana is considering similar legislation (IN HB 1620) that would require health care providers to disclose the use of AI to generate communications with patients regarding their care and the use of AI to make or inform decisions regarding their care. These bills are modeled on the law California enacted last year (CA AB 3030).

Given the sensitivity of health care and the bright promise of AI technology, the health care industry is likely to face new laws and regulations regarding AI, whether through standalone AI health care legislation or comprehensive AI consumer protection legislation.

Recent Developments

In the News

  • AI Action Summit: At an international AI summit in Paris this week, Vice President J.D. Vance warned leaders against “excessive regulation” of the technology, highlighting the divide between the U.S. free market approach and the European regulatory approach. The United States and the United Kingdom refused to sign a pledge for responsible AI development signed by 60 other nations, including China.

Major Policy Action 

  • Virginia: On Wednesday, the Senate approved a synthetic media fraud and defamation bill (VA HB 2124) with amendments. The House had already passed the bill but will need to approve the Senate's amendments before the bill is sent to Gov. Youngkin (R) for his signature. And on Tuesday, Gov. Youngkin signed an executive order banning the Chinese AI app DeepSeek on state-issued computers and devices. New York and Texas have already banned the app on state devices.

  • Colorado: This month, after meeting for over a year, the Artificial Intelligence Impact Task Force released a report with recommendations on how to amend the law passed last year (CO SB 205), which is set to go into effect in 2026. Among the changes on which consensus appears achievable are clarifying the types of decisions that qualify as "consequential decisions" under the law, amending the exemption list, changing the documentation developers must give deployers, and amending the timing and triggering of impact assessments.

  • Arkansas: Last week, the AI and Analytics Center of Excellence issued a report to Governor Sarah Huckabee Sanders (R) recommending policies and programs to adopt AI across state government. The report recommends protecting sensitive data, protecting against AI misuse, establishing AI governance, securing AI infrastructure, developing an AI-ready workforce, and building employer-aligned AI talent pipelines.

  • Oregon: Last week, Gov. Tina Kotek (D) unveiled the State Government Artificial Intelligence (AI) Advisory Council Final Recommended Action Plan to guide the use of AI in state government. The Council, which has met for the past 11 months, recommends a framework that includes human-in-the-loop oversight, addresses privacy concerns, develops incident response protocols and risk management strategies, requires auditing and testing, and addresses workforce needs.

Notable Proposals 

  • California: Artificial intelligence would be prohibited from impersonating health care professionals under a bill (CA AB 489) introduced by Assemblywoman Mia Bonta (D). The bill, which is supported by medical professionals, would prohibit a chatbot from implying that health care advice is provided by a licensed natural person.

  • Georgia: Senate Democrats unveiled a bill (GA SB 167) to regulate automated decision-making, requiring certain documentation and disclosures by developers and deployers. The Georgia Senate Committee on Artificial Intelligence issued a report last December that could result in legislation introduced this session. 

  • Idaho: A bill (ID SB 1067) introduced this week by the Senate Commerce and Human Resources Committee would prohibit the government from enacting any law that would constrain the development, training, or use of artificial intelligence, its deployment in commercial applications, or consumer use of AI technologies. It would also prohibit the government from regulating algorithms and automated decision-making processes.

  • Nevada: Senator Dina Neal (D) introduced a comprehensive AI bill (NV SB 199) on Monday that would require artificial intelligence companies to register with the state and complete a self-assessment twice a year. The bill enumerates several prohibited uses for AI, including using the technology to replace teachers, write police reports, and set housing rental prices using competitor data.

  • New Mexico: Representative Linda Serrato (D) introduced the AI Synthetic Content Accountability Act (NM HB 401) on Tuesday, which would require a synthetic content provider to place an imperceptible watermark in the content it generates. The measure would also require content providers to make available, at no cost, a watermark decoder that allows a user to assess the provenance of a single piece of content.
