States Shield Consumers from AI: Is Your Industry Next?
It was a relatively slow week in AI legislation in the states after a handful of whirlwind weeks to start the year. Some key highlights:
West Virginia and Texas add an AI task force and legislative committee, respectively. So far, 27 states have officially established a group to study AI policy.
In a first-of-its-kind ruling, a state judge in Washington barred the use of an AI-enhanced video as evidence in a criminal case.
We’re currently tracking 571 state bills (plus another 94 congressional bills) related to AI this year.
More legislative movement for deepfake bills in Missouri, Hawaii, and Mississippi.
This session, beyond their focus on combating deepfakes or comprehensively regulating the development of AI models, state lawmakers have introduced dozens of bills aimed specifically at AI use in particular industries. Last week we discussed how lawmakers are looking to protect certain industries from AI competition (e.g., music, movies, fashion), but this week we’ll examine legislation aimed at protecting the customers of specific industries — such as insurance, health care, legal, and housing — from AI use.
One industry of concern for state lawmakers is insurance. In New Jersey, lawmakers introduced a bill (NJ AB 3858) that would require insurance carriers to disclose on their websites whether they use an automated utilization management system and how many claims were reviewed using the automated system the previous year. In Oklahoma, the AI Utilization Review Act (OK HB 3577) would require health insurance providers to disclose if AI will be used in the utilization review process and would require health insurers to annually submit the data that AI was trained on to the Oklahoma Insurance Department. Note that the key feature of both of these bills is transparency, consistent with what we’ve seen across the AI regulatory landscape this year.
In addition to regulating how insurers can use AI themselves, lawmakers are also contemplating mandates that insurers cover AI use in health care. In Rhode Island, lawmakers are debating a bill (RI HB 8073) that would require health insurance plans to provide coverage for AI technology that is used to analyze breast tissue diagnostic imaging.
Much of the promise of AI technology centers around the health care industry. Both Georgia (GA HB 887) and Louisiana (LA HB 916) introduced bills that would prohibit health care entities, among others, from making decisions regarding patient care based solely on results produced by AI systems. The Louisiana bill goes a step further and would require that health care professionals review any decision made with the use of AI and allow them to override that decision. In Illinois, lawmakers are considering a bill (IL HB 5649) that would prohibit mental health professionals from providing services through AI without first obtaining consent from the patient.
The legal field was one of the first to have some widely publicized mishaps with AI. In response, Washington lawmakers introduced a bill (WA SB 6073) that would require attorneys who use generative AI to conduct legal research or to draft documents filed with a state court to disclose that AI was used and certify that every citation has been verified as accurate. Additionally, lawmakers in California introduced a bill (CA AB 2811) that would require attorneys to execute and maintain an affidavit for seven years certifying whether generative AI was used when drafting documents filed by an attorney in a state or federal court.
Finally, housing policy is a hot topic in statehouses this year. Legislation in New York (NY SB 7735/AB 7906) would require landlords using AI to screen applicants to provide notice that AI is being used, and if an applicant is denied by the AI, the landlord must provide the reasons for the denial. Additionally, this bill would require landlords to annually conduct a disparate impact analysis to ensure the AI is not producing biased and discriminatory results. Lawmakers in Rhode Island are considering a bill (RI HB 8058) that would prohibit a landlord from using AI to determine the rent for a residential property or from using or sharing information with the intent to stabilize or increase rental prices among units owned by different entities. Currently, the Attorney General’s Office in the District of Columbia is suing a technology firm that offers services to property managers, arguing that many of the largest landlords in the district used the software to collude and artificially inflate rental prices for residential units.
These industry-specific bills are only a sampling; we expect to see many more bills of this type targeting any industry where lawmakers think AI use could affect consumers. While a majority of these bills are unlikely to become law this year, they serve as a starting point for future legislation and indicate how lawmakers are thinking about AI regulation at this early stage, both in these industries and potentially in others.
Recent Developments
In the News
FDA Approves AI Test: On Wednesday, the U.S. Food and Drug Administration approved an AI tool to predict the risk of sepsis, a condition that contributes to at least 350,000 American deaths each year. It is the first algorithmic, AI-driven diagnostic tool for sepsis to receive the FDA’s go-ahead.
Major Policy Action
West Virginia: Last week, Gov. Justice (R) signed into law an AI study bill (WV HB 5690) creating a West Virginia Task Force on Artificial Intelligence within the governor's office. The law takes effect on June 2, 2024.
Washington: Last Friday, a state judge overseeing a triple murder case barred the use of an AI-enhanced video as evidence in a ruling that experts said may be the first of its kind in a U.S. criminal court. The judge described the technology as novel and said it relies on "opaque methods to represent what the AI model 'thinks' should be shown."
Texas: On Tuesday, House Speaker Dade Phelan (R) announced the creation of the House Select Committee on Artificial Intelligence & Emerging Technologies. The committee will submit an initial report no later than May 16, 2024, with a scope that includes studying the uses of AI in the public and private sectors, examining the impact on certain industry sectors, considering policies for the responsible deployment of AI, and formulating legislative recommendations.
Missouri: The House passed a political deepfake bill (MO HB 2628), sending it to the Senate for its consideration. The bill would prohibit any person or entity from, within 90 days of an election, distributing a synthetic media message of any candidate or party for elective office who will appear on a state or local ballot without a disclaimer.
Hawaii: On Thursday, the House passed a deepfake bill (HI SB 2687). The Senate had previously passed the bill as well, but the House made amendments that the Senate will need to approve before sending the bill to the governor for his signature. The bill would prohibit a person from distributing materially deceptive media within a certain time frame unless the media contains a disclaimer.
Mississippi: On Thursday, the House passed a political deepfake bill (MS SB 2577). The Senate had previously passed the bill as well, but the House made amendments that the Senate will need to approve before sending the bill to the governor for his signature. The bill would criminalize the wrongful dissemination of deepfakes if it takes place within 90 days of an election, without consent, and with the intent of affecting an election.
Notable Proposals
Alaska: On Tuesday, the Senate State Affairs Committee introduced a bill (AK SB 262) that would create an AI task force in the Department of Commerce, Community, and Economic Development. The task force would develop annual reports through 2026, with recommendations for responsible growth of technology markets, the use of AI in state government, and AI regulation.
Louisiana: On Tuesday, Rep. Kellee Dickerson (R) proposed a bill (LA HB 916) that would prohibit health care decisions made solely on the basis of AI and would require a health care professional to review any decision made with the use of AI, with the ability to override it. Some hospitals in the state are already using AI to answer patient questions to save time.