Summer School: Lawmakers Race to Catch Up with AI
Key highlights this week:
We’re currently tracking 694 bills in 45 states related to AI this year, 80 of which have been enacted into law.
California Gov. Newsom (D) signed seven of the 21 AI-related bills on his desk into law this week. The bills he signed target political deepfakes, sexual deepfakes, and unauthorized deepfakes of performers. The governor has until the end of the month to decide the fate of the remaining 14 bills lawmakers sent him this year.
Lawmakers in Nebraska held an AI committee meeting, and the Texas AG secured a settlement with an AI healthcare company.
Although state lawmakers introduced over 600 bills relating to artificial intelligence this year, fewer than 15 percent of them became law. While legislators may have extensive experience in other areas of public policy, artificial intelligence is such a new technology that many need time to educate themselves on its capabilities and risks. With most sessions complete for the year, many state lawmakers are using the summer and fall to study AI through special committees and task forces. In fact, 33 states have established groups specifically tasked with studying AI or have assigned AI to a standing committee. The issues they raise in public hearings could indicate how they plan to address the technology in the next legislative session.
Many of the early meetings for study committees involve learning just what AI is, how it is defined, and how it is being used, particularly by state government agencies. States are beginning to conduct inventories of AI use in government to understand risks and to develop best practices. The Alabama Generative Artificial Intelligence Task Force found that 26 of the 130 state agencies responding to its survey use AI in some capacity. Indiana lawmakers learned that the state Office of Technology unveiled a chatbot for state government agency websites this summer.
Many lawmakers are interested in how AI can be used to improve the delivery of government services and cut costs. In Texas, the Attorney General’s office testified about how AI was used to save time sifting through lengthy child support cases. Indiana lawmakers discussed how AI could be used for email summarization, calendar review, AI-generated voice technology, video monitoring, document review, and language translation. Despite the benefits, some lawmakers expressed concern about the risks of using AI to handle constituents' personal information, citing fears about privacy and cybersecurity vulnerabilities.
AI can make some jobs easier to do, but it may also reduce the manpower those jobs require, and lawmakers have expressed concern over AI's impact on the workforce. At a hearing in Kentucky, Senator Reginald Thomas (D) noted that AI is already being used for phone inquiries and autonomous motor vehicles, and wondered whether AI could someday replace lawyers, judges, and consulting services. But states are also eager to offer the education and training resources workers will need to be ready for the new jobs AI will create. Wisconsin Governor Tony Evers (D) created a Task Force on Workforce and Artificial Intelligence that released an action plan this summer that includes integrating AI into the K-12 education curriculum, offering credentialing and training to potential workers, and creating computing clusters to provide AI access to researchers and entrepreneurs.
When it comes to AI regulation, lawmakers often stress the desire to offer some guardrails to protect the public from critical harm while stopping short of overregulating the industry. Some looked to what other states had done for guidance, particularly the enacted law in Colorado (CO SB 205) and the failed bill in Connecticut (CT SB 2). South Dakota lawmakers explored using Connecticut’s bill as a way to define the term “artificial intelligence,” while Texas lawmakers asked industry witnesses whether Colorado’s law could serve as a template. Many state lawmakers came together on their own last year to form a multi-state, bipartisan study group, spearheaded by Connecticut Senator Maroney (D), which meets every other week over video calls. New Mexico lawmakers invited legislators from other states to share their experiences in pushing for AI regulation.
But Alabama Secretary of Information Technology Daniel Urquhart warned lawmakers that “Colorado passed a very tough law that just is very punitive and it makes it hard to proceed with anything,” urging them not to be too rigid with a technology that is still evolving. Even Colorado lawmakers concede that their law will require amendments before it goes into effect in February 2026, with discussions taking place via the state’s Artificial Intelligence Task Force.
Some lawmakers have expressed reluctance to be an outlier state, and are looking for some sort of consensus at the state level in the absence of federal action. “A lot of times, legislation tends to be adopted on the state level. There’s a blessing and a curse there,” said Georgia Sen. John Albers (R), who chairs the Senate Study Committee on Artificial Intelligence. “The good news is sometimes we can make things more Georgia-specific, where we like to find a unique balance. The bad news is sometimes you get 50 versions of something, which is not necessarily good for consumers or businesses.”
Other topics discussed by lawmakers include the role of AI in health care, how to mitigate the impact of the proliferation of deepfake content, protecting artists against the use of digital replicas, safeguarding consumer data from AI models, and protecting constituents against algorithmic bias. "It's so important that we have an opportunity to see how we can positively help people, and then also talk about some of the safeguards, right? People are concerned about data and data privacy and consumer protection,” said Wisconsin Sen. Julian Bradley (R), who chairs the Legislative Council Study Committee on the Regulation of Artificial Intelligence in Wisconsin.
Many of these committees will produce reports around the end of the year, and some may even produce legislation — Texas Rep. Giovanni Capriglione (R), who chairs the Select Committee on Artificial Intelligence & Emerging Technologies, is expected to unveil a comprehensive AI regulation bill this fall after releasing an interim report back in May.
Here are some upcoming report deadlines from study committees this year:
November 30 - Alabama Governor’s Task Force on Generative Artificial Intelligence report
December 1 - Georgia Senate Study Committee on Artificial Intelligence (report not required to be issued); Texas Artificial Intelligence Advisory Council report
December 15 - Arkansas AI Working Group report
December 31 - Delaware Artificial Intelligence Commission report; Washington Artificial Intelligence Task Force preliminary report
February 15, 2025 - Colorado Artificial Intelligence Impact Task Force report
Recent Developments
In the News
Cities Move to Regulate AI: The city of San Jose, California, is considering a proposed ordinance that would restrict the sale and use of “algorithmic devices” to set rents for properties. San Francisco passed a similar ordinance earlier this month, and in August the Department of Justice announced a price-fixing lawsuit against a company that offers algorithmic pricing technology.
OpenAI o1: Last week, to much fanfare, OpenAI announced a new line of AI models called OpenAI o1. The line comes in three flavors: o1-preview and o1-mini, which are both available now for paid users, and a full o1 model that will be released later. The o1 line (not to be confused with GPT-4o) is built on ChatGPT-4 but uses advanced chain-of-thought reasoning to enhance the model’s reasoning capabilities, which should improve its ability to tackle complex problems.
Major Policy Action
California: On Thursday, Gov. Newsom (D) signed two bills to combat sexually explicit deepfakes that would require widely used generative AI systems to include provenance disclosures in the content they generate. On Tuesday, Gov. Newsom signed three bills meant to combat misleading political deepfakes and two bills meant to protect actors and other performers from unauthorized use of the digital replication of their image or voice.
Nebraska: On Thursday, an interim committee met to consider AI legislation to combat political deepfakes, using other states as a guide. Some lawmakers expressed reservations about regulating deepfakes due to free speech concerns and argued that some of the harms from deepfakes could be mitigated through existing law.
Texas: On Wednesday, Attorney General Ken Paxton (R) reached a settlement with Dallas-based AI healthcare technology company Pieces Technologies after the company allegedly made a series of false and misleading statements about the accuracy and safety of its products. The original suit was brought under the Deceptive Trade Practices Act and alleged the company misled consumers by claiming an error rate, or “severe hallucination rate,” of “<1 per 100,000.”