State Lawmakers Propose Regulating Chatbots

Key highlights this week:

  • After state lawmakers introduced nearly 700 AI-related bills last year, we’re already tracking 404 bills in the 2025 legislative session. 

  • Chatbots are coming under increased scrutiny, leading lawmakers in several states to propose disclosure requirements. 

  • The Trump administration announced a $500 billion joint venture in AI, while revoking the Biden administration’s Executive Order on AI regulation.

  • Nebraska is the latest state to propose a comprehensive AI regulation bill, with provisions modeled after Colorado’s law. 

Chatbots have become pervasive in customer service, health care, banking, education, marketing, and sometimes, purely for entertainment purposes. But following a high-profile lawsuit over a teen who died by suicide after conversations with a chatbot, the technology has come under further scrutiny. State lawmakers are already introducing bills to regulate their use, which could affect a wide range of businesses.

Chatbots have become a useful way to filter and direct consumer queries, freeing up personnel to take on more complicated tasks. But some have given false or misleading information or even directed people to perform illegal acts, as happened with a chatbot employed by the government of New York City. That has prompted state lawmakers to propose guardrails on the technology to ensure consumers are fully informed. 

California was the first state to regulate chatbots, passing a measure (CA SB 1001) back in 2019. The state makes it unlawful for anyone to use a bot to interact with another person to mislead them into thinking they are interacting with a real person, in an effort to encourage a sale or a vote in an election. To avoid liability, companies must clearly and conspicuously disclose to the other person that the interaction is with a bot. The law applies if the consumer is located in California, even if the bot is not. The law also exempts “service providers of online platforms, including, but not limited to, Web hosting and Internet service providers.”

The next state chatbot regulation law wasn’t until last year, when Utah lawmakers passed their Artificial Intelligence Policy Act (UT SB 149). The law requires two different kinds of disclosures for consumer interactions with generative AI. Individuals who are in a profession regulated by the Utah Department of Commerce are required to proactively disclose when the interaction is with generative AI when they are providing regulated services. These occupations include architects, certified public accounting firms, and a variety of medical professionals such as psychologists, nurses, and substance abuse counselors. They can only provide those regulated services using AI if the regulations of their profession allow. For other generative AI interactions involving activities regulated by the Division of Consumer Protection, disclosure is only required if the consumer asks. These could include activities like consumer sales, professional fundraising, ride-sharing services, or lawyer referrals. 

Colorado also passed an AI bill last year (CO SB 205) that goes into effect in 2026. Although lawmakers expect to tweak the measure before it is implemented, as currently written, it requires disclosure to a consumer that the interaction is with an artificial intelligence system. 

This year, more state lawmakers are taking up legislation to regulate chatbots. Bills have been filed in Hawaii (HI HB 639 and SB 640), Massachusetts (MA SD 2293, SD 2592), New York (NY AB 768), and Virginia (VA SB 1161) that would require disclosures by chatbots. Bills in Massachusetts (MA HD 1222) and Pennsylvania (PA HB 95) would require disclosure to consumers for any content generated by generative AI, including text. Another bill in Massachusetts (MA HB 3750) would require disclosure from healthcare providers that use generative AI to interact with patients. 

New York Assemblyman Clyde Vanel (D) has introduced a measure (NY AB 222) to impose liability when a chatbot provides misleading, incorrect, contradictory, or harmful information that results in financial loss or other demonstrable harm to a user. He told Pluribus News he was motivated to take action after high-profile cases where minors were allegedly directed towards self-harm and sexualized behavior by chatbots. 

Additionally, chatbots are facing increasing scrutiny from regulators. Texas Attorney General Ken Paxton (R) has launched a probe into Character.ai regarding its privacy and safety practices for minors. Last fall, New York Attorney General Letitia James (D) issued a consumer alert warning voters not to rely on chatbots for voting information, which can be inaccurate. Earlier in the year, she sent a letter to Meta to inquire about its AI chatbot fabricating allegations of sexual harassment against state lawmakers. Attorneys General in California, Massachusetts, and Oregon have issued guidance warning that misrepresentations by chatbots can run afoul of state consumer protection laws. 

Even businesses that don’t consider themselves artificial intelligence companies may use chatbots that could be subject to regulation. Chatbots can greatly improve efficiency for businesses and governments, but like any technology, they come with risks. Disclosure laws are low-hanging fruit that offer lawmakers an opportunity to set some guardrails on AI technology without imposing onerous registration or reporting requirements. 

Recent Developments

In the News

  • Stargate: On Tuesday, the White House announced a $500 billion partnership between OpenAI, SoftBank, and Oracle to invest in artificial intelligence infrastructure. The joint venture, named Stargate, is designed to build data centers and create more than 100,000 jobs.

Major Policy Action 

  • Federal: On his first full day in office, President Trump revoked a 2023 Executive Order from President Biden that required large AI developers to share the results of safety tests with the government before releasing their models. On Thursday, Trump issued an Executive Order to “develop AI systems that are free from ideological bias or engineered social agenda.”

  • Connecticut: Gov. Ned Lamont (D) plans to propose tax cuts for AI start-ups in his State of the State address next month. He threatened to veto an AI regulation bill last year over concerns it could make the state hostile to the industry. 

  • New Jersey: Earlier this month, Attorney General Matthew J. Platkin (D) announced a new Civil Rights and Technology Initiative to address the risks of discrimination and bias-based harassment from the use of artificial intelligence. His office also released a guidance document to clarify that current state law “prohibits all forms of discrimination, irrespective of whether discriminatory conduct is facilitated by automated decision-making tools or driven by purely human practices.”

  • New York: In her State of the State Address earlier this month, Gov. Kathy Hochul (D) announced a proposal to require employers who engage in mass layoffs to disclose whether AI automation played a role. The proposal would give the state data to evaluate the impact of AI on the job market. 

Notable Proposals 

  • Massachusetts: Rep. David Rogers (D) has filed a measure (MA HD 4192) that would prohibit a search engine from automatically returning results using artificial intelligence unless the user provides affirmative consent. The proposal would also direct a study on the environmental impacts of artificial intelligence. 

  • Nebraska: Senator Eliot Bostar (NP) has introduced the Artificial Intelligence Consumer Protection Act (NE LB 642), which imposes risk-mitigation obligations on developers and deployers, modeled after provisions passed in Colorado last year. The measure also requires disclosure to a consumer upon an interaction with an artificial intelligence system and is only enforceable by the state Attorney General.

  • New York: Budget bills proposed this week (NY AB 3008/SB 3008) include a provision to require disclosure of algorithmically set prices for goods and services. They also include a measure to require a companion chatbot to contain protocols for addressing self-harm expressed by a user.

  • Washington: A group of Senators introduced a bill this week (WA SB 5422) that allows public employees to bargain over the decision to adopt or modify AI technology if the adoption or modification affects employees' wages, hours, or terms and conditions of employment. The bill heads to the Senate Labor & Commerce Committee. 
