States Ban AI in Setting Rent

Key highlights this week:

  • After state lawmakers introduced nearly 700 AI-related bills last year, we’re already tracking 238 bills as states gavel in their 2025 legislative sessions. 

  • Two of last year’s most watched AI bills that failed to become law will return in some form this year as California’s Sen. Wiener (D) vowed to introduce another AI safety bill and Connecticut’s Sen. Maroney (D) has already introduced a placeholder for this year’s version of his SB 2 comprehensive consumer protection bill. 

  • Following the lead of Colorado and Connecticut, lawmakers in several more states, including Texas, Virginia, New York, and Hawaii, have introduced comprehensive anti-bias bills mandating impact assessments.

  • The Oregon AG issued official guidance to businesses using AI on how to avoid violating current consumer protection, privacy, and anti-discrimination laws.

  • Mississippi Gov. Reeves (R) signed an executive order to inventory AI uses in state government and develop policy recommendations for the responsible use of AI by agencies.

Happy New Year, everyone. And since I’ve already declared 2025 the year of AI policy, we expect to have much to discuss. We’re closely watching the introductions of anti-bias legislation inspired by last year’s proposed CT SB 2 and enacted CO SB 205, but they’ll be only a fraction of the bills debated in state capitol hearing rooms this year. Those bills attempt to address major consumer interactions with AI by covering broad fields like employment, financial and legal services, housing, health care, and insurance. But more narrowly focused bills that target these industries individually, or even a subset of known issues with AI in those sectors, are also popular.

One specific area of consumer concern around AI is housing. The White House put out a report last month titled “The Cost of Anticompetitive Pricing Algorithms in Rental Housing,” and we’ve watched lawmakers introduce a growing number of bills specific to housing and AI. These bills allege that AI tools used to price rent for tenants create the potential for price coordination when used by a high percentage of landlords in an area. To illustrate the types of legislation we’ve seen, let’s use the two states that are starting the second year of their legislative sessions this month: Virginia and New Jersey. 

Last week, lawmakers in Virginia introduced a bill (VA HB 2047) that would prohibit a landlord from using an “algorithmic pricing device” to “restrain” the rental housing market in ways that constitute an unfair method of competition. The legislation would also require landlords who use “algorithmic pricing devices” to disclose that use to potential tenants. Another recently introduced bill in Virginia (VA HB 1870) is aimed at the developer of such a tool and would outright prohibit the sale of such algorithmic pricing devices for the purpose of recommending the price of rent to landlords. 

In New Jersey, a straightforward bill (NJ SB 3657) introduced by lawmakers last September would make it unlawful to use an “algorithmic system” to influence the price and supply of residential rental units. Another bill (NJ AB 4872) would prohibit property owners from coordinating by analyzing nonpublic information through an algorithm that recommends rent prices or lease renewals. 

And since housing is often a local issue, cities are getting in on the action as well. Philadelphia and San Francisco enacted their own ordinances to ban algorithmic rental pricing software. The San Francisco measure defines an “algorithmic device” as a software program that uses algorithms to analyze nonpublic competitor rental data for the purposes of providing landlords recommendations on what rent to charge for a vacant unit.

Recent news will keep this issue front and center with lawmakers: the U.S. Justice Department and ten states announced this week a lawsuit accusing six landlords operating 1.3 million units across 43 states of coordinating rents, in part by using an algorithm to help set them.

This is one avenue policymakers will take to regulate AI: wait until an issue develops and then aggressively target it with legislation. And housing is only one example. 


Recent Developments

In the News

  • The Shift to “Thinking” Models: This time last year, the expectation was that the next generation of pre-trained AI models would be released by year-end and deliver a step change in performance. Instead, we got significant performance advances within smaller, cheaper models and a shift to “thinking” models that tweak current-generation pre-trained models to provide better reasoning and the ability to check their work. This has led to PhD-level responses in some areas and new benchmark highs for OpenAI’s “o” series reasoning models, “o1” and the soon-to-be-released “o3” (currently under safety testing). 

Major Policy Action 

  • New York: As the year wound down, Gov. Hochul signed two AI-related bills into law, one on government use of AI and one on digital replicas in the fashion industry. The first law (NY AB 9430/SB 7543) limits the use of AI tools by state agencies, prohibiting the use or procurement of automated decision-making systems for functions that relate to public assistance, impact civil liberties, safety, or welfare, or affect legally provided rights. For other uses of automated decision-making systems, government agencies will need to conduct reviews and publish impact assessments. The second law was part of a broader bill on the fashion industry (NY AB 5631/SB 9832), which included digital replicas under the definition of “modeling services” and requires written consent for the creation or use of a fashion model’s digital replica. 

  • Oregon: Late last month, Attorney General Ellen Rosenblum (D) issued guidance to businesses using AI to avoid violating current consumer protection, privacy, and anti-discrimination laws. The guidance warns that misrepresenting the functionality of an AI system or employing a chatbot that falsely represents that it is human could constitute violations under the Unlawful Trade Practices Act and that developers handling data to train models could be subject to the state’s Consumer Privacy Act.  

  • Mississippi: On Wednesday, Gov. Tate Reeves (R) signed an executive order to direct an inventory of AI uses in state government and develop policy recommendations for the responsible use of AI by agencies. The order also directs the Department of Information Technology Services to engage with stakeholders to develop recommendations on best practices, uses, and strategies.

Notable Proposals 

  • California: Sen. Scott Wiener (D) has vowed to return with another AI bill after Gov. Newsom vetoed his measure last year, and he has already filed a draft bill (CA SB 53), although there is no substantive text yet. He intends to “introduce a full proposal in the next month or two” but could take his lead from the governor’s AI working group, which is expected to release a report this summer.

  • Connecticut: Sen. James Maroney (D) has also returned with a comprehensive AI bill (CT SB 2), although the substantive text has yet to be unveiled. Sen. Maroney’s bill last year passed the Senate but stalled in the House due to a lack of support by Gov. Lamont, but he asserts “we’re in a different place from last year” and indicates his “goal is to stay as close as possible” to Colorado’s landmark enacted law.

  • New York: Assemblyman Alex Bores (D) introduced the New York Artificial Intelligence Consumer Protection Act (NY AB 768), a comprehensive regulatory measure, with a competing bill in the Senate (NY SB 1169) that would make the developer or deployer legally responsible for the quality and accuracy of all consequential decisions made. Bores has also promised to introduce a package of bills aimed at using the global provenance standard known as C2PA to identify deepfake content.

  • Virginia: Lawmakers have introduced a number of artificial intelligence bills in the first week of the session, including a comprehensive regulatory bill (VA HB 2094) imposing obligations on developers and deployers, a bill (VA HB 2250) creating the Artificial Intelligence Training Data Transparency Act requiring disclosures on datasets used to train AI models, a bill (VA HB 2121) requiring AI models to apply provenance data to identify synthetic content, a bill (VA HB 2021) that prohibits virtual assistants on devices from providing voice purchasing unless the user has consented, and a digital replica bill (VA HB 2462), among others.

  • Hawaii: Three lawmakers filed a bill (HI SB 59) this week that would prohibit algorithmic discrimination and require disclosures to individuals regarding consequential decisions made by AI. The measure would apply only to an entity that possesses or controls personal information of at least 25,000 residents, has $15 million in annualized gross receipts for the last three years, is a data broker, or is a service provider.

  • Washington: A bill (WA HB 1205) introduced this week would make it unlawful to knowingly distribute a forged digital likeness of another as a genuine visual representation or audio recording with the intent to defraud, harass, threaten, intimidate, or humiliate another, or for another unlawful purpose. Using deepfakes to defraud consumers was raised as a concern in a December report by the Attorney General’s Office AI Task Force.
