California's Proposed Rules on Automated Decision-Making Technology

Key highlights this week:

  • We’re currently tracking 692 AI-related bills across 45 states this year, 115 of which have been enacted into law, and another 46 bills have already been prefiled for 2025.

  • While legislation is the most popular route for addressing AI policy, state agencies are also regulating the technology, including through a proposal from California that is the focus of this week’s deep dive.

  • X sues to block a California deepfake law, and a Georgia study committee releases a nearly 200-page report on AI policy recommendations.

  • Novel AI bills are already being prefiled for the 2025 legislative sessions in California, Missouri, and Texas.

California has been out front among the states in regulating the tech industry. The Golden State was the first to enact a comprehensive data privacy law and to establish an agency dedicated to protecting consumer data. That agency, the California Privacy Protection Agency (CPPA), has been slow to promulgate regulations, but last month it initiated formal rulemaking to update existing privacy rules and create new regulations on “automated decision-making technology” (ADMT), aimed specifically at the emergence of artificial intelligence. The rules could have a broad scope, imposing wide-ranging obligations on businesses that use the emerging technology to facilitate decisions.

At its November 8, 2024, meeting, the CPPA board voted 4-1 to initiate formal rulemaking on a package of draft regulations that would (a) update existing privacy regulations; (b) establish requirements for cybersecurity audits; (c) establish requirements for risk assessments; and (d) give consumers rights over businesses’ use of ADMT through a new section of rules, which we cover here.

The regulations create a new section to regulate ADMT, defined as “any technology that processes personal information and uses computation to execute a decision, replace human decisionmaking, or substantially facilitate human decisionmaking” (emphasis added). This definition is even broader than the one in comparable European Union rules, which apply only to decisions made using automated processing without human intervention. Certain specified uses are exempted from the CPPA’s proposed definition, including anti-virus software, robocall filtering, spreadsheets, and spellcheck.

The rules apply to ADMT when used by businesses:

  • To make a “significant decision” regarding a consumer, using information that is not already exempted by California’s privacy law;

  • For extensive profiling of a consumer in work and educational contexts, in public, or for behavioral advertising;

  • For training uses that involve processing personal information to train models capable of being used (a) for a significant decision, (b) as facial recognition technology, (c) for physical or biological identification or profiling (e.g., analyzing faces to infer an emotional state), or (d) for generating a deepfake.

A “significant decision” would be a decision that results in access to, or the provision or denial of, financial or lending services, housing, insurance, education enrollment or opportunity, criminal justice (e.g., bail bonds), employment or independent contracting opportunities or compensation, healthcare services, or essential goods or services (including groceries, medicine, hygiene products, or fuel).

Businesses that use physical or biological identification or profiling for a significant decision concerning a consumer, or for extensive profiling of a consumer, would be required to conduct an evaluation to ensure that the system works as intended and does not discriminate on the basis of protected classes.

Businesses would be required to give consumers notice before using ADMT, with certain disclosures about the purpose of the technology and how it would be used in the decision. Businesses would also have to disclose how the consumer can access the ADMT, although an exception is made if the technology is used for training purposes.

Consumers would have a right to opt out of ADMT uses. The rules provide exceptions if the business offers the ability to appeal a significant decision to a qualified human reviewer with the authority to overturn it, or if the ADMT is used only for security, fraud prevention, and safety. There are also exceptions if the business has evaluated the ADMT to make sure it works as intended, with safeguards against discrimination, and uses it:

  • For assessing a person’s performance to make admission, acceptance, and hiring decisions at work or in an educational program;

  • For allocation/assignment of work and compensation decisions; or

  • For work or educational profiling.

These opt-out exceptions do not apply to ADMT used for behavioral advertising or for training purposes. Businesses must offer consumers at least two designated methods to opt out, with the agency laying out some examples of acceptable processes.

A business that has used ADMT to make a significant adverse decision against a consumer must provide the consumer with notice of: 

  • The fact that ADMT was used in the decision;

  • The fact that the business is prohibited from retaliating against the consumer for exercising rights under the privacy law;

  • The consumer’s right to access the ADMT and how to do so;

  • How to appeal the decision to a human reviewer, if the business provides that option.

The agency estimates direct costs to businesses of $835 million in the first year from the ADMT provisions alone, and $3.5 billion for the entire package of rules, figures the California Chamber of Commerce argues underestimate the true cost. The rules are the product of a multi-year process during which the agency has drawn criticism, and even litigation, over its delays in rulemaking. Notice of the rulemaking was published on November 22, 2024, and the agency will solicit comments until the public hearing on January 14, 2025.

While these regulations are sure to elicit significant pushback, California continues to lead the charge in balancing technological innovation with robust consumer protections, setting a precedent that will influence AI governance in the absence of federal action. 


Recent Developments

In the News

  • X sues to block California deepfake law: Last month, X (formerly Twitter) filed a lawsuit to block California's AB 2655, a law aimed at curbing AI-generated deceptive election content on social media, claiming the law impinges on free speech.

Major Policy Action 

  • Georgia: On Tuesday, the Senate Study Committee on Artificial Intelligence released a 185-page report with 22 policy recommendations on the emerging technology. The committee recommends adopting a statewide data privacy law, updating deepfake laws to cover political ads, and requiring transparency and labeling for deepfake content. The committee also encourages businesses to disclose how AI is used in their products and services, disclosures to consumers when they are interacting with AI, a voluntary certification program, and keeping humans in the loop for AI systems, particularly for sensitive decisions.

  • Utah: The Division of Professional Licensing announced its first regulatory mitigation agreement under the state's new artificial intelligence law, signing a deal with ElizaChat, an app that helps teenagers improve their mental health at school. Under the agreement, the company must implement an internal safety protocol for escalating serious cases for review, with a 30-day right to correct instances where the app veers into mental health therapy, a licensed practice.


Notable Proposals 

  • California: The Legislature reconvened this week, with Sen. Angelique Ashby (D) introducing a measure (CA SB 11) that would require a consumer warning that misuse of deepfake-producing technology may result in civil or criminal liability for the user. The bill also clarifies that using deepfake content with the intent to impersonate another would be deemed a false personation, clarifies a law passed earlier this year protecting voice and likeness from synthetic depiction, and requires a review of the impact of synthetic content on court proceedings.

  • Missouri: Sen. Joe Nicola (R) has prefiled a bill (MO SB 85) that would prohibit county assessors from using AI to determine the true value in money of any real or personal property. Property tax assessments have been a point of contention since Jackson County levied significant hikes in 2023, prompting the state to sue to roll back the increases.

  • Texas: Lawmakers have begun prefiling legislation for 2025, including a package of bills aimed at AI. One bill (TX HB 421) would require a deepfake generator to verify the age of the user before creating explicit deepfake material and would create a private cause of action to enforce the measure. Another bill (TX HB 1121) would create liability for a software or app developer that fails to take reasonable precautions against the creation, adaptation, or modification of nonconsensual sexual deepfake content.
