Understanding California’s Proposed AI Rules: Notifications and Opt-Outs
On November 27, 2023, the California Privacy Protection Agency (CPPA) released a draft text of regulations governing businesses' use of "automated decision making technology." The draft regulations, if adopted, would establish a framework for how businesses may deploy automated decision making technology that uses personal information to make a decision or that replaces human decision making. A key issue the CPPA draft regulations address is when a consumer can and cannot opt out of a business's use of this technology. The CPPA emphasizes that this regulatory text is only in draft form and that the agency has not yet begun the formal rulemaking process on this issue. However, the draft will serve as a discussion point at the CPPA's next board meeting, scheduled for December 8, 2023.
As we've noted, it will be key to properly define what policymakers are seeking to regulate when it comes to "AI." Here, the CPPA defines "automated decision making technology" (which the agency abbreviates as ADMT) as "any system, software, or process — including one derived from machine-learning or artificial intelligence — that processes personal information and uses computation to make or execute a decision or facilitate human decision making."
A primary focus of these draft regulations is the ability of consumers to opt out of having their information collected, evaluated, and used by an ADMT. The regulations would establish that consumers have a right to opt out of a business's use of an ADMT if its use:
produces a legal or similarly significant effect (e.g., used to evaluate a candidate for hire or determine whether or not to give a loan to an applicant);
profiles individuals who are acting in their capacity as employees, independent contractors, job applicants, or students; or
profiles a consumer when they are in a publicly accessible place, such as a shopping mall, restaurant, or park.
The draft regulations also establish situations in which consumers will not have a right to opt out. As currently written, the regulations state that consumers will be unable to opt out when ADMT is used to:
prevent, detect, or investigate security incidents;
prevent or resist malicious, deceptive, fraudulent, or illegal actions directed at a business;
protect the life and safety of consumers; or
provide the good or service the consumer requested, where use of ADMT is necessary to do so.
While the regulations focus primarily on establishing opt-out provisions for consumers, other provisions are worth noting. Importantly, consumers must be given notice that ADMT is being used, and that notice must include a specific, plain-language explanation of how the business is using the technology. In addition to the notice requirement, the draft regulations establish a process for consumers to request information from a business about how it is using ADMT. Finally, the draft regulations contain age-specific provisions for information collected from minors, such as requiring opt-in consent from parents for behavioral advertising directed at individuals under the age of 13.
This regulatory approach contrasts with legislation introduced this year in California (CA AB 331), which would mandate annual impact assessments and require notification of any person subject to a "consequential" decision by an AI tool. The legislation, which Assemblymember Rebecca Bauer-Kahan (D) intends to reintroduce next year, goes a step further than the proposed regulations by allowing private residents to sue developers whose AI tools contribute to "algorithmic discrimination." Much like recent privacy laws, these proposed AI bills and regulations focus on providing consumers with notice and opt-out rights, although the draft regulations are much narrower as to which uses consumers can opt out of.
This draft regulation is a starting point for where the CPPA looks to take AI regulation in California. Many aspects will require further clarification, and additional questions will need to be addressed; that process is likely to begin at the CPPA board meeting next week.
Recent Policy Developments
New York: Gov. Hochul vetoed a bill (NY AB 4969) that would have established a temporary state commission to study and investigate how to regulate artificial intelligence, robotics, and automation. The decision was part of a sweeping set of vetoes the governor issued on proposed study commissions.
Wyoming: Last Tuesday, the Select Committee on Blockchain, Financial Technology and Digital Innovation Technology advanced a draft bill for next year's legislative session that would provide for the civil and criminal prosecution of those who maliciously produce "synthetic media." Lawmakers acknowledge flaws in the bill, saying they are trying to "balance rights of speech and at the same time protect individuals from abuses through synthetic media or misleading information."
California: Last Tuesday, Gov. Newsom's administration released a report outlining the state's opportunities to use "Generative Artificial Intelligence (GenAI)" while highlighting potential harms. This is the first of several expected reports and deliverables required by the executive order the governor issued two months ago. It comes in addition to the draft regulations on automated decision making technology, discussed above, that the CPPA released on Monday.
Oregon: On Wednesday, Gov. Kotek issued an executive order establishing an advisory council to guide the role of AI in state government. The council is tasked with providing a recommended action plan framework no later than six months from the date of its first convening and a final recommended action plan no later than 12 months from its first convening.
US Chamber: On Wednesday, more than 60 state and local chambers published an open letter, organized by the US Chamber, calling on state policymakers “to study whether legal gaps exist before pushing for new regulatory frameworks.” Unsurprisingly, the letter asked states to avoid a “patchwork” approach to AI regulation.
New Jersey: Legislation to ban deepfake pornography (NJ SB 3707) has picked up steam after a scandal at a local high school where sexual deepfake images were made of students. Senator Jon Bramnick (R) has been added as a sponsor to a bill that would add deepfake images to current revenge porn statutes.