California
AI Policy Overview
California has taken a leading role in the regulation of artificial intelligence (AI). State lawmakers have enacted legislation limiting electoral and sexual deepfakes, requiring disclosure of chatbot use, mandating bias evaluations for state criminal justice agencies using AI tools, and limiting facial recognition use by police. Additionally, state regulators have drafted, but have not yet formally proposed, a comprehensive regulation on automated decision-making.
Governor Newsom (D) signed an executive order on September 6, 2023, directing state agencies to study the potential uses and risks of generative AI and to engage with legislative partners and key stakeholders in a formal process to develop policy recommendations for the responsible use of AI. In March 2024, the state released formal guidelines, pursuant to the 2023 executive order, for state agencies to follow when buying generative AI tools for government use.
In 2023, Governor Newsom signed a bill (CA AB 302) into law mandating a thorough inventory of all “high-risk” AI systems “that have been proposed for use, development, or procurement by, or are being used, developed, or procured” by the state. The first annual report is due on January 1, 2025.
The Generative AI Accountability Act of 2024 (CA SB 896) requires a state report examining significant, potentially beneficial uses of generative AI tools by the state, as well as a joint risk analysis of potential threats that generative AI poses to California’s critical energy infrastructure. State agencies using generative AI must disclose that fact and provide clear instructions on how to contact a human employee.
In 2024, California enacted a law (CA AB 2885) establishing a uniform definition for “artificial intelligence” for existing provisions in state law relating to studies of AI and inventories of AI use in state government. The law defines AI to mean an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.
Transparency
In 2018, California enacted a law (CA SB 1001) that requires disclosure when a “bot” is used to communicate or interact with another person with the intent to mislead that person about its artificial identity, in order to knowingly deceive the person about the content of the communication to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election. The disclosure must be “clear” and “conspicuous.” Under this law, a “bot” is defined as “an automated online account where all or substantially all of the actions or posts of that account are not the result of a person.” The law excludes service providers of online platforms, including web hosts and ISPs.
In 2024, California enacted a law (CA AB 2013), set to go into effect on Jan. 1, 2026, that requires generative AI developers to disclose on their website documentation that provides, among other requirements, a summary of the datasets used in the development and training of the AI technology or service. Notably, the law applies to both original developers and those making “substantial modifications” to a generative AI technology or service.
In 2024, lawmakers enacted the California AI Transparency Act (CA SB 942), which requires a provider of a generative AI system with over 1 million monthly visitors to make an AI detection tool available at no cost to the user and to offer the user an option to include a disclosure that identifies content as AI-generated and is clear, conspicuous, appropriate for the medium of the content, and understandable to a reasonable person.
In 2024, California enacted a law (CA AB 2905) requiring a call from an automatic dialing-announcing device to inform the person called if the prerecorded message uses an artificial voice that is generated or significantly altered using AI.
Bias Prevention
In 2019, California enacted a law (CA SB 36) requiring state criminal justice agencies to evaluate potential biases in AI-powered pretrial tools. Specifically, the law requires each pretrial services agency that uses a pretrial risk assessment tool to validate the tool by January 1, 2021, and on a regular basis thereafter, but no less frequently than once every 3 years, and to make specified information regarding the tool, including validation studies, publicly available.
Deepfakes
Political Deepfakes
California was one of the first states to address deepfake use in electoral campaigns. In 2019, California enacted a law (CA AB 730) that prohibits producing, distributing, publishing, or broadcasting, with actual malice, campaign material that contains (1) a picture or photograph of a person or persons into which the image of a candidate for public office is superimposed or (2) a picture or photograph of a candidate for public office into which the image of another person is superimposed, unless the campaign material contains a specified disclosure. The law includes specific exceptions. The original law was set to sunset in 2023; however, a bill enacted in 2022 (CA AB 972) extended the sunset provision until 2027.
In 2024, California enacted three additional laws addressing political deepfakes. The Defending Democracy from Deepfake Deception Act (CA AB 2655) requires a large online platform to block the posting of materially deceptive content related to an election within 120 days before an election and up to 60 days after, and requires the platform to label certain additional content as inauthentic, fake, or false during specified periods before and after an election.
Another 2024 law, focused on transparency (CA AB 2355), requires a committee that creates, originally publishes, or originally distributes a qualified political advertisement to include in the advertisement a specified disclosure that the ad was generated or substantially altered using AI.
Finally, a third 2024 law (CA AB 2839) prohibits an entity from knowingly distributing, with malice, an advertisement or other election communication containing materially deceptive deepfake content within 120 days before an election and 60 days after, if the content is reasonably likely to harm the reputation or electoral prospects of a candidate. When accompanied by a disclaimer, materially deceptive content may be used for parody or satire, and a candidate may use a deepfake of themselves. On Oct. 2, 2024, a federal judge granted a preliminary injunction blocking enforcement of AB 2839. The judge found that the law likely violates the First Amendment, writing, “Most of AB 2839 acts as a hammer instead of a scalpel, serving as a blunt tool that hinders humorous expression and unconstitutionally stifles the free and unfettered exchange of ideas which is so vital to American democratic debate.”
Sexual Deepfakes
In 2019, California enacted a sexual deepfake law (CA AB 602) that provides a private right of action (but not a criminal violation) for the depicted individual, which is defined as an "individual who appears, as a result of digitization, to be giving a performance they did not actually perform or to be performing in an altered depiction."
In 2024, California enacted additional laws addressing sexual deepfakes. The pair of laws (CA SB 926 & CA SB 896) make it a crime to intentionally distribute, or cause to be distributed, certain sexual images, including digital and computer-generated images, without consent, in a manner that would cause a reasonable person to believe the image is an authentic image of the person depicted. The crime applies under circumstances in which the distributor knows or should know that distribution will cause serious emotional distress, and the person depicted suffers that distress.
Another 2024 law, aimed at social media (CA SB 981), requires a social media platform to provide a mechanism that is reasonably accessible to users for reporting digital identity theft to the platform. The law requires immediate removal of a reported instance of sexually explicit digital identity theft from public view on the platform if there is a reasonable basis to believe it is sexually explicit digital identity theft.
In 2024, California enacted a pair of laws (CA AB 1831 & CA SB 1381) that amended child pornography laws to include matter that is digitally altered or generated by the use of AI.
Digital Replicas
In 2024, California enacted two laws to protect performers from deepfake digital replicas. The first law (CA AB 1836) creates liability for a person who produces, distributes, or makes available a digital replica of a deceased personality’s voice or likeness in an expressive audiovisual work or sound recording without specified prior consent. The second law (CA AB 2602) renders a provision of a contract between an individual and any other person for the performance of personal or professional services unenforceable as it relates to a new performance by a digital replica of the individual, if the provision meets specified conditions relating to the use of a digital replica of the individual’s voice or likeness in lieu of the individual’s work.
Health Care
In 2024, California enacted two laws related to AI use in health care settings. The first law (CA AB 3030) requires a health facility, clinic, physician’s office, or office of a group practice that uses generative AI to generate written or verbal patient communications to ensure that those communications include both a disclaimer that indicates to the patient that the communication was generated by AI and clear instructions permitting a patient to communicate with a human health care provider.
The second law (CA SB 1120) requires a health or disability insurer that uses AI, an algorithm, or other software tools for utilization review or utilization management decisions to comply with requirements pertaining to the approval, modification, or denial of services, inclusive of federal rules and guidance regarding the use of such tools. The tools must be applied fairly and equitably across the patient population.
Education
In 2024, California enacted two laws related to AI use in education. The first law (CA AB 2876) requires the Instructional Quality Commission to consider incorporating AI literacy content into the mathematics, science, and history-social science curriculum frameworks after 2025 and to consider including AI literacy in its criteria for evaluating instructional materials when the state board next adopts mathematics, science, and history-social science instructional materials.
The second law (CA SB 1288) establishes a working group on AI in public schools to provide guidance for local educational agencies and charter schools on the safe use of AI in education and to develop a model policy regarding the safe and effective use of AI in ways that benefit, and do not negatively impact, pupils and educators.
Facial Recognition
In 2019, California enacted a law (CA AB 1215) that prohibits using facial recognition to analyze images captured by police body cameras.
Regulations
California Privacy Protection Agency (CPPA)
On November 27, 2023, the California Privacy Protection Agency (CPPA) released a draft text of regulations related to businesses' use of “automated decision making technology.” The CPPA emphasizes that this regulatory text is only a draft and that the agency has not yet started the formal rulemaking process on this issue. The draft rules would define “automated decision making technology” (which the agency abbreviates as ADMT) as “any system, software, or process — including one derived from machine-learning or artificial intelligence — that processes personal information and uses computation to make or execute a decision or facilitate human decision making.”
Under the draft rules, consumers must be given notice that ADMT is being used, and that notice must include a specific, plain-language explanation of how the business is using the technology. In addition to the notice requirement, the draft regulations establish a process for consumers to request information from a business about how it is using ADMT.
A primary focus of these draft regulations is the ability of consumers to opt out of having their information collected, evaluated, and used by an ADMT. The regulations would establish that consumers have a right to opt out of a business's use of an ADMT if its use:
produces a legal or similarly significant effect (e.g., used to evaluate a candidate for hire or determine whether or not to give a loan to an applicant);
profiles individuals who are acting in their capacity as employees, independent contractors, job applicants, or students; or
profiles a consumer when they are in a publicly accessible place, such as a shopping mall, restaurant, or park.
The draft regulations also establish situations in which consumers will not have a right to opt out. As currently written, the regulations state that consumers will be unable to opt out when ADMT is used to:
prevent, detect, or investigate security incidents;
prevent or resist malicious, deceptive, fraudulent, or illegal actions directed at a business;
protect the life and safety of consumers; or
provide the good or service requested, where use of ADMT is necessary to do so.
On Dec. 8, 2023, the CPPA held a public board meeting where board members criticized the draft regulatory text as so broad that it could cover essentially any technology. As a result, the CPPA Board directed staff to prepare revised drafts that take into account the feedback from board members.
On Mar. 8, 2024, the CPPA Board voted to take a step toward formal rulemaking on regulations for automated decision-making technology. The proposed update clarifies the contents of a risk assessment, amends considerations for impacts on consumer privacy, and addresses safeguards. A final vote on whether to proceed with formal rulemaking may not occur until summer 2024, and the rules may not be finalized until 2025.
California Civil Rights Department (CRD)
On May 17, 2024, the CRD’s Civil Rights Council issued a notice of proposed rulemaking with new proposed modifications to California’s employment discrimination regulations, following draft modifications first released in March 2022. The proposal, “Proposed Modifications to Employment Regulations Regarding Automated-Decision Systems,” seeks to restrict how employers can use AI to screen workers and job applicants. The proposed regulations would affirm that the state’s anti-discrimination laws and regulations apply to potential discrimination caused by the use of “automated systems” and make clear that the use of third-party service providers is covered. Stakeholders had until July 18, 2024, to submit written comments on the proposed CRD regulations.
Legislative & Regulatory History
2024 - California enacted CA SB 1288, which establishes a working group to provide guidance on AI in public schools.
2024 - California enacted CA AB 2876, which requires consideration of AI literacy in the K-12 curriculum.
2024 - California enacted CA SB 896, which requires state agencies using generative AI to disclose that fact, with clear instructions on how to contact a human employee.
2024 - California enacted CA SB 1120, which requires health insurers that utilize AI in decision making to follow additional requirements.
2024 - California enacted CA AB 3030, which requires communications to a patient by a health care provider that uses generative AI to disclose the AI use and give the patient the option to communicate directly with a human.
2024 - California enacted CA AB 2905, which requires a call from an automatic dialing-announcing device to inform the person called if the prerecorded message uses an artificial voice generated using AI.
2024 - California enacted CA SB 942, which requires generative AI providers to make AI detection tools available to users.
2024 - California enacted CA AB 1831 & CA SB 1381, which amend child pornography laws to include matter that is digitally altered or generated by the use of AI.
2024 - California enacted CA AB 2885, which establishes a uniform definition for “artificial intelligence” for existing provisions in state law relating to studies of AI and inventories of AI use in state government.
2024 - California enacted CA AB 2013, which requires the developer of a generative AI system or service to publish documentation regarding the data used to train the AI system.
2024 - California enacted CA SB 981, which requires a social media platform to provide a mechanism to report digital identity theft and requires immediate removal of a reported instance of sexually explicit digital identity theft.
2024 - California enacted CA SB 926, which criminalizes the intentional distribution of certain sexual images, including digital and computer-generated images, without consent.
2024 - California enacted CA AB 2839, which prohibits an entity from knowingly distributing an election communication with malice during specified periods before and after an election that contains materially deceptive deepfake content, unless used for parody or satire and accompanied by a disclaimer.
2024 - California enacted CA AB 2355, which requires a committee that distributes a qualified political advertisement to include a disclosure that the ad was generated or substantially altered using AI.
2024 - California enacted CA AB 2655, which requires large online platforms to block the posting of materially deceptive content related to an election and to label such content during specified periods before and after an election.
2024 - California enacted CA AB 2602, which renders certain contract provisions unenforceable as they relate to a new performance by a digital replica of an individual.
2024 - California enacted CA AB 1836, which limits digital replicas of a deceased personality’s voice or likeness without specified prior consent.
2024 - California issued formal guidelines for state agencies to follow when buying generative AI tools for government use.
2023 - Gov. Newsom issued Executive Order N-12-23 on Sep. 6, 2023, directing state agencies to study the potential uses and risks of generative AI as well as engaging with legislative partners and key stakeholders in a formal process to develop policy recommendations for responsible use of AI.
2023 - California enacted CA AB 302, which mandates a thorough inventory of all “high-risk” AI systems “that have been proposed for use, development, or procurement by, or are being used, developed, or procured” by the state.
2022 - California enacted CA AB 972, which extended the sunset provision of CA AB 730 until 2027, which requires disclosure of deepfake use in campaign material.
2019 - California enacted CA AB 1215, which prohibited the use of facial recognition to analyze images captured by police body cameras.
2019 - California enacted CA AB 730, which required disclosure of deepfake use in campaign material.
2019 - California enacted CA SB 36, which requires state criminal justice agencies to evaluate potential biases in AI-powered pretrial tools.
2018 - California enacted CA SB 1001, which requires disclosure when using a “bot” to communicate or interact with another person.