The Role of Impact Assessments in Combating AI Biases

“Impact assessments” have been a key reporting requirement in many of the comprehensive AI bills lawmakers have introduced this year. Impact assessments are one of the safeguards being put in place to help combat disparate treatment and discriminatory outcomes from the use of AI tools in hiring, education, and other settings. Like mandatory disclosures, impact assessments are an early tool policymakers are using to regulate AI as the technology becomes an increasingly common part of our daily lives.

Requiring AI deployers and developers to conduct impact assessments (also referred to as bias audits) mandates periodic reporting on AI tools to ensure they are not inadvertently producing disparate or discriminatory effects. Potential bias in AI has been a long-standing concern. Researchers have repeatedly stressed that AI tools are designed by humans and trained on data that can unknowingly embed biases into these tools. One of the earliest and most prominent examples of bias occurred in facial recognition systems, which have been shown to misidentify darker-skinned people at higher rates. Inaccuracies in facial recognition can lead to real-world harm. In Texas, a man was arrested and subsequently sexually assaulted while in jail after a Houston store’s facial recognition system falsely identified him as an armed robber.

While the exact information required in an impact assessment may vary, some common elements we’ve seen in recent legislation include: 

  • Periodic assessments of an AI tool;

  • A statement detailing the intended benefits of an AI tool;

  • A description of the outputs produced by an AI tool and how those outputs are used to make decisions;

  • A description of data that is collected from natural persons;

  • Whether, and how, an AI tool is being used in a manner that differs from the developer's stated intent; and

  • A description of any safeguards that have been, or will be, implemented to address any reasonably foreseeable risks from using an AI tool.

A recent example of an impact assessment in practice comes from New York City, which put in place rules for AI tools used in hiring. The New York City rules require an independent auditor to conduct an annual bias audit to ensure that AI hiring tools do not result in disparate or discriminatory outcomes. The rules only require bias audits to look for disparate or discriminatory outcomes based on sex, race, and intersectional categories, leaving out other groups who are vulnerable to discriminatory outcomes, such as individuals with disabilities. New York City’s early implementation of an impact assessment has also drawn scrutiny, including criticism over low compliance rates among employers and the lack of a central location to post and review the results of bias audits, making it harder for job seekers to find any audits that have been performed.
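At the heart of an NYC-style bias audit is a simple disparate-impact calculation: compare each demographic group’s selection rate against the group with the highest rate. The Python sketch below illustrates that arithmetic with hypothetical group names and counts; the city’s rules define the actual categories and reporting format, which this sketch does not reproduce.

```python
# Minimal sketch of the disparate-impact math behind an NYC-style bias audit.
# All group names and counts below are hypothetical examples.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of a group's applicants who were selected (e.g., advanced to interview)."""
    return selected / applicants

def impact_ratios(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Impact ratio = a group's selection rate divided by the highest group's rate.

    Ratios well below 1.0 flag potentially disparate outcomes for that group.
    """
    rates = {g: selection_rate(sel, apps) for g, (sel, apps) in groups.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical audit data: (selected, total applicants) per demographic category.
data = {"group_a": (48, 120), "group_b": (30, 110), "group_c": (12, 95)}

for group, ratio in impact_ratios(data).items():
    print(f"{group}: impact ratio {ratio:.2f}")
```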

State lawmakers hope to learn some lessons from New York City’s experience and have included impact assessment requirements for certain AI tools in legislation. Some lawmakers have broadened the categories of people who may be harmed by the use of an AI tool to include not only race and gender but also religion, age, national origin, sexual orientation, gender identity, marital status, pregnancy, families with children, limited English proficiency, veteran status, disability, and genetic information.

For example, lawmakers in Rhode Island are debating a bill (RI HB 7251) that would regulate the use of AI in employment and addresses many of these broader categories. Another trigger for assessment requirements is the type of outcome an AI tool is used to help determine. Lawmakers in California (CA AB 331) and Oklahoma (OK SB 3835) have introduced legislation that requires developers to perform assessments of AI tools whose use results in decisions or judgments with “material” or “legal” effects.

States are not only considering impact assessment requirements for AI tools used in private sector settings. Virginia lawmakers have introduced a bill (VA SB 487) that would require an impact assessment for any AI tool that is used by a public body. A pending New York bill (NY SB 7543) would require state agencies to conduct impact assessments of AI tools every two years. The impact assessment in the New York bill would also require a summary of the underlying algorithms and training data used to develop the AI tools. Additionally, a bill in Maryland (MD SB 979) would require an impact assessment of AI tools used in schools to ensure the AI does not result in discrimination, disparate impacts, or have a negative impact on the health and safety of students and staff. 

It is unclear how well impact assessments will prevent biased outcomes from AI tools. However, states are taking steps to ensure that AI tools used in the public and private sectors produce fair outcomes, and impact assessments are already a preferred tool for policymakers at this stage in the AI regulatory debate.  

Recent Policy Developments

  • Federal: The Federal Communications Commission (FCC) unanimously voted to recognize that calls made with AI-generated voices are “artificial” under the Telephone Consumer Protection Act (TCPA), giving state attorneys general new tools to go after bad actors responsible for these calls. The move came shortly after a Texas man placed deepfake robocalls impersonating President Joe Biden to thousands of New Hampshire voters, urging them not to participate in the state’s primary last month.

  • Alabama: On Monday, Gov. Ivey (R) announced during her State of the State address her intention to sign an executive order establishing a Task Force on Generative AI. The task force is required to submit a report to the governor by November 30, 2024, providing a detailed and accurate description of the current use of GenAI in executive-branch agencies, whether those uses pose any risks, and policy and administrative recommendations related to the responsible deployment of GenAI in state government. Track which states have established similar study committees and task forces here.

  • California: There was a handful of AV industry news this week: AV developers Motional and GM Cruise face funding cuts, while Google’s Waymo is seeking regulatory approval in California to operate its robotaxi fleet in Los Angeles (expanding beyond San Francisco and Phoenix). We examine the current state of play for AV regulations here.

  • Pennsylvania: House Republican Leader Bryan Cutler announced the creation of an Artificial Intelligence Opportunity Task Force, composed of five fellow House Republicans. The task force aims to use AI to achieve a "thriving economy, affordable living, safe communities, and family-centered opportunities" for children.

  • South Dakota: Lawmakers passed a bill (SD SB 79) that adds computer-generated content to child pornography laws, a measure backed by the attorney general. This is the first sexual deepfake bill to pass this session after nine states passed sexual deepfake laws in the last few years.
