What States Have Learned from NYC’s AI Hiring Law

Key highlights this week:

  • We’re currently tracking 649 bills in 45 states (plus another 116 congressional bills) related to AI this year, 54 of which have been enacted into law.

  • Lawmakers in California continue to amend and advance the most-watched AI bill in the country, SB 1047.

  • The governor and lawmakers in Colorado are already discussing amendments to their recently enacted landmark AI law. 

Helping businesses sort through the thousands of applications they receive for job openings has been an early use case for artificial intelligence. However, the widespread use of such tools has attracted the scrutiny of policymakers, who seek to protect the privacy of job applicants and combat the unintentional biases these tools can introduce. Proposed AI hiring laws rely on the same policy levers we’ve found in other use-level regulations of AI: disclosures and impact assessments. But as policymakers in New York City learned, getting the scope right can be a challenge.

The use of AI tools for hiring decisions is one of the first aspects of AI that policymakers looked to address. Back in 2019, Illinois lawmakers enacted the Artificial Intelligence Video Interview Act (IL HB 2557), which requires employers who use AI to analyze video of job interviews to provide notice, explain to the applicant how the AI tool works, and obtain the applicant's consent before the tool can be used to make an evaluation — creating an opt-in requirement. Maryland followed Illinois’ lead and enacted a similar bill (MD HB 1202) in 2020. The Maryland law also prohibits an employer from using certain facial recognition services during an applicant's interview for employment unless the applicant consents. Notably, neither of these laws contains an explicit cause of action to enforce violations.

These early laws were narrowly focused on AI tools that use video footage to evaluate job applicants’ facial expressions, body language, word choice, and tone of voice. But in 2021, New York City took aim at AI hiring tools beyond video evaluations when the City Council adopted Local Law 144. The law requires any employer or employment agency that uses an AI tool for hiring or promotions to have an independent auditor conduct an annual bias audit of the tool, publish a summary of the audit results on its website, and notify candidates that an AI tool is being used during the hiring or promotion process.

It took regulators another year and a half to finalize rules to implement Local Law 144, but the law finally went into effect on July 5, 2023. Since then, you probably haven’t heard much about it. That’s because the law limits its requirements to tools that “substantially assist or replace” human decision-making. Most companies that use AI tools in the hiring process can say that a human remains involved at some step, so the tool does not fully “replace” a human, and the phrase “substantially assist” leaves much to interpretation. After the first six months, the city’s Department of Consumer and Worker Protection, tasked with enforcing the law, said it hadn’t received a single complaint for violation of the law — even though an outside study indicated that few companies had published the required audit reports on their websites.

But despite the shortcomings of New York City’s AI hiring law, the use of AI tools in the hiring process is still a top interest for policymakers. And state lawmakers are taking the lessons learned from New York City’s experience to ensure that any bills they pass are not ignored. 

We’re currently watching 31 bills in 8 states that we’ve classified as AI employment bills. Most of these bills follow the template that New York City’s law provides: requiring disclosures and impact assessments (or bias audits) for businesses using AI tools in the hiring process. Lawmakers in Albany introduced legislation (NY SB 7623) that follows this path, requiring employers with 100 or more employees who use an “automated employment decision tool” for hiring decisions to conduct an impact assessment and mandating notice to job candidates. The bill goes a bit further by granting candidates and employees a right to access and correct their data and prohibiting retaliation against them.

Illinois is following through on its trailblazing status in the AI in employment policy space by advancing a broader AI hiring bill this session. Lawmakers in both legislative chambers passed legislation (IL HB 3773) that would prohibit employers that use predictive data analytics in their employment decisions from considering the applicant's biographical information, such as race or zip code, to reject an applicant in specified contexts. It will soon be sent to Gov. J.B. Pritzker, and if signed, would take effect January 1, 2026.  

A bill (PA HB 1729) pending in Pennsylvania copies a unique aspect of the Maryland and Illinois video interview laws by adding an opt-in requirement for employers that seek to use AI tools in the hiring process. After notifying an applicant of the use of an AI tool and explaining how the tool works, employers would also need to obtain the applicant’s consent before using it.

Despite the setbacks in New York City’s law, policymakers want to protect job candidates from any unintended harm that the growing use of AI tools in the hiring process might cause. And depending on how these AI tools are defined, this is an issue that could end up affecting a large percentage of businesses. 



Recent Developments

In the News

  • Another AI Company: After leaving OpenAI last month, Ilya Sutskever announced the formation of a new company called Safe Superintelligence, which he says will aim to produce “superintelligence” — AI that is smarter than humans — but in a safe way. The co-founders of the new company are AI veterans from Apple and OpenAI. Sutskever focused on AI safety at OpenAI and led the “superalignment” team, which has since been disbanded.

Major Policy Action 

  • National: A bipartisan group of 44 state and territory attorneys general sent a letter to U.S. House leadership endorsing the Child Exploitation and Artificial Intelligence Expert Commission Act of 2024. The proposed legislation would create a commission to “investigate and make recommendations on solutions to improve the ability of a law enforcement agency to prevent, detect, and prosecute child exploitation crimes committed using artificial intelligence.”

  • California: On Wednesday, the Assembly Privacy & Consumer Protection Committee amended and advanced the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (CA SB 1047). The changes would allow the proposed Frontier Model Division to raise the threshold for which models are regulated, amend the scope for models that have been fine-tuned, and clarify obligations for the initial developer of the model. The bill heads to the Judiciary Committee and is expected to be voted on in the Assembly in August.

  • Colorado: Last week, Gov. Jared Polis, Attorney General Phil Weiser, and Senate Majority Leader Robert Rodriguez drafted a letter acknowledging the need for changes to the landmark AI bill (CO SB 205) signed into law last month. The policymakers plan to focus on narrowing regulation to high-risk systems, shifting enforcement from a proactive disclosure regime to enforcement after violations, and clarifying the consumer right to appeal.

  • Delaware: The Senate Judiciary Committee advanced a bill (DE HB 353) Tuesday that provides civil and criminal remedies for wrongful disclosure of sexual deepfakes. The bill is eligible for a floor vote in the Senate, after having already passed the House on a 35-6 vote back in May. 

Notable Proposals

  • New Jersey: Last Thursday, Assemblyman Chris Tully (D) introduced a measure (NJ AB 4558) to establish the Next New Jersey Program to attract new investment in the artificial intelligence industry. Under the bill, the state could offer tax credits equal to the least of: 0.1 percent of total capital investment multiplied by the number of new full-time jobs; 25 percent of total capital investment; or $250 million.
