On Nov. 10, 2021, the New York City Council enacted legislation that requires employers to conduct a “bias audit” of artificial intelligence technology used to review, select, rank, or eliminate people for employment or promotion. The law takes effect on Jan. 1, 2023, less than half a year from now, so it is critical to brush up now on the nature of the law and what it demands of employers.
The required audit must be performed by an independent auditor, and its results must be published on the employer’s website before the employer implements the AI programs.
This is a significant new law. Approximately 83% of U.S. employers currently use some form of AI in their hiring processes, often solely to find the right candidates for job vacancies more efficiently. Once the law takes effect, AI programs that are not properly constructed and evaluated may be treated as perpetuating unlawful discrimination, and that discrimination is punishable by fines.
Noncompliant employers will be subject to a civil penalty of $500 for the first violation and $1,500 for each subsequent one. On top of this, each day the AI tool is used counts as a separate violation, as does each candidate the employer fails to notify.
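To see how quickly these penalties compound, here is a minimal sketch, assuming each day of non-compliant use counts as one violation (per-candidate notification failures would stack on top of this; the function name and the 30-day scenario are illustrative, not from the law):

```python
# Illustrative only: penalty exposure if each day of non-compliant
# use of an AI tool counts as a separate violation.

FIRST_VIOLATION = 500        # civil penalty for the first violation
SUBSEQUENT_VIOLATION = 1_500 # penalty for each subsequent violation

def penalty_exposure(days_of_use: int) -> int:
    """Total civil penalty: $500 for the first day of non-compliant
    use, $1,500 for every day after that."""
    if days_of_use <= 0:
        return 0
    return FIRST_VIOLATION + (days_of_use - 1) * SUBSEQUENT_VIOLATION

# One month of daily non-compliant use:
print(penalty_exposure(30))  # 500 + 29 * 1,500 = 44,000
```

Even this simplified tally reaches $44,000 after a single month, before counting any notification failures.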
Looming Legal Problems
Even though employers may have no intention of discriminating against job applicants, unaudited AI programs open them up to potential Title VII discrimination claims. Title VII prohibits not only overt employment discrimination but also policies and practices “that are fair in form, but discriminatory in operation.”
Thus, an employer is susceptible to a Title VII lawsuit if it uses an AI program to cull through applications and that program, while neutral on its face, produces an adverse effect on the basis of a protected characteristic, such as race or gender.
The employer would then have to establish that the program did not have a disparate impact or concede that while the program had a disparate impact, it was “job related for the position in question and consistent with business necessity.”
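A common yardstick for disparate impact of this kind is the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures: if one group’s selection rate falls below 80% of the most-favored group’s rate, that is generally treated as evidence of adverse impact. The sketch below illustrates the arithmetic; the selection counts are invented for the example, and this is not the statutory audit method prescribed by the New York law:

```python
# Illustration of the EEOC four-fifths rule with made-up numbers.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def impact_ratio(group_rate: float, highest_rate: float) -> float:
    """Ratio of a group's selection rate to the most-selected group's
    rate. A ratio below 0.8 is commonly treated as evidence of
    adverse impact under the four-fifths rule."""
    return group_rate / highest_rate

# Hypothetical screening outcomes for two applicant groups:
rate_a = selection_rate(60, 100)  # 0.60
rate_b = selection_rate(30, 100)  # 0.30

ratio = impact_ratio(rate_b, max(rate_a, rate_b))
print(ratio)  # 0.5 -- below the 0.8 threshold
```

In this hypothetical, group B is selected at only half the rate of group A, well under the four-fifths threshold, which is exactly the kind of pattern a bias audit is meant to surface before litigation does.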
Even if employers can meet this job-related burden, they may have spent more in legal fees than if they had conducted the bias audit that New York now requires.
The use of AI programs could also expose an employer to class-action litigation. By running all candidates’ applications through a single AI program, an employer makes employment decisions about its entire applicant pool based on a single selection device: the AI algorithm.
Using a single selection device this way may expose AI-using employers to employment discrimination claims and class actions that were previously limited to situations where employers used standardized employment tests as the single selection device.
Filling the Federal Void
Given the real potential for discrimination, one might have expected the federal government to have stepped in by now to regulate the use of AI programs. To date, though, the federal government has not enacted any laws that directly address the use of AI in hiring practices.
Senators Cory Booker, Elizabeth Warren, and several of their colleagues recently called on the U.S. Equal Employment Opportunity Commission (EEOC) to proactively investigate and audit AI programs and their effects on protected classes.
And so in October 2021, the EEOC announced an initiative examining AI programs used in hiring and other employment decisions. In an effort to oversee and regulate employers’ use of AI programs, the EEOC intends to “launch a series of listening sessions with key stakeholders about algorithmic tools and their employment ramifications.” So far, though, the initiative has not published any of its findings.