Dec 14, 2022

A New York City law requiring employers to conduct bias audits of automated employment decision tools before using them will not take effect on Jan 1 as planned. Instead, enforcement of Local Law 144 of 2021 has been postponed until April 15 due to the high volume of public comments received.

The city is currently planning a second hearing on the measure, which would also “require that candidates or employees that reside in the city be notified about the use of such tools in the assessment or evaluation for hire or promotion,” as well as “be notified about the job qualifications and characteristics that will be used by the automated employment decision tool.”

The law essentially aims to ensure the ethical use of artificial intelligence, stipulating that an audit of the tech must be performed by an independent auditor. Additionally, an employer must publish the results on its website before implementing the AI program.

The ramifications of the law are clearly significant given that about 83% of U.S. employers use some form of AI during the hiring process. Employers deemed non-compliant will be subject to a $500 penalty for the first violation and $1,500 for each one that follows. Additionally, as reported earlier, “each day the AI tool is used will be considered a separate violation, and each candidate the employer fails to notify is similarly a separate violation.”
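For a rough sense of how that arithmetic compounds, here is a minimal Python sketch using the penalty schedule described above and entirely hypothetical counts; the actual assessment would, of course, be up to the city:

```python
# Illustrative only: a rough sketch of how the penalty schedule could compound,
# assuming each day of non-compliant use and each candidate who was not
# notified counts as a separate violation. All counts are hypothetical.

FIRST_VIOLATION = 500         # penalty for the first violation
SUBSEQUENT_VIOLATION = 1_500  # penalty for each violation after the first

def estimated_exposure(days_of_use: int, unnotified_candidates: int) -> int:
    """Estimate total penalty exposure for a given number of separate violations."""
    violations = days_of_use + unnotified_candidates
    if violations == 0:
        return 0
    return FIRST_VIOLATION + SUBSEQUENT_VIOLATION * (violations - 1)

# Hypothetical example: 30 days of use plus 200 candidates never notified.
print(estimated_exposure(days_of_use=30, unnotified_candidates=200))  # 344000
```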

The potential impact of the law has drawn a deluge of public comments, collected in a 179-page document published by the city’s Department of Consumer and Worker Protection. Below are some excerpts:

Too Brief, Too Broad

“Given that Criteria does NOT use artificial intelligence or machine learning in our assessments, we understand that we are not impacted by this upcoming New York City law. However, the law as it is currently written is very brief, and subsequently very broad…

“[I]f a company in New York City wanted to hire an Accountant, and they used a math test as part of the hiring process, they could potentially be fined based on this law. If a company in NYC wanted to hire an HR Manager who is tolerant and empathetic, and they assessed job candidates for that position with an online test that measured tolerance and empathy, they could potentially be fined under this law. If a company in NYC wanted to hire a leader who is strong in integrity, they could potentially be fined if they used an assessment to hire that person…

“We therefore ask that the law be written to more clearly articulate its true spirit, namely, to reduce or eliminate the use of artificial intelligence and machine learning in selection processes.” — Brad Schneider, Vice President of Strategic Consulting, Criteria Corp.

Unintended Consequence of Adverse Impact

“[W]e recommend omitting the requirement to publish the selection rates and impact ratios for all categories and instead require a summary statement on adverse impact… Publishing the specific information contemplated by the proposed regulations could inadvertently undermine the goals of the Ordinance.

“For example, it may discourage applicants from groups that are selected less frequently from applying to an organization at all, hampering efforts to attract a diverse workforce. Moreover, requiring the public disclosure of such specific information could disincentivize companies from conducting thorough audits to avoid possible results that may not be optimal.” — Matt Scherer, Senior Policy Counsel for Workers’ Rights Privacy and Data Project, BSA The Software Alliance; and Ridhi Shetty, Policy Counsel, Privacy and Data Project, BSA The Software Alliance
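For context on what the “selection rates and impact ratios” at issue look like, here is a small illustrative Python sketch. The categories and counts are invented, and the impact-ratio calculation (a category’s selection rate divided by the highest category’s selection rate) reflects a plain reading of the proposed rules, not official guidance:

```python
# Illustrative only: the kind of "selection rate" and "impact ratio" figures
# a published bias audit would contain, computed from invented counts.
# Impact ratio = a category's selection rate divided by the highest
# category selection rate (our reading of the proposed rules).

def audit_metrics(counts):
    """counts: {category: (applicants, selected)} ->
       {category: (selection_rate, impact_ratio)}"""
    rates = {cat: sel / apps for cat, (apps, sel) in counts.items() if apps}
    best = max(rates.values())
    return {cat: (rate, rate / best) for cat, rate in rates.items()}

# Hypothetical screening outcomes by category.
example = {
    "Category A": (400, 120),  # 30% selected
    "Category B": (250, 55),   # 22% selected
    "Category C": (50, 9),     # 18% selected
}
for cat, (rate, ratio) in audit_metrics(example).items():
    print(f"{cat}: selection rate {rate:.2f}, impact ratio {ratio:.2f}")
# Category A: selection rate 0.30, impact ratio 1.00
# Category B: selection rate 0.22, impact ratio 0.73
# Category C: selection rate 0.18, impact ratio 0.60
```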

A Barrier to Finding Talent

“It is essential that these regulations are implemented by New York City in a manner that does not impose overly broad requirements, which in turn could create significant uncertainty regarding the use of automated tools in hiring.

“Potential limitations of the use of technology for hiring purposes for businesses could lead to unnecessary barriers to finding qualified candidates for a job; this is particularly challenging during periods when we see both labor shortages and increases in the labor market, as businesses are put in a position where they receive more resumes/applications than they have the capability to review, which inhibits their ability to identify potential candidates. The use of automated employment decision tools is essential in helping streamline the hiring and promotion process.” — Tom Quaadman, Executive Vice President, Chamber Technology Engagement Center, U.S. Chamber of Commerce 

The Problem of Accuracy

“If certain groups opt out of providing demographic information at higher rates than others, the resulting audits would be inaccurate and misleading. This problem would be particularly acute in smaller sample sizes, and is compounded by the requirements to conduct intersectional analysis, which further diverges from existing U.S. standards. If companies have a small pool of relevant job postings or applicants, aberrations that are not otherwise statistically significant could paint a picture of an employer that does not accurately reflect reality.

“Would the Department consider a trigger that ensures these audits would be required only in cases where the volume or quality of data available would produce useful and actionable audits that provide statistically significant information?” — Christopher Gilrein, Executive Director, Northeast, TechNet
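TechNet’s small-sample concern is easy to see with a toy calculation. In the hypothetical sketch below, a single additional selection in a ten-person category moves the published impact ratio by 0.20:

```python
# Illustrative only: with a ten-person category, one extra selection moves the
# published impact ratio by 0.20. Numbers are hypothetical.

BEST_RATE = 0.50         # assume the most-selected category's rate is 50%
GROUP_APPLICANTS = 10    # a small category in a small applicant pool

for selected in (3, 4):  # differ by a single hiring decision
    rate = selected / GROUP_APPLICANTS
    print(f"{selected}/10 selected -> impact ratio {rate / BEST_RATE:.2f}")
# 3/10 selected -> impact ratio 0.60
# 4/10 selected -> impact ratio 0.80
# One decision is the difference between falling below and meeting the
# commonly cited four-fifths (0.80) benchmark.
```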

Human vs. AI Decisions

“We encourage the final regulations to explicitly exclude automated tools that carry out human-directed assessments. For example, some employers might use a scheduling tool that captures employee availability for purposes of both shift scheduling and candidate evaluation. That tool may ‘generate a prediction’ where ‘a computer … identifie[d] the inputs’ and decided the ‘relative importance placed on those inputs … in order to improve the accuracy of the prediction or classification,’ which involved ‘inputs and parameters’ that were ‘refined through cross-validation or by using training and testing data.’ And the purpose of this tool may be ‘to screen candidates for employment’ by virtue of determining and calculating how to best schedule individuals to work.

“Moreover, with large candidate pools, employers rely on automation to screen candidates based on core job-related decision points such as educational attainment or relevant licensure. Presumably, these types of tools are not intended to be covered by this law, …and the continued use of such tools would be untenable in the face of the requirements and penalties.” — Emily M. Dickens, Chief of Staff & Head of Government Affairs, SHRM
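To see how broadly the quoted definition can read, consider a toy illustration: a model that does nothing but predict shift coverage from availability still identifies inputs, learns their relative importance, and is tuned with cross-validation. The features, data, and library below are invented for the example, not anything the law or rules prescribe:

```python
# Illustrative only: a toy "shift coverage" predictor of the sort SHRM
# describes. It exists to schedule workers, yet it identifies inputs, learns
# their relative importance, and is tuned with cross-validation, the very
# language of the quoted definition. Features and data are hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical inputs per worker: weekend hours available, evening hours
# available, distance to the work site.
X = rng.uniform(0, 20, size=(200, 3))
# Hypothetical label: whether the worker historically covered assigned shifts.
y = (X[:, 0] + X[:, 1] - 0.5 * X[:, 2] + rng.normal(0, 3, 200) > 15).astype(int)

model = LogisticRegression(max_iter=1000)
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())

model.fit(X, y)
# "Relative importance placed on those inputs," learned rather than human-set:
print("learned weights:", model.coef_.round(2))
```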

A Vendor-Employer Conundrum

“Because the guidance assumes it [the bias audit] will be produced by employers, who may not have access to match scores at scale provided by their vendor, many of them have requested copies of this report from their vendors for their own websites. This presents a catch-22 for the vendors, in that they largely intentionally limit their collection practices for demographic candidate data, in order to comply with conflicting laws that require data minimization and candidate privacy like the GDPR.

“In the absence of self-reported data at application time, demographic data is difficult (and costly) to obtain. Candidates are often reluctant to provide this data post-hoc, as is well known in the financial industry, [which is] permitted to collect this data only after credit applications [are submitted] for compliance with fair lending laws, resulting in very low survey response rates.” — The Parity Team

Failure to Address Sourcing

“The proposed rules do not address applicant sourcing. The pool of applicants an employer reviews, as well as the method the employer uses to generate, or ‘source,’ the pool, can contribute to bias in the hiring process. For example, suppose an employer manages to (unfairly) target its recruiting so that it attracts an overwhelmingly White applicant pool. These applicants then pass through a resume-filtering AEDT [and are] screened out at comparable rates across race groups. The result will be an overwhelmingly White set of hires.

“Given the proposed definition of a Bias Audit, this would not be seen as a problem. Conversely, when the labor pool for a given job is unbalanced, Bias Audits that address sourcing would effectively incorporate this context. For instance, if 80% of security guards are male, then we should not be surprised if applicant pools for security guard jobs skew male.” — ORCAA
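ORCAA’s point can be made concrete with a small hypothetical: if sourcing yields a pool that is 90% White and the AEDT passes every group at exactly the same rate, the published impact ratios all come out to 1.0, while the advanced pool simply mirrors the sourcing skew:

```python
# Illustrative only: sourcing skew passes a selection-rate audit untouched.
# All counts are hypothetical.

pool = {"White": 900, "Black": 50, "Hispanic": 50}  # applicants after sourcing
PASS_RATE = 0.20                                    # AEDT passes every group equally

advanced = {group: int(n * PASS_RATE) for group, n in pool.items()}
rates = {group: advanced[group] / pool[group] for group in pool}
best = max(rates.values())
impact_ratios = {group: round(rates[group] / best, 2) for group in pool}

print(advanced)       # {'White': 180, 'Black': 10, 'Hispanic': 10}
print(impact_ratios)  # every ratio is 1.0, so the audit flags nothing,
                      # yet 90% of those advanced are White, because the audit
                      # never looks at how the pool was sourced.
```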
