AI Laws Won’t Help You Deploy Effective AI

Compliance does not equal effectiveness when it comes to emerging technology.

Sep 6, 2023

Around the world, legislative bodies are rightfully attempting to regulate artificial intelligence, which Alphabet CEO Sundar Pichai has proclaimed to be more profound than fire or electricity. Legitimate, well-intentioned developers and employers have nothing to fear from these regulations, as nearly all of them set low, minimal thresholds of acceptability. Nonetheless, simply complying with emerging AI legislation is insufficient for ensuring that an application is truly useful and beneficial.

The most prominent emerging AI law for organizations is likely New York City’s Local Law 144. This law aims to encourage transparency in hiring by requiring organizations to post disparate-impact statistics on their websites to indicate the size of score differences between majority and minority groups.

However, the law doesn’t mandate any threshold for the size of that disparity. Worse, it defines AI vaguely; requires calculations on small, likely unreliable samples; and fails to consider whether the tool is job-relevant. As a result, the posted calculations will be all but uninterpretable to laypeople and will offer little indication of whether the AI implementation is fair or beneficial.
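To see why small samples make the posted numbers unreliable, consider the kind of impact-ratio arithmetic these bias audits involve: each group's selection rate is divided by the highest group's rate. The sketch below uses entirely hypothetical applicant counts, not data from any real audit:

```python
# Hypothetical audit data: applicants screened and selected per group.
# These numbers are illustrative only.
counts = {
    "group_a": {"applicants": 400, "selected": 120},
    "group_b": {"applicants": 25,  "selected": 5},   # tiny sample
}

# Selection rate for each group
rates = {g: d["selected"] / d["applicants"] for g, d in counts.items()}

# Impact ratio: each group's rate divided by the highest group's rate
top = max(rates.values())
impact_ratios = {g: r / top for g, r in rates.items()}

print(rates)          # group_a: 0.30, group_b: 0.20
print(impact_ratios)  # group_b: ~0.67 relative to group_a
```

With only 25 applicants in group_b, a single additional hire would move its impact ratio from 0.67 to 0.80, which is why ratios computed on small samples say little about whether a tool is actually fair.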

New York’s law may encourage employers to use AI tools with low levels of disparate impact, but it may simultaneously discourage the use of beneficial AI if organizations decide that publicly posting results is too risky.

Beyond promoting some level of transparency, the law is also duplicative: longstanding federal law already prohibits significant group differences in hiring outcomes unless the hiring process is shown to be job-relevant through established technical means.

There are many other emerging laws from jurisdictions including Maryland, California, and Washington, D.C. While these tend to be well-intentioned, it is exceedingly difficult for non-AI experts to craft helpful guidelines for technology that is complex and ever evolving.

The first major hurdle to useful regulation is understanding what AI is and what it is not. Laypeople hear the phrase “artificial intelligence” and immediately conjure Hollywood’s idea of a killer robot sent back in time from the future. The field’s moniker became prominent in the 1950s with statistical approaches that loosely mimicked how the neurons in a brain function, but the term today connotes a broad range of only loosely analogous techniques.

Differentiating AI: The Good vs. The Bad

There are applications of AI that are invasive of individual privacy, that attempt to make high-stakes decisions about our futures, that are black-box and impossible to comprehend, that learn and evolve without human control, that result in biased outcomes, and that rob us of human autonomy and consequence.

And there are also applications of AI that simply allow us to process and understand the world around us in vastly more powerful ways than previously possible, and that have the potential to improve the human condition and give us agency over the natural world in exciting and beneficial new ways.

Compliance is not free. It often imposes reporting requirements, but again, the actual standards tend to be quite low. To be useful for your organization, algorithms should do more than simply not show bias or not invade personal privacy. After all, in hiring, a simple coin flip would accomplish both of those goals, and for no cost at all.

From a business-user perspective, the most critical element beyond fairness and privacy is effectiveness: algorithms need to be good at their job. In employee selection, an algorithm should predict important outcomes such as job performance and retention. Any credible vendor of employment selection algorithms should be able to provide clear and compelling evidence supporting the tool’s effectiveness.

Moreover, companies using these algorithms should monitor them over time to ensure they continue working as intended. Just because an algorithm produced results in a training data set does not mean it will continue to produce results in the real world. The challenge with evaluating algorithmic effectiveness is that outcome data (we call it criterion data in psychometrics) is difficult to come by. But you can get it with planning and diligence.

And beyond these elements, consider factors such as visibility (is the function of the algorithm clear and known to all affected parties?), transparency (can the workings of the algorithm be understood?), and optimization (is the algorithm economical and can it be improved?). Ultimately, it is this combination of factors that drives algorithm usefulness, and these go far beyond the minimal standards imposed by emerging legislation.

AI techniques allow us to quantify and study the big-data world around us, and they offer the potential for stunning human progress, but only to the extent that these tools are carefully controlled. Following best practices will help your organization hire high-quality candidates, increase organizational performance and diversity, and build an ethical, humane employment brand. And in doing so, you will likely far exceed the minimum thresholds required by emerging legislation.
