We’ve all seen the stories about Amazon shutting down an AI-powered recruitment system because it discriminated against women. The story made a lot of news because of Amazon, but evidence that products with decision-making capabilities show bias against one or more minority groups has been gathering for some time. Less well known is a recent lawsuit filed by the ACLU against Facebook that alleges the company allowed several employers to target job ads at male users only. The scope of the problem remains unclear, but it’s widespread enough that governments everywhere are stepping in to try to correct it.
The Illinois legislature is debating a bill (HB3415) that would make it illegal to base hiring decisions on analytics derived from data that correlates with an applicant’s race or zip code. If it becomes law, any employer that hires more than 50 employees annually and uses predictive data analytics to evaluate applicants must develop procedures to ensure that the algorithms involved are compliant, i.e., not influenced by race or zip code. Similar legislation is being debated in Washington and Massachusetts.
Federal Law — The Algorithmic Accountability Act
In the U.S., Congress is debating a bill that would require employers to test their algorithms and proactively fix anything “inaccurate, unfair, biased, or discriminatory.” The bill, if it becomes law, would apply to companies with $50 million or more in revenues, those that hold information on at least a million people or devices, or those that primarily act as data brokers.
Laws in Other Countries
Legislation regulating the use of AI and requiring developers to demonstrate their systems are unbiased is being developed in many countries. The Japanese government has issued guidelines to increase transparency around how AI makes decisions, including those related to hiring. The British government is drafting new laws and updating current ones to protect citizens from biased AI decisions. Australia has created an “ethics framework” for the development of AI, which will serve as the basis for laws.
The most overarching legislation being developed is in the European Union, based on the European Industrial Policy on Artificial Intelligence, approved in January. The policy directs the EU’s member states to “identify and take all possible measures to prevent or minimize algorithmic discrimination.” No specific laws have been proposed yet, but hiring will certainly be a key area of focus for any that are developed. As with GDPR, the impact of these will be global.
The Impact on Recruiting
AI developers have pretty much had things their own way until now, but trust in tech companies has been eroding, and confidence that they can self-regulate has largely evaporated. So lawmakers are looking to hold the developers of AI products responsible for the decisions those products make. Part of the problem has to do with the hype surrounding AI, which has been promoted as the stairway to heaven but, as Amazon and others have found, can be the road to hell. The boom in AI-related products will likely slow as a consequence of these laws. At a minimum, adoption will slow because employers will need to be convinced that their use of AI in recruiting does not put them in violation of whatever laws are passed.
Concerns about bias in AI should reassure recruiters that they are far from going extinct. Recruiting is a process that requires many judgment calls, and those judgments are highly susceptible to the biases of the people involved. But unlike AI, recruiters are also best placed to detect these biases and make corrections. Advancements in AI may well make it self-correcting, but we’re not there yet.