For anyone building or implementing AI for HR or hiring
Tuesday, September 12, 2017 at 9:00AM
Steve in AI, HR, HR Tech, Technology

You can't swing a hammer anywhere these days without hitting an 'AI in HR' article, prediction, webinar, talk, or HR conference session. Heck, we will have a fair bit of AI in HR talk at the upcoming HR Technology Conference in October.

But one important element that the AI in HR pieces usually fail to address adequately, if at all, is the potential for inherent bias, unfairness, or even worse to find its way into the algorithms that will increasingly seep into HR and hiring decisions. After all, this AI and these algorithms aren't (yet) able to construct themselves. They are all being developed by people, and as such are potentially subject to those people's own human imperfections. Said differently, what mechanism exists to protect the users, and the people the AI impacts, from the biases, unconscious or otherwise, of the creators?

I thought about this while reading an excellent essay on the Savage Minds anthropology blog, written by Sally Applin and titled Artificial Intelligence: Making AI in Our Images.

A quick excerpt from the piece (but you really should read the entire thing):

Automation currently employs constructed and estimated logic via algorithms to offer choices to people in a computerized context. At the present, the choices on offer within these systems are constrained to the logic of the person or persons programming these algorithms and developing that AI logic. These programs are created both by people of a specific gender for the most part (males), in particular kinds of industries and research groups (computer and technology), in specific geographic locales (Silicon Valley and other tech centers), and contain within them particular “baked-in” biases and assumptions based on the limitations and viewpoints of those creating them. As such, out of the gate, these efforts do not represent society writ large nor the individuals within them in any global context. This is worrying. We are already seeing examples of these processes not taking into consideration children, women, minorities, and older workers in terms of even basic hiring talent to create AI. As such, how can these algorithms, this AI, at the most basic level, be representative for any type of population other than its own creators?

A really challenging and provocative point of view on the dangers of AI being (seemingly) created by mostly male, mostly Silicon Valley types with mostly the same kinds of backgrounds.

At a minimum, for folks working on, or thinking of implementing, AI solutions in the HR space that will influence incredibly important, life-impacting decisions like who should get hired for a job, we owe it to those who are going to be affected by these AIs to ask a few basic questions.

Like, is the team developing the AI representative of a wide range of perspectives, backgrounds, nationalities, and races, and is it gender balanced?

Or, what internal QA mechanisms have been put in place to keep the kinds of human biases that Applin describes from seeping into the AI's own 'thought' processes? (One simple example of such a check is sketched after these questions.)

And finally, does the AI take into account differences in cultures, societies, and national or local identities that we humans seem to grasp pretty easily, but that an AI can have a difficult time comprehending?
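To make that QA question a bit more concrete, here is a rough sketch of one basic check a team could run: the classic 'four-fifths rule' adverse impact test, which compares each group's selection rate against the highest group's rate. The column names and data below are made up purely for illustration; they aren't drawn from any particular vendor's system.

    # A minimal, illustrative adverse impact check (the "four-fifths rule").
    # The column names ("group", "selected") are hypothetical placeholders.
    import pandas as pd

    def adverse_impact_ratios(decisions, group_col="group", selected_col="selected"):
        """Each group's selection rate divided by the highest group's rate."""
        rates = decisions.groupby(group_col)[selected_col].mean()
        return rates / rates.max()

    # Hypothetical screening decisions: 1 = advanced by the AI, 0 = screened out
    decisions = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B", "B"],
        "selected": [1, 1, 0, 1, 0, 0, 0],
    })

    ratios = adverse_impact_ratios(decisions)
    print(ratios[ratios < 0.8])  # groups falling below the four-fifths threshold

It's obviously not a complete answer, but if a vendor or an internal team can't show you at least this kind of basic, recurring check on the AI's outputs, that tells you something.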

Again, I encourage anyone at any level interested in AI in HR to think about these questions and more as we continue to chase more and 'better' ways to make the organization's decisions and interactions with people more accurate, efficient, and effective, and, let's hope, more equitable.

Article originally appeared on Steve's HR Technology (http://steveboese.squarespace.com/).
See website for complete article licensing information.