Tuesday, September 12, 2017

    For anyone building or implementing AI for HR or hiring

    You can't swing a hammer anywhere these days without hitting an 'AI in HR' article, prediction, webinar, talk, or HR conference session. Heck, we will have a fair bit of AI in HR talk at the upcoming HR Technology Conference in October.

But one important element that the 'AI in HR' pieces usually fail to address adequately, if at all, is the potential for inherent bias, unfairness, or even worse to find its way into the algorithms that will seep into HR and hiring decisions more and more. After all, this AI and these algorithms aren't (yet) able to construct themselves. They are all being developed by people, and as such are certainly subject, potentially, to those people's own human imperfections. Said differently, what mechanism exists to protect the users, and the people the AI impacts, from the biases, unconscious or otherwise, of the creators?

I thought about this while reading an excellent essay on the Savage Minds anthropology blog, written by Sally Applin, titled Artificial Intelligence: Making AI in Our Images.

A quick excerpt from the piece (but you really should read the entire thing):

    Automation currently employs constructed and estimated logic via algorithms to offer choices to people in a computerized context. At the present, the choices on offer within these systems are constrained to the logic of the person or persons programming these algorithms and developing that AI logic. These programs are created both by people of a specific gender for the most part (males), in particular kinds of industries and research groups (computer and technology), in specific geographic locales (Silicon Valley and other tech centers), and contain within them particular “baked-in” biases and assumptions based on the limitations and viewpoints of those creating them. As such, out of the gate, these efforts do not represent society writ large nor the individuals within them in any global context. This is worrying. We are already seeing examples of these processes not taking into consideration children, women, minorities, and older workers in terms of even basic hiring talent to create AI. As such, how can these algorithms, this AI, at the most basic level, be representative for any type of population other than its own creators?

A really challenging and provocative point of view on the dangers of AI being (seemingly) created by mostly male, mostly Silicon Valley types with mostly the same kinds of backgrounds.

At a minimum, for folks working on or thinking of implementing AI solutions in the HR space that will drive incredibly important, life-impacting decisions like who should get hired for a job, we owe it to those who are going to be affected by these AIs to ask a few basic questions.

Like, is the team developing the AI representative of a wide range of perspectives, backgrounds, nationalities, and races, and is it gender balanced?

Or, what internal QA mechanisms have been put in place to prevent the kinds of human biases that Applin describes from seeping into the AI's own 'thought' processes?

And finally, does the AI take into account differences in cultures, societies, and national or local identities that we humans seem to be able to grasp pretty easily, but that an AI can have a difficult time comprehending?

    Again, I encourage anyone at any level interested in AI in HR to think about these questions and more as we continue to chase more and 'better' ways to make the organization's decisions and interactions with people more accurate, efficient, effective - and let's hope - more equitable.


    Reader Comments (4)

    I hadn't thought about the creator of the HR AI influencing the program so much that it would be biased, but that makes sense. Further, I assume that these types of software will give users the option to customize the way the candidates are filtered out. This could lead to biases creeping in, as well. I think the best way is to use an ATS and collaborative hiring (see: http://bit.ly/2h3kiuI). This way, you reduce the chances of biases (unconscious and otherwise) swaying hiring decisions. You can clearly see who is the best fit for the position based on clearly laid-out skillsets determined by the whole team (more eyes = less particular bias).

    While this may not be true for all of the white, male software engineers working together, it may be the push employers need that tout diversity. They already have the right mindset, they just need to put it to action. Thanks for sharing! Super interesting.

    September 13, 2017 | Unregistered CommenterBeth



    Hi Steve, thanks for the kind words on my post. Happy to speak at your HR conference on this. Cheers, Sally Applin

    September 30, 2017 | Unregistered Commentersally applin
