
Wednesday
November 8, 2017

    Looking for bias in black-box AI models

    What do you do when you can't sleep?

    Sometimes I watch replays of NBA games, (how about my Knicks?), and sometimes I read papers and articles that I had been meaning to get to, but for one reason or another hadn't made the time.

    That is how I spent an hour or so with 'Detecting Bias in Black-Box Models Using Transparent Model Distillation', a recently published paper by researchers at Cornell, Microsoft, and Airbnb. I know, not exactly 'light' reading.

Full disclosure: I don't profess to have understood all the details and complexity of the study and research methods, but the basic premise of the research, and the problem the researchers are looking to solve, is one I do understand, and one that you should too as you think about incorporating AI technologies into workplace processes and decision support/making.

    Namely, that AI technology can only be as good and as accurate as the data it’s trained on, and in many cases we end up incorporating our human biases into algorithms that have the potential to make a huge impact on people’s lives - like decisions about whom to hire and promote and reward.

In the paper, the researchers built models that mimic the ones used by companies that produce 'risk scores': the kind of score a bank uses to decide whether or not to give someone a loan, or a judicial administration uses to decide whether or not to grant someone early parole. This first set of models is similar to the ones that these companies use themselves.

Then the researchers created a second, transparent model trained on the actual outcomes that the first set of models is designed to predict: whether or not the loans were paid back, and whether or not the parolee committed another crime. Importantly, these models did include data points that most of us, especially in HR, are trained to ignore, things like gender, race, and age. The researchers did this intentionally, and rather than me trying to explain why that is important, read through this section of the paper where they discuss the need to assess these kinds of 'off-limits' data elements, (emphasis mine):

Sometimes we are interested in detecting bias on variables that have intentionally been excluded from the black-box model. For example, a model trained for recidivism prediction or credit scoring is probably not allowed to use race as an input to prevent the model from learning to be racially biased. Unfortunately, excluding a variable like race from the inputs does not prevent the model from learning to be biased. Racial bias in a data set is likely to be in the outcomes — the targets used for learning; removing the race input variable does not remove the bias from the targets. If race was uncorrelated with all other variables (and combinations of variables) provided to the model as inputs, then removing the race variable would prevent the model from learning to be biased because it would not have any input variables on which to model the bias. Unfortunately, in any large, real-world data set, there is massive correlation among the high-dimensional input variables, and a model trained to predict recidivism or credit risk will learn to be biased from the correlation between other input variables that must remain in the model (e.g., income, education, employment) and the excluded race variable because these other correlated variables enable the model to more accurately predict the (biased) outcome, recidivism or credit risk. Unfortunately, removing a variable like race or gender does not prevent a model from learning to be biased. Instead, removing protected variables like race or gender makes it harder to detect how the model is biased because the bias is now spread in a complex way among all of the correlated variables, and also makes correcting the bias more difficult because the bias is now spread in a complex way through the model instead of being localized to the protected race or gender variables.
The main benefit of removing a protected variable like race or gender from the input of a machine learning model is that it allows the group deploying the model to claim (incorrectly) that the model is not biased because it did not use the protected variable.

This is really interesting, and counter-intuitive to how most of us, (me for sure), would think about ensuring that the AI and algorithms we want to deploy, say to evaluate data sets for a process meant to provide decision support for the 'Who should we interview for our software engineer opening?' question, are free of bias.

    I'm sure we've seen or heard about AI for HR solutions that profess to eliminate biases like the ones that have existed around gender, race, and even age from important HR processes by 'hiding' or removing the indicators of such protected and/or under-represented groups.

This study suggests that removing those indicators from the process and the design of the AI is exactly the wrong approach - and that large data sets and the AI itself can and will 'learn' to be biased anyway.
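The mechanism the paper describes, bias surviving the removal of a protected variable because correlated proxies remain in the inputs, can be illustrated with a tiny simulation. Everything below is synthetic and hypothetical, not from the paper: a made-up 'zip_risk' feature stands in for any input that happens to correlate with a hidden protected attribute.

```python
import random

random.seed(0)

# Hypothetical synthetic data: 'group' is a protected attribute that is
# EXCLUDED from the model's inputs, but 'zip_risk' is correlated with it.
data = []
for _ in range(10_000):
    group = random.random() < 0.5                        # protected attribute (hidden)
    zip_risk = random.gauss(1.0 if group else 0.0, 0.5)  # proxy correlated with group
    # Biased historical outcome: driven by the proxy, and thus by the group
    outcome = random.random() < (0.7 if group else 0.3)
    data.append((group, zip_risk, outcome))

# A "model" trained only on zip_risk (group removed): its score IS zip_risk.
scores_a = [zip_risk for group, zip_risk, _ in data if group]
scores_b = [zip_risk for group, zip_risk, _ in data if not group]

mean_a = sum(scores_a) / len(scores_a)
mean_b = sum(scores_b) / len(scores_b)
print(f"mean risk score, group A: {mean_a:.2f}")
print(f"mean risk score, group B: {mean_b:.2f}")
# The scores still differ sharply by group even though 'group' was never
# an input -- the correlated proxy variable carries the bias.
```

The researchers' transparent second model is, in spirit, a way of putting the protected attribute back in so that this kind of gap becomes visible and measurable rather than hidden in the proxies.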

    Really powerful and interesting stuff for sure.

    As I said, I don't profess to get all the details of this research but I do know this. If I were evaluating an AI for HR tool for something like hiring decision support, I probably would ask these questions of a potential provider:

    1. Do you include indicators of a candidate's race, gender, age, etc. in the AI/algorithms that you apply in order to produce your recommendations?

    If their answer is 'No we don't include those indicators.'

    2. Then, are you sure that your AI/algorithms aren't learning how to figure them out anyway, i.e., are still potentially biased against under-represented or protected groups?

    Important questions to ask, I think.

Back to the study, (in case you don't slog all the way through it): the researchers concluded that both of the large AI tools they examined, (loan approvals and parole approvals), still exhibited biases that their creators professed to have 'engineered' away. And chances are, had the researchers trained their sights on one of the HR processes where AI is being deployed, they would have found the same thing.

    Have a great day!

Wednesday
November 1, 2017

    HRE Column: Wrapping up HR Tech, and Looking Forward to 2018

    Once again, I offer my semi-frequent reminder and pointer for blog readers that I also write a monthly column at Human Resource Executive Online called Inside HR Tech that can be found here.

This month, I take a look back at the recently concluded HR Technology Conference and review some of the key issues, themes, and the implications for the future of HR Tech that I took away from the world's largest gathering of the HR technology community. In the piece, I take a look at some of the more interesting trends and themes in HR tech that we have been hearing about for some time now, and some newer ideas that have emerged in the last year or so. These issues, challenges, and opportunities will demand continuing focus for HR and business leaders in 2018 and beyond, and I imagine will be a big part of my planning for HR Tech in 2018 as well.

    I was really pleased with the energy, insight, and most of all the amazing group of HR leaders who attended HR Tech a few weeks ago, as well as our first-class lineup of speakers and exhibitors. I can't thank you all enough for making this last HR Tech the best event in our history.

    Moving forward, I am incredibly excited to get started working on HR Tech in 2018, and I will be sharing much of the concepts, ideas, and themes during the year on this blog, in the HRE Inside HR Tech column, as well as the HR Happy Hour Show.

    Having said that, here's a taste of the HRE piece:

The HR Tech Conference held earlier this month serves almost as an annual report card, health check and starting point for where HR technology will head in the next year, from the latest developments in mobile, analytics and cloud technology to a look at some of the technologies that are coming next, including artificial intelligence, augmented reality and even blockchain.

    Reflecting on everything that went on at the conference, here are some thoughts about what HR and HRIT leaders should really have top of mind as 2017 winds down and organizations begin planning for 2018.

    Cloud, Mobile, Analytics: Not "If?" but "When?"

    If you look back over the past few years of HR-technology-trends articles, you'd find that the migration of HR systems to the cloud, adoption and greater rollout of mobile HR solutions, and an increased focus on HR analytics were mentioned in just about every piece. As the 2017 HR Tech Conference clearly demonstrated, all these trends/predictions starting in 2010 or so have been (or are in the process of being) realized in most organizations and by most HR technology providers.

The potential for increased HR innovations that arise from having a solid foundation of core HR systems is being realized by organizations of all sizes. And that is an important point as well. A quick check of the many cloud-based HR technologies that are specifically targeting and serving small- and mid-market businesses reveals that the most innovative HR technologies are available at almost any scale. And these so-called mid-market solutions have mostly been built from the ground up -- with cloud, mobile and analytics at their core.

    Wellness, Experience, Productivity

    During Josh Bersin's closing keynote at HR Tech, he talked about a couple of key trends that are combining to shape and direct more organizational attention and resources to employee and organizational wellness. The first is the idea of the overwhelmed employee: one who is barraged by a combination of incessant interruptions from email and smartphone notifications and apps, highly complex business systems and processes, and a general increase in working hours which all compound the challenge of achieving work/life balance. One of the strategies that HR leaders and organizations are increasingly adopting (and applying associated technology solutions to support these strategies) is more thoughtful and measurable programs to address and improve employee well-being...

    Read the rest at HR Executive Online...

    If you liked the piece you can sign up over at HRE to get the Inside HR Tech Column emailed to you each month. There is no cost to subscribe, in fact, I may even come over and re-surface your driveway, take your dog for a walk, rake up your leaves, and eat your leftover Halloween candy.

    Have a great day and Happy First Day of November!

Thursday
October 19, 2017

    Digital Talent Profiles and the Blockchain

I'm still unwinding a bit from last week's HR Tech Conference, and one of the things I like to think about after the event is more of a question, I suppose. Namely, 'Were there any trends or new technologies that we should have paid more attention to at the event, and that should be featured next time?'

About two or three weeks before the event, a friend of mine contacted me to ask whether we (the conference) were going to showcase any blockchain technology, and how this developing tech can or will be used in HR, talent, or recruiting. My short answer was 'no', as I had not really seen or heard much on that front in 2017, no one (that I can recall) specifically pitched me any blockchain-powered tools to review, and frankly, I only kind of understand what the whole thing is about myself.

    For folks who may have no idea what I am talking about, from our pals at Wikipedia on the Blockchain:

A blockchain is a continuously growing list of records, called blocks, which are linked and secured using cryptography. Each block typically contains a hash pointer as a link to a previous block, a timestamp, and transaction data. By design, blockchains are inherently resistant to modification of the data. A blockchain can serve as "an open, distributed ledger that can record transactions between two parties efficiently and in a verifiable and permanent way." For use as a distributed ledger, a blockchain is typically managed by a peer-to-peer network collectively adhering to a protocol for validating new blocks.

This makes blockchains suitable for the recording of events, medical records, and other records management activities, such as identity management, transaction processing, documenting provenance, or food traceability.

    That doesn't seem too tough to understand, right?

    A data repository that is secure, verifiable, can record and store all kinds of data types, and can be widely distributed and shared.
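That description, blocks of data linked by cryptographic hashes, can be sketched in a few lines. This is a hypothetical toy, not any real blockchain implementation; there is no peer-to-peer network or consensus protocol here, just the hash-linking that makes tampering detectable. The credential records are made up for illustration.

```python
import hashlib
import json
import time

def make_block(data, prev_hash):
    """Build a block that links to the previous block's hash (minimal sketch)."""
    block = {
        "timestamp": time.time(),
        "data": data,
        "prev_hash": prev_hash,
    }
    # The block's own hash covers its contents AND the previous hash,
    # which is what chains the blocks together.
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

# A tiny chain of (hypothetical) credential records
genesis = make_block({"credential": "B.S. Computer Science"}, prev_hash="0" * 64)
second = make_block({"credential": "PMP Certification"}, prev_hash=genesis["hash"])

# Tampering with an earlier block breaks the link: recomputing its hash
# no longer matches what the next block recorded.
tampered = dict(genesis, data={"credential": "Ph.D."})
tampered.pop("hash")
recomputed = hashlib.sha256(
    json.dumps(tampered, sort_keys=True).encode()
).hexdigest()
print(recomputed == second["prev_hash"])  # False -- the tampering is detectable
```

The verification property in the MIT example below works the same way: anyone holding the chain can recompute the hashes and confirm that nothing in a recorded credential has been altered.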

Thinking about it that way, it certainly seems like there would be, or could be, some applications of this technology in HR and talent technologies.

    Before we jump to that, check out this example of how a form of Blockchain is being applied in the Higher Ed space - as a way to electronically distribute and validate student credentials and degrees:

    The Massachusetts Institute of Technology is offering some students the option to be awarded tamper-free digital degree certificates when they graduate, in partnership with Learning Machine. Selected students can now choose to download a digital version of their degree certificate to their smartphones when they graduate, in addition to receiving a paper diploma.

Using a free, open-source app called Blockcerts Wallet, students can quickly access a digital diploma that can be shared on social media and verified by employers to ensure its authenticity. The digital credential is protected using block-chain technology. The block chain is a public ledger that offers a secure way of making and recording transactions, and is best known as the underlying technology of the digital currency Bitcoin.

    An interesting application of Blockchain to share and allow the verification of student degrees by employers, banks, and whomever else would need access to a student's degree information.

    To jump back to HR/Talent, it makes perfect sense then that a similar Blockchain protected employee talent profile could be created for an individual person that could include not only the degree and academic information like in the MIT example, but also work products, verifiable job histories, certifications and skills assessments, and maybe even things like recommendations and testimonials. And all stored in a secure and distributed way - perhaps a way for a candidate to share their profiles with a number of companies at once without having to go through tedious and repetitive job applications for each one. Or maybe in some kind of talent repository for temp, gig, and contract workers to submit their availability and credentials in talent marketplaces.

    There are probably going to be lots more applications of Blockchain in enterprises coming soon, and I will be on the lookout for innovative HR and talent applications for next year's HR Tech.

    If you are a provider doing something interesting in this 'Blockchain for HR' space, get in touch, I'd be interested in learning more.

    Have a great day!

Tuesday
September 12, 2017

    For anyone building or implementing AI for HR or hiring

    You can't swing a hammer anywhere these days without hitting an 'AI in HR' article, prediction, webinar, talk, or HR conference session. Heck, we will have a fair bit of AI in HR talk at the upcoming HR Technology Conference in October.

But one of the important elements that the AI in HR pieces usually fail to address adequately, if at all, is the potential for inherent bias, unfairness, or even worse finding their way into the algorithms that will seep into HR and hiring decisions more and more. After all, this AI and these algorithms aren't (yet) able to construct themselves. They are all being developed by people, and as such, are certainly subject, potentially, to those people's own human imperfections. Said differently, what mechanism exists to protect the users, and the people the AI impacts, from the biases, unconscious or otherwise, of the creators?

I thought about this while reading an excellent essay on the Savage Minds anthropology blog written by Sally Applin, titled Artificial Intelligence: Making AI in Our Images.

A quick excerpt from the piece, (but you really should read the entire thing):

    Automation currently employs constructed and estimated logic via algorithms to offer choices to people in a computerized context. At the present, the choices on offer within these systems are constrained to the logic of the person or persons programming these algorithms and developing that AI logic. These programs are created both by people of a specific gender for the most part (males), in particular kinds of industries and research groups (computer and technology), in specific geographic locales (Silicon Valley and other tech centers), and contain within them particular “baked-in” biases and assumptions based on the limitations and viewpoints of those creating them. As such, out of the gate, these efforts do not represent society writ large nor the individuals within them in any global context. This is worrying. We are already seeing examples of these processes not taking into consideration children, women, minorities, and older workers in terms of even basic hiring talent to create AI. As such, how can these algorithms, this AI, at the most basic level, be representative for any type of population other than its own creators?

    A really challenging and provocative point of view on the dangers of AI being (seemingly) created by mostly male mostly Silicon Valley types, with mostly the same kinds of backgrounds. 

At a minimum, for folks working on or thinking of implementing AI solutions in the HR space that will impact incredibly important, life-impacting decisions like who should get hired for a job, we owe it to those who are going to be affected by these AIs to ask a few basic questions.

Like, is the team developing the AI representative of a wide range of perspectives, backgrounds, nationalities, and races, and is it gender balanced?

    Or what internal QA mechanisms have been put into place to protect against the kinds of human biases that Applin describes from seeping into the AI's own 'thought' processes?

And finally, does the AI take into account differences in cultures, societies, and national or local identities that we humans seem to be able to grasp pretty easily, but that an AI can have a difficult time comprehending?

    Again, I encourage anyone at any level interested in AI in HR to think about these questions and more as we continue to chase more and 'better' ways to make the organization's decisions and interactions with people more accurate, efficient, effective - and let's hope - more equitable.

Monday
September 11, 2017

    VOTE! For the Next Great HR Technology Company

    Quick shot for a busy Monday and a humble appeal to enlist the help and support of blog readers with something that is equal parts fun, cool, important, and did I mention fun?

    Folks that read the blog should know that I am the Program Chair for the HR Technology Conference - the original, largest, and best event of its kind in the world.

    A featured element at the HR Tech Conference is the incredibly fun 'Discovering the Next Great HR Technology Company' session - where a group of highly innovative and game-changing HR Tech startup companies demo, discuss, and defend their solutions and make their case to be named the 'Next Great HR Tech Company'.

    But which HR tech companies will get their chance to vie for the coveted title?

    That's where you come in.

For the last few months I have worked with a team of industry experts - George Larocque, Lance Haun, Madeline Laurano, and Ben Eubanks - to narrow a field of 150+ HR Tech startups down to a group of 8 semi-finalists who will battle for the 'Next Great HR Technology Company' title.

Each of the above-mentioned experts has nominated and coached two HR Tech startups, and in classic 'March Madness' style these 8 will fight for a place in the Final Four that will present at HR Tech in October.

    And we want you to decide which of these HR tech startups will make the Final Four.

    How do you make your voice heard?

    Head over to the HR Technology Conference Insiders blog here. There, you can learn more about the 8 semi-finalist companies - Proxfinity, Papaya Global, Beamery, Beekeeper, Moovila, Best Money Moves, Blueboard, and bob - and register your votes for the final four companies who will square off at HR Tech.

    Let's make HR Tech Great Again!

    Or something like that. 

    But please, head on over to the HR Technology Conference blog, read up and watch videos from each of the 8 semi-finalists, and vote for your favorites to compete for the coveted title of the Next Great HR Technology Company next month at HR Tech.

    And in case you want to learn more about this process and the 8 companies themselves, give this episode of the HR Happy Hour Show a listen - George Larocque and I break down the process, talk about the 8 semi-finalists, and tell you everything you need to know.

    Have a great week!