    Entries in AI (4)

    Wednesday, Nov 15, 2017

    Self-driving bus crashes, proving all buses should be self-driving

    In case you missed it, a fairly significant pilot of self-driving vehicles, in this case shuttle buses, launched last week in Las Vegas. In this test, shuttle buses developed by the French company Navya will carry passengers along a half-mile route in downtown Las Vegas, (that part of Vegas that most of us who go to Vegas for conferences and conventions tend to ignore). The Navya ARMA buses rely on GPS, cameras, and light-detecting sensors in order to navigate the public streets. According to reports, the yearlong test hopes to shuttle about 250,000 passengers up and down the Vegas streets.

    Pretty cool, right?

    Guess what happened in the first couple of hours after launching the self-driving pilot program?

    Yep, a CRASH.

    The first self-driving bus was in a minor accident within a couple of hours of the service's launch when a (human driven) delivery truck failed to stop in time and collided with the stationary shuttle bus.

    According to a spokesperson from the American Automobile Association, "The truck making the delivery backed into the shuttle which was stopped. Human error causes most traffic collisions, and this was no different."

    No one was hurt, the damage was minor, and the self-driving pilot program continues in Las Vegas.

    Why bring this up, especially on a blog that at least pretends to be about work, HR, HR Tech, etc.?

    Because these kinds of technology developments, of self-driving vehicles, robots that can sort and organize inventory in warehouses, robots that will greet and provide basic customer services in retail environments and hotels, are being developed, improved, and deployed at increasing rates and in more and more contexts.

    Self-driving technology in particular, especially for commercial vehicles, is by some estimates within 10 years of becoming a mainstream technology, potentially displacing hundreds of thousands of commercial truck drivers. And as an aside, this piece describes how the trucking industry is clearly not ready for this and other technological disruptions.

    This is not meant to be another, tired, 'Robots are taking our jobs' post, but rather another reminder that technology-driven disruption will continue to change the nature of work, workplaces, and even our own ideas about the role of people in work and the economy. And HR and HR tech leaders have to take a leading role in how, where, when, and why their organizations navigate these changes, as they sit directly at the intersection of people, technology, and work.

    And lastly, if that Las Vegas delivery truck had been equipped with the same kinds of self-driving tech that the Navya ARMA bus has, there is almost no chance there would have been an accident.

    But it might have been fun if it happened anyway. I'd love to see two 'robot' trucks argue with each other on the side of the road about which one was the doofus who caused the accident.

    Have a great day!

    Wednesday, Nov 8, 2017

    Looking for bias in black-box AI models

    What do you do when you can't sleep?

    Sometimes I watch replays of NBA games, (how about my Knicks?), and sometimes I read papers and articles that I had been meaning to get to, but for one reason or another hadn't made the time.

    That is how I spent an hour or so with 'Detecting Bias in Black-Box Models Using Transparent Model Distillation', a recently published paper by researchers at Cornell, Microsoft, and Airbnb. I know, not exactly 'light' reading.

    Full disclosure, I don't profess to have understood all the details and complexity of the study and research methods, but the basic premise of the research, and the problem that the researchers are looking to find a way to solve is one I do understand, and one that you should too as you think about incorporating AI technologies into workplace processes and decision support/making.

    Namely, that AI technology can only be as good and as accurate as the data it’s trained on, and in many cases we end up incorporating our human biases into algorithms that have the potential to make a huge impact on people’s lives - like decisions about whom to hire and promote and reward.

    In the paper, the researchers created models that mimic the ones used by some companies that created 'risk scores', the kinds of data that are used by a bank to decide whether or not to give someone a loan, or for a judicial administration to decide whether or not to give someone early parole. This first set of models is similar to the ones that these companies use themselves.

    Then the researchers created a second, transparent, model that is trained on the actual outcomes that the first set of models are designed to predict - whether or not the loans were paid back and whether or not the parolee committed another crime. Importantly, these models did include data points that most of us, especially in HR, are trained to ignore - things like gender, race, and age. The researchers did this intentionally, and rather than me try to explain why that is important, read through this section of the paper where they discuss the need to assess these kinds of 'off-limits' data elements, (emphasis mine):

    Sometimes we are interested in detecting bias on variables that have intentionally been excluded from the black-box model. For example, a model trained for recidivism prediction or credit scoring is probably not allowed to use race as an input to prevent the model from learning to be racially biased. Unfortunately, excluding a variable like race from the inputs does not prevent the model from learning to be biased. Racial bias in a data set is likely to be in the outcomes — the targets used for learning; removing the race input variable does not remove the bias from the targets. If race was uncorrelated with all other variables (and combinations of variables) provided to the model as inputs, then removing the race variable would prevent the model from learning to be biased because it would not have any input variables on which to model the bias. Unfortunately, in any large, real-world data set, there is massive correlation among the high-dimensional input variables, and a model trained to predict recidivism or credit risk will learn to be biased from the correlation between other input variables that must remain in the model (e.g., income, education, employment) and the excluded race variable, because these other correlated variables enable the model to more accurately predict the (biased) outcome, recidivism or credit risk. Unfortunately, removing a variable like race or gender does not prevent a model from learning to be biased. Instead, removing protected variables like race or gender makes it harder to detect how the model is biased, because the bias is now spread in a complex way among all of the correlated variables, and also makes correcting the bias more difficult, because the bias is now spread in a complex way through the model instead of being localized to the protected race or gender variables. The main benefit of removing a protected variable like race or gender from the input of a machine learning model is that it allows the group deploying the model to claim (incorrectly) that the model is not biased because it did not use the protected variable.
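    The proxy effect the researchers describe is easy to reproduce. Below is a minimal, hypothetical sketch (synthetic data, plain Python, no real ML library) in which the protected attribute is never given to the model, yet the model's decisions still differ sharply by group, because a correlated proxy (here, income) carries the group signal:

```python
import random

random.seed(0)

# Synthetic data: 'group' is a protected attribute excluded from the model.
# 'income' is a correlated proxy: group 1 tends to have higher income.
# The historical outcome ('approved') was partly driven by group membership,
# so the training targets themselves are biased.
n = 5000
group = [random.randint(0, 1) for _ in range(n)]
income = [random.gauss(50 + 20 * g, 10) for g in group]
approved = [1 if (0.05 * x + 2 * g + random.gauss(0, 1)) > 4.5 else 0
            for x, g in zip(income, group)]

# "Fair" model: predicts approval from income alone (group excluded).
# A one-feature threshold chosen so the model matches the historical base rate.
base_rate = sum(approved) / n
cutoff = sorted(income, reverse=True)[int(base_rate * n)]
predict = [1 if x > cutoff else 0 for x in income]

# Even though group was never an input, approval rates still differ by group,
# because income carries the group signal.
rate = lambda g: sum(p for p, gg in zip(predict, group) if gg == g) / group.count(g)
print(f"approval rate, group 0: {rate(0):.2f}")
print(f"approval rate, group 1: {rate(1):.2f}")
```

    The gap between the two printed rates is the bias the paper says gets "spread in a complex way among all of the correlated variables" once the protected variable is dropped.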

    This is really interesting, and counter-intuitive to how most of us, (me for sure), would think about ensuring that the AI and algorithms we want to deploy are unbiased - say, when evaluating data sets for a process meant to provide decision support for the 'Who should we interview for our software engineer opening?' question.

    I'm sure we've seen or heard about AI for HR solutions that profess to eliminate biases like the ones that have existed around gender, race, and even age from important HR processes by 'hiding' or removing the indicators of such protected and/or under-represented groups.

    This study suggests that removing those indicators from the process and the design of the AI is exactly the wrong approach - and that with large data sets, the AI can and will 'learn' to be biased anyway.

    Really powerful and interesting stuff for sure.

    As I said, I don't profess to get all the details of this research but I do know this. If I were evaluating an AI for HR tool for something like hiring decision support, I probably would ask these questions of a potential provider:

    1. Do you include indicators of a candidate's race, gender, age, etc. in the AI/algorithms that you apply in order to produce your recommendations?

    If their answer is 'No we don't include those indicators.'

    2. Then, are you sure that your AI/algorithms aren't learning how to figure them out anyway, i.e., are still potentially biased against under-represented or protected groups?

    Important questions to ask, I think.
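    And if you want to go one step beyond asking, a simple check you can run on a tool's actual recommendations is the 'four-fifths rule' long used in US adverse-impact analysis: compare selection rates across groups and flag any ratio below 0.8. A minimal sketch, with made-up screening data (the function and the numbers are my own illustration):

```python
def adverse_impact_ratio(selected, group):
    """Four-fifths rule check: ratio of the lowest group selection rate
    to the highest. Values below 0.8 are a conventional red flag."""
    rates = {}
    for g in set(group):
        members = [s for s, gg in zip(selected, group) if gg == g]
        rates[g] = sum(members) / len(members)
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical screening output: 1 = recommended for interview
selected = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
group    = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]

ratio, rates = adverse_impact_ratio(selected, group)
print(rates, round(ratio, 2))
```

    Here group A is recommended at 4 in 6 and group B at 1 in 6, a ratio of 0.25 - well under the 0.8 threshold, and exactly the kind of pattern a vendor's 'we removed those indicators' answer won't surface on its own.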

    Back to the study, (in case you don't slog all the way through it). The researchers did conclude that for both of the large AI tools they examined, (loan approvals and parole approvals), the existing models still exhibited biases that their creators professed to have 'engineered' away. And chances are, had the researchers trained their sights on one of the HR processes that AI is being deployed in, they would have found the same thing.

    Have a great day!

    Tuesday, Sep 12, 2017

    For anyone building or implementing AI for HR or hiring

    You can't swing a hammer anywhere these days without hitting an 'AI in HR' article, prediction, webinar, talk, or HR conference session. Heck, we will have a fair bit of AI in HR talk at the upcoming HR Technology Conference in October.

    But one of the important elements that the AI in HR pieces usually fail to address adequately, if at all, is the potential for inherent bias, unfairness, or even worse to find its way into the algorithms that will seep into HR and hiring decisions more and more. After all, this AI and these algorithms aren't (yet) able to construct themselves. They are all being developed by people, and as such are certainly subject, potentially, to those people's own human imperfections. Said differently, what mechanism exists to protect the users, and the people that the AI impacts, from the biases, unconscious or otherwise, of the creators?

    I thought about this while reading an excellent essay on the Savage Minds anthropology blog written by Sally Applin, titled Artificial Intelligence: Making AI in Our Images.

    A quick excerpt from the piece, (but you really should read the entire thing):

    Automation currently employs constructed and estimated logic via algorithms to offer choices to people in a computerized context. At the present, the choices on offer within these systems are constrained to the logic of the person or persons programming these algorithms and developing that AI logic. These programs are created both by people of a specific gender for the most part (males), in particular kinds of industries and research groups (computer and technology), in specific geographic locales (Silicon Valley and other tech centers), and contain within them particular “baked-in” biases and assumptions based on the limitations and viewpoints of those creating them. As such, out of the gate, these efforts do not represent society writ large nor the individuals within them in any global context. This is worrying. We are already seeing examples of these processes not taking into consideration children, women, minorities, and older workers in terms of even basic hiring talent to create AI. As such, how can these algorithms, this AI, at the most basic level, be representative for any type of population other than its own creators?

    A really challenging and provocative point of view on the dangers of AI being (seemingly) created by mostly male mostly Silicon Valley types, with mostly the same kinds of backgrounds. 

    At a minimum, for folks working on and thinking of implementing AI solutions in the HR space that will impact incredibly important, life-impacting decisions like who should get hired for a job, we owe it to those who are going to be affected by these AIs to ask a few basic questions.

    Like, is the team developing the AI representative of a wide range of perspectives, backgrounds, nationalities, and races, and is it gender balanced?

    Or what internal QA mechanisms have been put into place to protect against the kinds of human biases that Applin describes from seeping into the AI's own 'thought' processes?

    And finally, does the AI take into account differences in cultures, societies, and national or local identities that we humans seem to be able to grasp pretty easily, but that an AI can have a difficult time comprehending?

    Again, I encourage anyone at any level interested in AI in HR to think about these questions and more as we continue to chase more and 'better' ways to make the organization's decisions and interactions with people more accurate, efficient, effective - and let's hope - more equitable.

    Tuesday, May 9, 2017

    Never gets tired, never stops learning

    Sharing another dispatch from the 'robots are coming to take all our jobs away' world with this recent piece from Digiday, "Who needs media planners when a tireless robot named Albert can do the job?".

    The back story of this particular implementation of AI to replace, (or as we will learn, perhaps just augment or supplement), human labor comes from advertising, where the relatively new concept of programmatic digital advertising has emerged in the last few years. Part of the process of getting things like banner ads, Facebook ads, display ads, and even branded video ads in front of consumers involves marketers choosing the type of ads to show, the content of those ads, the days/times to show the ads, and finally the platforms to push the ads to.

    If it all sounds pretty complex to you, then you're right.

    Enter "Albert." As per the Digiday piece, once the advertiser, (in this case Dole Foods), set some blanket objectives and goals, Albert determined what media to invest in, at what times, and in what formats. It also decided where to spend the brand’s budget. On a real-time basis, it was able to figure out the right combinations of creative and headlines. For example, once Albert determined that Dole’s user engagement rate on Facebook was 40 percent higher for mobile than desktop, Albert shifted more budget to mobile.
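    To make that budget-shifting step concrete, here is a hypothetical sketch of the kind of rule just described - reallocating spend in proportion to observed engagement. The function name and the numbers are my own illustration, not Albert's actual logic:

```python
def reallocate(budget, engagement):
    """Shift total spend across channels in proportion to observed
    engagement - a simplified stand-in for real-time budget optimization."""
    total = sum(budget.values())
    weight = sum(engagement.values())
    return {channel: total * engagement[channel] / weight for channel in budget}

# Hypothetical figures: mobile engagement 40 percent higher than desktop,
# echoing the Dole example in the piece.
budget = {"mobile": 500.0, "desktop": 500.0}
engagement = {"mobile": 0.14, "desktop": 0.10}

new_budget = reallocate(budget, engagement)
print(new_budget)  # more of the total spend now goes to mobile
```

    The real system is surely far more sophisticated, but the loop is the same: observe a performance signal, reweight spend toward what is working, repeat continuously.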

    The results have been impressive: according to Dole, the brand had an 87 percent increase in sales versus the prior year.

    Why bring this up here, on a quasi-HR blog?

    Because it highlights really clearly a real-life example of the conditions of work that are most ripe for automation, (or at least augmentation). Namely: a data-intensive environment with large volumes of detailed information to analyze; a fast-moving, rapidly changing set of conditions that needs to be reacted to in real time, (and 24/7); and finally, the need to constantly assess outcomes and compare choices in order to adjust strategies and execution plans to optimize for the desired outcomes.

    People are good at those things. But an AI like Albert might be (probably is) better at those things.

    But in the piece we also see the needed and hard-to-automate contributions of the marketing people at Dole as well.

    They have to give Albert the direction and set the desired business goals - sales, clicks, 'likes', etc.

    They have to develop the various creative content and options from which Albert will eventually choose to run. 

    And finally, they have to know if Albert's recommendations actually do make sense and 'fit' with the overall brand message and strategy.

    Let's recap. People: set goals and strategic objectives, develop creative content, and "understand" the company, brand, context, and environment. AI: executes at scale, assesses results in real-time, optimizes actions in order to meet stated goals, and provides openness into the actions it is taking.

    It sounds like a really reasonable, and pretty effective implementation of AI in a real business context.

    And an optimistic one too, as the 'jobs' that Albert leaves for the people to do seem like the ones that people will want to do.