    Entries in AI (9)

    Monday
    May 14, 2018

    Questions to ask before letting an algorithm make HR decisions

    We're nearing the halfway mark in 2018, and I am ready to call it right now: the topic/trend that has dominated, and will continue to dominate, the HR and HR technology discussion this year is Artificial Intelligence, or AI.

    I will accept my share of the responsibility and blame for this, no doubt. I have hit the topic numerous times on the blog, I have programmed at least seven sessions featuring AI topics for the upcoming HR Technology Conference, and the subject comes up on just about every HR Happy Hour Podcast at one point or another. In fact, one of my favorite HR Happy Hour Shows this year was the conversation I had with author and professor Joshua Gans on the new book he co-authored, Prediction Machines: The Simple Economics of Artificial Intelligence.

    So if you are thinking that everyone in HR and HR tech is all in on AI you'd probably be right. And yet even with all the attention and hype, at some level I still wonder if we are talking about AI in HR enough. Or more specifically, are we talking about the important issues in AI, and are we asking the right questions before we deploy AI for HR decision making?

    I thought about this again after reading an excellent piece on this very topic, titled 'Math Can't Solve Everything: Questions We Need to be Asking Before Deciding an Algorithm is the Answer' on the Electronic Frontier Foundation site. In this piece, (and you really should read it all), the authors lay out five questions that organizations should consider before turning to AI and algorithms for decision support purposes.

    Let's take a quick look at the five questions that HR leaders should be aware of and think about, and by way of example, examine how these questions might be assessed in the context of one common 'AI in HR' use case - applying an algorithm to rank job candidates and decide which candidates to interview and consider.

    1. Will this algorithm influence—or serve as the basis of—decisions with the potential to negatively impact people’s lives?

    In the EFF piece, the main cautionary example of an AI-driven process negatively impacting people's lives is the use of an algorithm called COMPAS, which has been used to predict a convicted criminal's likelihood of becoming a repeat offender. The potential danger is when the COMPAS score influences a judge to issue a longer prison sentence to someone the algorithm suggests is likely to reoffend. But what if COMPAS is wrong? Then the convicted offender ends up spending more time in prison than they should have. So this is a huge issue in the criminal justice system.

    In our HR example, the stakes are not quite so high, but they still matter. When algorithms or AI are used to rank job candidates and select candidates for interviews, those candidates who are not selected, or who are rated poorly, are certainly negatively impacted by the loss of the opportunity to be considered for employment. That does not necessarily mean the AI is 'wrong' or bad, but HR leaders need to be open and honest that this kind of AI will certainly impact some people in a negative manner.

    With that established, we can look at the remaining questions to consider when deploying AI in HR.

    2. Can the available data actually lead to a good outcome?

    Any algorithm relies on input data, and the 'right' input data, in order to produce accurate predictions and outcomes. In our AI in HR example, leaders deploying these technologies need to take time to assess the kinds of input data about candidates that are available, and that the algorithm is considering, when determining things like rankings and recommendations. This is where we have to ask ourselves additional questions about correlation vs. causation, and whether or not one data point is a genuine and valid proxy for another outcome.

    In the candidate evaluation example, if the algorithm is assessing things like a candidate's educational achievement or past industry experience, are we sure that this data is indeed predictive of success in a specific job? Again, I am not contending that we can't know which data elements are predictive and valid, but that we should know them, (or at least have really strong evidence that they are likely to be valid).
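    To make this concrete, here is a quick, purely illustrative sketch of the kind of sanity check I mean, (the data, the numbers, and the little `pearson` helper are my own invention, not from the EFF piece): before letting a candidate attribute drive rankings, look at whether it actually tracked performance for your own past hires.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up historical data: industry experience (years) vs. first-year
# performance rating for ten past hires.
experience  = [1, 3, 5, 2, 8, 4, 6, 2, 7, 3]
performance = [3.0, 3.2, 2.9, 3.3, 3.0, 3.1, 3.0, 2.8, 3.1, 3.2]

r = pearson(experience, performance)
print(f"experience vs. performance: r = {r:.2f}")

# A weak correlation is a red flag that the attribute is a poor proxy -
# and even a strong one shows correlation, not causation.
if abs(r) < 0.3:
    print("Weak signal - question whether this field belongs in the model.")
```

    A check this simple won't settle the causation question, of course, but even this much forces the conversation about whether a data element belongs in the model at all.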

    3. Is the algorithm fair?

    At the most basic level, and the one with the most applicability for our AI in HR example, HR leaders deploying AI have to assess whether or not the AI is fair - and the simplest way is to review whether the algorithm is treating like groups similarly or disparately. Many organizations are turning to AI-powered candidate assessment and ranking processes to try to remove human bias from the hiring process and to attempt to ensure fairness. HR leaders, along with their technology and provider partners, have the challenge and responsibility to validate that this is actually happening. 'Fairness' is a simple concept to grasp but can be extremely hard to prove, and it is inherently necessary if AI and algorithms are to drive organizational and even societal outcomes. There is a lot more we can do to break this down, but for now, let's be sure we know that we in HR have to ask this question early and often in the AI conversation.
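    One concrete place to start, (this is my addition, not something the EFF piece prescribes), is the 'four-fifths rule' long used in US adverse impact analysis: the selection rate for any group should be at least 80% of the rate for the highest-selected group. A tiny sketch, with made-up counts:

```python
# Illustrative only - the selection counts below are invented.
def selection_rate(selected, applicants):
    return selected / applicants

rates = {
    "group_a": selection_rate(30, 100),  # 30% advanced to interview
    "group_b": selection_rate(18, 100),  # 18% advanced to interview
}

highest = max(rates.values())
for group, rate in rates.items():
    ratio = rate / highest  # "adverse impact ratio" vs. the top group
    flag = "FLAG for review" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} [{flag}]")
```

    A flagged ratio doesn't prove the algorithm is unfair, but it is exactly the kind of early warning HR leaders should be asking their technology partners to surface routinely.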

    4. How will the results (really) be used by humans?

    If you deploy AI and algorithms for the purposes of ranking candidates, how will you use the AI-generated rankings? Will they be the sole determinant of which candidates get called for interviews, advance in the hiring process, and ultimately have a chance at an offer? Or will the AI rankings be just a part of the consideration and evaluation criteria for candidates, to be supplemented by 'human' review and judgement?

    One of the ways the authors of the EFF piece suggest to ensure that human judgement remains part of the process is to engineer the algorithms in such a manner that they don't produce a single numerical value, like a candidate ranking score, but rather a narrative report and review of the candidate that a human HR person or recruiter would have to review. In that review, they would naturally apply some of their own human judgement. Bottom line: if your AI is meant to supplement humans and not replace them, you have to take active steps to ensure that is indeed the case in the organization.

    5. Will people affected by these decisions have any influence over the system?

    This final question is perhaps the trickiest one to answer for our AI in HR example. Job candidates who are not selected for interviews as a result of a poor or lower relative AI-driven ranking will almost always have very little ability to influence the system or process. But rejected candidates often have valid questions as to why they were not considered for interviews, and seek advice on how they could strengthen their skills and experience to improve their chances for future opportunities. In this case, it is important for HR leaders to have enough trust in, and visibility into, the workings of the algorithm to understand precisely why any given candidate was ranked poorly. This ability to see the levers of the algorithm at work, and to share them in a clear and understandable manner, is what HR leaders have to push their technology partners on, and be able to provide when needed.
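    For illustration only, here is a sketch of what that visibility could look like if the ranking model is, (or can be approximated by), a simple weighted score. The features and weights below are invented, not from any real product:

```python
# Invented weights for an invented three-feature scoring model.
WEIGHTS = {"years_experience": 0.5, "skills_match": 2.0, "assessment_score": 1.5}

def explain(candidate):
    """Return the total score plus per-feature contributions, weakest first."""
    contributions = {f: WEIGHTS[f] * v for f, v in candidate.items()}
    total = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: kv[1])
    return total, ranked

total, ranked = explain(
    {"years_experience": 2, "skills_match": 0.4, "assessment_score": 0.6}
)
print(f"overall score: {total:.2f}")
for feature, value in ranked:
    # The weakest contributions are the areas a candidate could strengthen.
    print(f"  {feature}: {value:+.2f}")
```

    Real ranking models are rarely this transparent, which is precisely the point: if a vendor can't produce something at least this legible for a given candidate, HR leaders will struggle to answer the rejected candidate's 'why?'.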

    As we continue to discuss and deploy AI in HR processes, we have to also continue to evaluate these systems and ask these and other important questions. HR decisions are big decisions. They impact people's lives in important and profound ways. They are not to be taken lightly. And if some level of these decisions is to be trusted to an algorithm, then HR leaders have to hold that algorithm (and themselves) accountable.

    Have a great week!

    Monday
    April 16, 2018

    PODCAST: #HRHappyHour 319 - HR is About Making Predictions: Understanding AI for HR

    HR Happy Hour 319 - Understanding Artificial Intelligence for Business and HR

    Sponsored by Virgin Pulse - www.virginpulse.com

    Host: Steve Boese

    Guest: Joshua Gans, University of Toronto

    Listen HERE

    This week on the HR Happy Hour Show, Steve is joined by Joshua Gans, Professor of Strategic Management at the University of Toronto, and co-author of the new book, Prediction Machines: The Simple Economics of Artificial Intelligence.

    On the show, Joshua gives his easy-to-grasp definition of Artificial Intelligence, explaining how AI is really about lowering the cost of making predictions and increasing our ability to create more predictions about outcomes. These outcomes could be about predicting tomorrow's weather, teaching a self-driving car how to react to changing conditions, or even helping HR and Talent leaders predict who might be the best candidate for a job, or who might be a better fit on the team.

    Joshua breaks down how HR and business leaders should think about AI, how and where to see and understand its impact on business, the need for human judgment, and how to assess and be aware of the hidden dangers and potential biases in AI technology. This was the most lively and engaging (and accessible) conversation about AI I have ever had, and I think any HR or business leader will appreciate the easy, casual way Joshua explains complex topics.

    We also talked about 'War Games' (the movie), Moneyball, and the pain of teaching a teenager how to drive.

    Listen to the show on the show page HERE, on your favorite podcast app, or by using the widget player below:

    Thanks to Joshua for joining us!

    Subscribe to the HR Happy Hour Show wherever you get your podcasts - just search for 'HR Happy Hour'.

    And here is the link to Joshua's new book, Prediction Machines: The Simple Economics of Artificial Intelligence.

    Have a great day!

    Monday
    April 9, 2018

    Is every company soon to be an 'Artificial Intelligence' company?

    A few years back, the quote 'Every company is a technology company' made the rounds on social media, in presentations on the workplace and the future of work, and in probably too many TED talks to compile.

    But while some work and workplace sayings, at least to me, don't necessarily become any more true just because they are repeated all the time, ('Culture eats strategy for breakfast', I am looking right at you), this notion that just about every kind of organization is becoming much more reliant on, dependent on, and committed to ever more advanced technologies as a means to survive, compete, and thrive still seems valid to me.

    Can you think of any business, small, medium, or large, that has not had its processes, products, services, communications, administration, customer service, and marketing significantly impacted by new technology in the last decade? Aside from perhaps a few of the very smallest, local service businesses, I can't really think of any. And even those kinds of places, say like a local barbershop or pizza joint, are likely to have a 'Follow us on Facebook' or a 'Find us on Yelp' sticker in the window.

    I thought about this idea, of every company being a technology company, again recently when I saw this piece on Business Insider - 'Goldman Sachs made a big hire from Amazon to lead its Artificial Intelligence efforts'. While it isn't surprising or revealing at all to think of a giant financial institution like Goldman being transformed by technology like so many other firms in all industries, this specific focus on AI technology is I think worth noting.

    Here's an excerpt from the piece:

    Goldman Sachs has hired a senior employee from Amazon to run the bank's artificial-intelligence efforts.

    Charles Elkan has joined Goldman Sachs as a managing director leading the firm's machine learning and AI strategies, according to an internal memo viewed by Business Insider.

    Elkan comes from Amazon, where he was responsible for the Artificial Intelligence Laboratory at Amazon Web Services, according to the memo. He previously led the retailing giant's Seattle-based central machine-learning team.

    "In this role, Charles will build and lead a center of excellence to drive machine learning and artificial intelligence strategy and automation, "Elisha Wiesel, Goldman Sachs' chief information officer, wrote in the memo. "Charles will work in partnership with teams across the firm looking to apply leading techniques in their businesses."

    The key element of the announcement, I think, is that Goldman's new AI hire is meant to work with groups across the entire business to find ways to apply AI and machine learning technologies. It is almost as if Goldman is not looking to create an 'AI Department' akin to the classic 'IT Department' that exists in just about every company, but rather to find ways to infuse specific kinds of tech and tech approaches all over the company.

    And thinking about AI in that way, much differently from how most companies have looked at major technological advances in the past, is what leads me back to the question in the title of this post. If Goldman, (and plenty of other companies too), is looking for ways to embed AI technology and techniques all across the business, then it is not really a stretch to suggest that, at least in some ways, it is seeking to become 'an AI company' at its core.

    What's been the most significant single technology advance in the last 25 years or so that has done more to change how work and business get done?

    Email?

    The web?

    Mobile phones?

    Probably some combination of these three, I would bet. And has any company you have known decided to 'brand' or consider themselves 'an email company'? Or a 'mobile phone company'?

    Not really. These were just tools for trying to get better, more efficient, and more profitable at being whatever kind of company they really were.

    So I think the answer to the 'AI question' for Goldman, or for anyone else going all in with AI at the moment, is 'No' - we aren't really trying to become an Artificial Intelligence company. We should probably just consider AI and its potential as another set of tools that can be leveraged in support of whatever it is we are really trying to do.

    Even if it is tempting to try and create the latest management/workplace axiom.

    Have a great week! 

    Thursday
    March 8, 2018

    CHART OF THE DAY: The Rise of the Smart Speaker

    There is pretty good evidence that the rate of mainstream adoption of new technologies is significantly more rapid than it was in the past. It took something like 60 or 70 years for the home-based, landline telephone to achieve over 90% penetration in US homes once the technology became generally available.

    Fast forward to more recent innovations like the personal computer or the mobile phone, and the time for widespread adoption has diminished to just a couple of decades, (if not less for modern tools and solutions like social media/networking apps).

    New tech, when it 'hits', hits much faster than ever before, and its adoption accelerates across mainstream users much faster as well. Today's Charts of the Day, courtesy of some research done by Voicebot.ai, show just how prevalent the smart speaker, a technology almost no one had in their homes even two years ago, has become.

    Chart 1 - Smart Speaker Market Penetration - US

     

    About 20% of US adults are in homes that have one of these smart speakers. It may not sound like much, but think about it: how many times had you seen one of these as recently as 2016?

    Chart 2 - Smart Speaker Market Share - US

    No surprise, to me at least, that Amazon has the dominant position in the US smart speaker market. They beat their competitors to this market, and their platform, Alexa, has become pretty much synonymous with voice assistant technology as a whole. If I were a company looking to develop solutions for voice, I would start with Alexa for sure.

    Once people, in their 'real lives', begin to adopt a technology solution in large numbers, they begin to seek, demand, and expect these same kinds of technologies to be available and tailored to their workplace needs as well. The data shows that smart speakers like the Echo and the Google Home device are gaining mainstream adoption really, really quickly.

    If your organization has not yet started to think about how to deploy services and access to organizational information via these smart speakers and platforms like Alexa, I wouldn't say you are late, but you are getting close to being late.

    Better to be in front of a freight train rolling down the line than it is to get run over by it.

    Last note - stay tuned for an exciting announcement in this space from your pals at the HR Happy Hour Show.

    Tuesday
    February 6, 2018

    Automated narratives

    We are soon going to reach 'Peak Artificial Intelligence', I think, if we haven't already.

    There have been a million examples of 'AI will replace XYZ' or 'AI for [insert your favorite process here]' pieces and developments in the last couple of years, and if you and your organization are not at least thinking about incorporating AI into your business processes, well, the conventional thinking goes, you are going to be left behind. I suppose time will tell on that. I think the adage (was it from Bill Gates?) that we tend to overestimate the impact of new technology in the short term and underestimate its impact in the long term probably applies to AI as well. AI is definitely coming to a business process near you; it is just a little unclear how long it will take and how much impact it will have on your organization, people, and business.

    But one fairly common theme in all the talk about AI (and automation more generally) is that it will affect, and potentially replace, more mundane, repetitive, rules-heavy, and precisely defined processes and roles (at least initially), while leaving creative, nuanced, complex, and more sophisticated processes and roles to the humans, (at least for now). Robots are going to take the warehouse jobs and maybe some or most of the cashier jobs, but 'creative' types like marketers and advertising folks, for example, would be largely safe from automation. While Watson can win at Jeopardy! and Google can build a machine to win at Go, no AI could come up with, say, one of the amazing ads we just saw during the Super Bowl. Right?

    But wait...

    Check out this excerpt from a piece on Ad Week - 'Coca-Cola Wants to Use AI Bots to Create Its Ads'

    Coca-Cola is one of the most beloved brands in the world and is known for creating some of the best work in the advertising industry. But can an AI bot replace a creative? Mariano Bosaz, the brand’s global senior digital director, wants to find out.

    “Content creation is something that we have been doing for a very long time—we brief creative agencies and then they come up with stories that they audio visualize and then we have 30 seconds or maybe longer,” Bosaz said. “In content, what I want to start experimenting with is automated narratives.”

    In theory, Bosaz thinks AI could be used by his team for everything from creating music for ads, writing scripts, posting a spot on social media and buying media. “That’s a long-term vision,” he said. “I don’t know if we can do it 100 percent with robots yet—maybe one day—but bots is the first expression of where that is going.”

    It is one thing when a manufacturing executive states that he or she wants to automate some or most aspects of a manufacturing or assembly process and reduce levels of human employment in favor of technology - we are coming to expect that robots and tech and AI are simply inevitably going to do those jobs in the future.

    But it is kind of a different thing entirely to hear a 'creative' executive from one of the world's largest companies and most recognized brands openly discuss how technology like AI can, and probably will, begin to take over some or even most parts of a highly creative, expressive process like developing advertising content. We don't, or at least I don't, like to think of these kinds of tasks and jobs as ones that could also fall into the category of 'we are better off having a robot do that'. I mean, (trying) to be creative is mostly how I make a living. Emphasis on the 'trying' part.

    'Automated narratives', for some reason that term stuck out for me when I read the Ad Week piece. Hmm. Probably need to think about that a little longer.

    But while I am pondering, I will end with the disclaimer that this post, (and so far, all the posts on this blog), was 100% produced by a person. Although some days I wish I had access to a blog-writing 'bot.

    Have a great day!