
Monday
Jan 28, 2019

    A deep dive into the impact of automation and technology on jobs

I spent some time over the weekend (I know, I probably need some other hobbies) reviewing a new report from the Brookings Institution titled 'Automation and Artificial Intelligence: How Machines are Affecting People and Places'. In the report, Brookings sought to examine, like plenty of other organizations have in the last few years, what the potential impact of advanced technologies, automation, and AI will be on the labor market: mainly, which kinds of jobs, and in which areas, are more or less likely to be affected, changed, and potentially replaced as technology continues to improve and advance.

There is a ton of interesting information in the 100+ page report, and for those of you who are interested in this sort of thing, I would block some time to go through it all. But for those who may want a shorter, TL;DR version, here are the three findings/conclusions/takeaways that were most interesting to me.

    1. Contrary to a lot of hype and hysteria around automation and AI, most jobs are not highly susceptible to automation. Take a look at the key finding from Brookings in the chart below:

While almost no occupation will be completely unaffected by the adoption of new technology, the impact on jobs will vary in intensity and will be significant for only about 25% of jobs. Another 52 million or so jobs, about 36% of the labor force, will see some or medium impact from new tech. And the remaining 39% of jobs will see only a low impact from new technology. This disparate impact on jobs reminds me of the old saying, often attributed to William Gibson: 'The future has already arrived, it's just not evenly distributed.'
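If you want to sanity-check those figures yourself, the 52 million / 36% pairing implies a total base of roughly 145 million jobs. Here's a quick back-of-the-envelope sketch in Python; the totals are derived from the percentages above, not taken directly from the report:

```python
# Back-of-the-envelope check on the Brookings breakdown cited above.
medium_impact_jobs = 52_000_000   # ~36% of the labor force, per the report
medium_share = 0.36

total_jobs = medium_impact_jobs / medium_share  # implies ~144 million jobs

shares = {"high exposure": 0.25, "medium exposure": 0.36, "low exposure": 0.39}
for label, share in shares.items():
    print(f"{label} ({share:.0%}): ~{share * total_jobs / 1e6:.0f} million jobs")
```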

2. Lower-wage jobs are, on average, more exposed to potential automation. Within the variability shown in the first chart, Brookings tries to break down which kinds of jobs, at which wage levels, and in which geographic areas are most likely to be impacted by technology. Here's a look at the data around wage level and potential impact from automation:

The main driver behind lower-wage jobs being more susceptible to automation is that jobs made up largely of routine, predictable physical and cognitive tasks are the ones most vulnerable to automation in the short and medium term. Think jobs like office administration, simple production, and food preparation. So, according to Brookings, the roles that now tend to pay the lowest wages are at the most risk. The danger, of course, is that the people holding these jobs also tend to be the least prepared to make a shift into roles that are more complex, higher up the wage scale, and less likely to be impacted by technology.

3. In addition to varying widely across types of jobs and wage levels, the automation of jobs is likely to vary widely by location as well. The larger relative impact will be felt, according to Brookings, in smaller and more rural areas. See the data below:

There's a lot of detailed data to parse through there, but the basic takeaway is that workers in smaller and more rural communities are about 10 percentage points more likely to have their jobs adversely impacted by technology than workers in urban areas. This could be a by-product of the continuing challenge smaller communities face in keeping their skilled and younger workers from leaving to seek better opportunities in larger towns and cities.

Since this is a long post already, I will leave covering what the folks at Brookings suggest can be done by localities, companies, education, and people to be better prepared for the ongoing waves of automation for a future post. Suffice it to say that understanding the problem and the challenge is the important first step toward solving it.

    Take some time to look at the whole report if you can.

    Have a great week!

Wednesday
Dec 12, 2018

    Job titles of the future: Chief Ethical and Humane Use Officer

If 2018 was the 'Year of AI' in enterprise technology, I suspect 2019 is shaping up to be Year 2. The development, growth, spread, and seeming ubiquity of technology providers touting their AI and Machine Learning powered solutions shows no signs of slowing as we end 2018. As with any newer or emerging technology, the application of AI offers great promise and potential benefits, but can also lead to some unexpected and even undesirable outcomes if not managed closely and effectively.

One leading enterprise technology company, Salesforce, is acting more proactively than most AI players in recognizing the potential for negative applications of AI tools, and is taking steps to address it, most notably by creating and hiring for a new position, today's 'Job Title of the Future': the 'Chief Ethical and Humane Use Officer.'

Details from Business Insider's reporting on the new appointment:

    In the midst of the ongoing controversies over how tech companies can use artificial intelligence for no good, Salesforce is about to hire its first Chief Ethical and Humane Use officer.

    On Monday, Salesforce announced it would hire Paula Goldman to lead its new Office of Ethical and Humane Use, and she will officially start on Jan. 7. This office will focus on developing strategies to use technology in an ethical and humane way at Salesforce. 

    "For years, I've admired Salesforce as a leader in ethical business,” Goldman said in a statement. “We're at an important inflection point as an industry, and I'm excited to work with this team to chart a path forward."

    With the development of the new Office of Ethical and Humane Use, Salesforce plans to merge law, policy and ethics to develop products in an ethical manner. That's especially notable, as Salesforce itself has come under fire from its own employees for a contract it holds with U.S. Customs and Border Protection.

A C-level hire with the remit to develop strategies to use tech in an ethical and humane way is a pretty interesting approach to the challenges of increasingly powerful AI-powered technologies being let loose in the world. Most of the time, enterprise tech companies sell or license their technologies to end customers, who are then more or less free to apply those technologies to solve their own business challenges. The technology providers typically have not waded into making value judgements about their customers, or about the ways the technologies are being applied to the customers' ends.

    What Salesforce seems to be indicating is that they intend to be more intentional or even careful about how their technologies are used in the market, and want to signal their desire to ensure they are used in an ethical and humane way.

This, to me, is a really interesting development in how technology providers (or potentially any kind of product producer) may need to look at how their products are used by customers. This role at Salesforce is focused on AI technologies, probably because AI seems to be an area ripe with potential for misuse. But AI tools and technologies are by no means the only products that, once unleashed on the market, can drive negative outcomes. Here's a short and incomplete list: firearms, soda, fast food, tobacco products, cars that can do 150 MPH, skinny jeans, and on and on.

Will this appointment of a Chief Ethical and Humane Use Officer at Salesforce mark the start of a new trend where product creators take a more active role in how their products and solutions are applied?

We will see, I guess. It will be interesting to watch.

    Have a great day!

Monday
May 14, 2018

    Questions to ask before letting an algorithm make HR decisions

We are nearing the halfway mark of 2018 and I am ready to call it right now: the topic/trend that has dominated, and will continue to dominate, the HR and HR technology discussion this year is Artificial Intelligence, or AI.

I will accept my share of the responsibility and blame for this, no doubt. I have hit the topic numerous times on the blog, I have programmed at least seven sessions featuring AI topics for the upcoming HR Technology Conference, and the subject comes up on just about every HR Happy Hour Podcast at one point or another. In fact, one of my favorite HR Happy Hour shows this year was the conversation I had with author and professor Joshua Gans about his new book, Prediction Machines: The Simple Economics of Artificial Intelligence.

So if you are thinking that everyone in HR and HR tech is all in on AI, you'd probably be right. And yet, even with all the attention and hype, at some level I still wonder if we are talking about AI in HR enough. Or, more specifically, are we talking about the important issues in AI, and are we asking the right questions before we deploy AI for HR decision making?

I thought about this again after reading an excellent piece on this very topic, titled 'Math Can't Solve Everything: Questions We Need to Be Asking Before Deciding an Algorithm is the Answer', on the Electronic Frontier Foundation site. In this piece (and you really should read it all), the authors lay out five questions that organizations should consider before turning to AI and algorithms for decision support.

Let's take a quick look at the five questions that HR leaders should be aware of and think about, and, by way of example, examine how each might be assessed in the context of one common 'AI in HR' use case: applying an algorithm to rank job candidates and decide which candidates to interview and consider.

    1. Will this algorithm influence—or serve as the basis of—decisions with the potential to negatively impact people’s lives?

In the EFF piece, the main warning cited about AI-driven processes negatively impacting people's lives involves an algorithm called COMPAS, which has been used to predict convicted criminals' likelihood of becoming repeat offenders. The potential danger is when the COMPAS score influences a judge to issue a longer prison sentence to someone the algorithm suggests is likely to reoffend. But what if COMPAS is wrong? Then the convicted offender ends up spending more time than they should have in prison. So this is a huge issue in the criminal justice system.

In our HR example, the stakes are not quite so high, but they still matter. When algorithms or AI are used to rank job candidates and select candidates for interviews, those candidates who are not selected, or are rated poorly, are certainly negatively impacted by the loss of the opportunity to be considered for employment. That does not necessarily mean the AI is 'wrong' or bad, but HR leaders need to be open and honest that this kind of AI will certainly impact some people in a negative manner.

    With that established, we can look at the remaining questions to consider when deploying AI in HR.

    2. Can the available data actually lead to a good outcome?

Any algorithm relies on input data, and the 'right' input data, in order to produce accurate predictions and outcomes. In our AI in HR example, leaders deploying these technologies need to take time to assess the kinds of input data about candidates that are available, and that the algorithm is considering, when determining things like rankings and recommendations. This is where we have to ask additional questions about correlation vs. causation, and whether or not one data point is a genuine and valid proxy for another outcome.

In the candidate evaluation example, if the algorithm is assessing things like a candidate's educational achievement or past industry experience, are we sure that this data is indeed predictive of success in the specific job? Again, I am not contending that we can't know which data elements are predictive and valid, but that we should know them (or at least have really strong evidence they are likely to be valid).
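To make that concrete, here's a rough sketch in Python of the kind of validity check I mean: testing whether a candidate attribute actually correlates with performance among past hires before letting it drive rankings. The file and column names are hypothetical, and even a solid correlation would still leave the causation question open:

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical historical data: one row per past hire, with the candidate
# attributes the algorithm would use plus a measured performance outcome.
hires = pd.read_csv("past_hires.csv")

for feature in ["years_experience", "degree_level", "assessment_score"]:
    r, p = pearsonr(hires[feature], hires["first_year_performance"])
    print(f"{feature}: r = {r:.2f}, p = {p:.3f}")
    # A weak or statistically insignificant correlation suggests the
    # feature is a poor proxy for success and should not drive rankings.
```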

    3. Is the algorithm fair?

At the most basic level, and the one with the most applicability for our AI in HR example, HR leaders deploying AI have to assess whether or not the AI is fair. The simplest way is to review whether the algorithm is treating like groups similarly or disparately. Many organizations are turning to AI-powered candidate assessment and ranking processes to try to remove human bias from the hiring process and ensure fairness, and HR leaders, along with their technology and provider partners, have the challenge and responsibility to validate that this is actually happening. 'Fairness' is a simple concept to grasp but can be extremely hard to prove, and it is inherently necessary if AI and algorithms are going to drive organizational and even societal outcomes. There is a lot more we could do to break this down, but for now, let's be sure we know that in HR we have to ask this question early and often in the AI conversation.
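One simple, concrete version of that 'like groups treated similarly' review is to compare selection rates across groups. Here's a rough sketch using the four-fifths rule familiar from US employment practice; the group names and counts are made up for illustration, and a real audit would go much deeper:

```python
# Candidates screened and candidates advanced by the algorithm, by group.
applied  = {"group_a": 100, "group_b": 90}
selected = {"group_a": 40,  "group_b": 18}

rates = {g: selected[g] / applied[g] for g in applied}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    # Four-fifths rule: a group selected at under 80% of the highest
    # group's rate is a red flag for adverse impact.
    flag = "possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")
```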

    4. How will the results (really) be used by humans?

    If you deploy AI and algorithms for the purposes of ranking candidates, how will you use the AI-generated rankings? Will they be the sole determinant of which candidates get called for interviews, advance in the hiring process, and ultimately have a chance at an offer? Or will the AI rankings be just a part of the consideration and evaluation criteria for candidates, to be supplemented by 'human' review and judgement?

One of the ways the authors of the EFF piece suggest for ensuring that human judgement remains part of the process is to engineer the algorithms so that they don't produce a single numerical value, like a candidate ranking score, but rather a narrative report and review of the candidate that a human HR person or recruiter would have to read. In reviewing it, they would naturally apply some of their own human judgement. Bottom line: if your AI is meant to supplement humans and not replace them, you have to take active steps to ensure that is indeed the case in your organization.
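Here's a minimal sketch of what that narrative-instead-of-a-score idea could look like. The fields, thresholds, and wording are all invented for illustration; the EFF piece describes the principle, not this code:

```python
def narrative_report(candidate: dict) -> str:
    """Produce a short narrative a recruiter must read, instead of one score."""
    lines = [f"Candidate: {candidate['name']}"]
    if candidate["assessment_score"] >= 0.7:
        lines.append("- Strong skills-assessment result.")
    else:
        lines.append("- Assessment below the typical range; review work samples.")
    if candidate["years_experience"] < 2:
        lines.append("- Limited experience; weigh potential and trajectory.")
    lines.append("Recruiter action required: advance / hold / decline, with notes.")
    return "\n".join(lines)

print(narrative_report({"name": "A. Candidate",
                        "assessment_score": 0.64,
                        "years_experience": 1}))
```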

    5. Will people affected by these decisions have any influence over the system?

This final question is perhaps the trickiest one to answer for our AI in HR example. Job candidates who are not selected for interviews as a result of a poor or relatively low AI-driven ranking will almost always have very little ability to influence the system or process. But rejected candidates often have valid questions about why they were not considered for interviews, and they seek advice on how they could strengthen their skills and experience to improve their chances for future opportunities. In this case, it would be important for HR leaders to have enough trust in, and visibility into, the workings of the algorithm to understand precisely why any given candidate was ranked poorly. This ability to see the levers of the algorithm at work, and to share them in a clear and understandable manner, is what HR leaders have to push their technology partners on, and be able to provide when needed.
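What might that visibility look like? A minimal sketch, assuming the ranking model is a simple weighted sum (real vendor models may be far more opaque, which is exactly the point to press them on); the weights and features below are made up:

```python
# Hypothetical weights for a transparent, linear ranking model, and the
# normalized attribute values for one rejected candidate.
weights   = {"years_experience": 0.4, "assessment_score": 0.5, "referral": 0.1}
candidate = {"years_experience": 0.2, "assessment_score": 0.9, "referral": 0.0}

contributions = {f: weights[f] * candidate[f] for f in weights}
score = sum(contributions.values())

print(f"overall score: {score:.2f}")
# Listing contributions from lowest to highest shows exactly which inputs
# pulled the ranking down -- the answer to 'why wasn't I selected?'
for feature, value in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"  {feature}: contributed {value:.2f}")
```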

As we continue to discuss and deploy AI in HR processes, we have to also continue to evaluate these systems and ask these and other important questions. HR decisions are big decisions. They impact people's lives in important and profound ways. They are not to be taken lightly. And if some level of these decisions is to be trusted to an algorithm, then HR leaders have to hold that algorithm (and themselves) accountable.

    Have a great week!

Monday
Apr 16, 2018

    PODCAST: #HRHappyHour 319 - HR is About Making Predictions: Understanding AI for HR

    HR Happy Hour 319 - Understanding Artificial Intelligence for Business and HR

    Sponsored by Virgin Pulse - www.virginpulse.com

    Host: Steve Boese

    Guest: Joshua Gans, University of Toronto

    Listen HERE

    This week on the HR Happy Hour Show, Steve is joined by Joshua Gans, Professor of Strategic Management at the University of Toronto, and co-author of the new book, Prediction Machines: The Simple Economics of Artificial Intelligence.

On the show, Joshua gives his easy-to-grasp definition of Artificial Intelligence: AI is really about lowering the cost, and increasing the availability, of predictions about outcomes. These outcomes could be about predicting tomorrow's weather, teaching a self-driving car how to react to changing conditions, or even helping HR and Talent leaders predict who might be the best candidate for a job, or who might be a better fit on the team.

    Joshua breaks down how HR and business leaders should think about AI, how and where to see and understand its impact on business, the need for human judgment, and how to assess and be aware of the hidden dangers and potential biases in AI technology. This was the most lively and engaging (and accessible) conversation about AI I have ever had, and I think any HR or business leader will appreciate the easy, casual way Joshua explains complex topics.

We also talked about 'War Games' (the movie), Moneyball, and the pain of teaching a teenager how to drive.

    Listen to the show on the show page HERE, on your favorite podcast app, or by using the widget player below:

    Thanks to Joshua for joining us!

    Subscribe to the HR Happy Hour Show wherever you get your podcasts - just search for 'HR Happy Hour'.

    And here is the link to Joshua's new book, Prediction Machines: The Simple Economics of Artificial Intelligence.

    Have a great day!

Monday
Apr 9, 2018

    Is every company soon to be an 'Artificial Intelligence' company?

A few years back, the quote 'Every company is a technology company' made the rounds on social media, in presentations on the workplace and the future of work, and in probably more TED talks than anyone could compile.

But while some work and workplace sayings don't necessarily become any more true just because they are repeated all the time ('Culture eats strategy for breakfast', I am looking right at you), this notion of just about every kind of organization becoming much more reliant, dependent, and committed to more and more advanced technologies as a means to survive, compete, and thrive still seems valid to me.

    Can you think of any business, small, medium, or large, that has not had its processes, products, services, communications, administration, customer service, and marketing significantly impacted by new technology in the last decade? Aside from perhaps a few of the very smallest, local service businesses, I can't really think of any. And even those kinds of places, say like a local barbershop or pizza joint, are likely to have a 'Follow us on Facebook' or a 'Find us on Yelp' sticker in the window.

I thought about this idea, of every company being a technology company, again recently when I saw this piece on Business Insider: 'Goldman Sachs made a big hire from Amazon to lead its Artificial Intelligence efforts'. While it isn't surprising or revealing at all to think of a giant financial institution like Goldman being transformed by technology, like so many other firms in all industries, the specific focus on AI technology is, I think, worth noting.

    Here's an excerpt from the piece:

    Goldman Sachs has hired a senior employee from Amazon to run the bank's artificial-intelligence efforts.

    Charles Elkan has joined Goldman Sachs as a managing director leading the firm's machine learning and AI strategies, according to an internal memo viewed by Business Insider.

    Elkan comes from Amazon, where he was responsible for the Artificial Intelligence Laboratory at Amazon Web Services, according to the memo. He previously led the retailing giant's Seattle-based central machine-learning team.

    "In this role, Charles will build and lead a center of excellence to drive machine learning and artificial intelligence strategy and automation, "Elisha Wiesel, Goldman Sachs' chief information officer, wrote in the memo. "Charles will work in partnership with teams across the firm looking to apply leading techniques in their businesses."

The key element, I think, is that Goldman's new AI hire is meant to work with groups across the entire business to find ways to apply AI and Machine Learning technologies. It is almost as if Goldman is not looking to create an 'AI Department' akin to the classic 'IT Department' that exists in just about every company, but rather to find ways to infuse specific kinds of tech and tech approaches all over the company.

And thinking about AI in that way, much differently from how most companies have looked at most major technological advances in the past, is what leads me back to the question in the title of the post. If Goldman (and plenty of other companies too) is looking for ways to embed AI technology and techniques all across the business, then it is not really a stretch to suggest that, at least in some ways, these companies are seeking to become 'AI companies' at their core.

What single technology advance in the last 25 years or so has done the most to change how work and business get done?

    Email?

    The web?

    Mobile phones?

Probably some combination of these three, I would bet. And has any company you have known decided to 'brand' or consider themselves 'an email company'? Or a 'mobile phone company'?

Not really. These were just tools to help them get better, more efficient, and more profitable at being whatever kind of company they really were.

So I think the answer to the 'AI question' for Goldman, or for anyone else going all in on AI at the moment, is 'No', they aren't really trying to become Artificial Intelligence companies. We should probably consider AI and its potential as just another set of tools that can be leveraged in support of whatever it is we are really trying to do.

    Even if it is tempting to try and create the latest management/workplace axiom.

    Have a great week!