
    Monday
    May 14, 2018

    Questions to ask before letting an algorithm make HR decisions

    We're nearing the halfway mark of 2018 and I am ready to call it right now - the topic/trend that has dominated, and will continue to dominate, the HR and HR technology discussion this year is Artificial Intelligence, or AI.

    I will accept my share of the responsibility and blame for this, no doubt. I have hit the topic numerous times on the blog, I have programmed at least seven sessions featuring AI topics for the upcoming HR Technology Conference, and the subject comes up on just about every HR Happy Hour Podcast at one point or another. In fact, one of my favorite HR Happy Hour Shows this year was the conversation I had with author and professor Joshua Gans on his new book, Prediction Machines: The Simple Economics of Artificial Intelligence.

    So if you are thinking that everyone in HR and HR tech is all in on AI, you'd probably be right. And yet even with all the attention and hype, at some level I still wonder if we are talking about AI in HR enough. Or more specifically, are we talking about the important issues in AI, and are we asking the right questions before we deploy AI for HR decision making?

    I thought about this again after reading an excellent piece on this very topic, titled 'Math Can't Solve Everything: Questions We Need to be Asking Before Deciding an Algorithm is the Answer', on the Electronic Frontier Foundation site. In this piece, (and you really should read it all), the authors lay out five questions that organizations should consider before turning to AI and algorithms for decision support purposes.

    Let's take a quick look at the five questions HR leaders should be aware of and think about, and, by way of example, examine how each question might be assessed in the context of one common 'AI in HR' use case - applying an algorithm to rank job candidates and decide which ones to interview and consider.

    1. Will this algorithm influence—or serve as the basis of—decisions with the potential to negatively impact people’s lives?

    In the EFF piece, the main example cited of an AI-driven process that might negatively impact people's lives is the use of an algorithm called COMPAS, which has been used to predict convicted criminals' likelihood of becoming repeat offenders. The potential danger is when the COMPAS score influences a judge to issue a longer prison sentence to someone the algorithm suggests is likely to reoffend. But what if COMPAS is wrong? Then the convicted offender ends up spending more time in prison than they should have. So this is a huge issue in the criminal justice system.

    In our HR example the stakes are not quite so high, but they still matter. When algorithms or AI are used to rank job candidates and select candidates for interviews, those candidates who are not selected, or are rated poorly, are certainly negatively impacted by the loss of the opportunity to be considered for employment. That does not necessarily mean the AI is 'wrong' or bad, just that HR leaders need to be open and honest that this kind of AI will certainly impact some people in a negative manner.

    With that established, we can look at the remaining questions to consider when deploying AI in HR.

    2. Can the available data actually lead to a good outcome?

    Any algorithm relies on input data, and the 'right' input data, in order to produce accurate predictions and outcomes. In our AI in HR example, leaders deploying these technologies need to take time to assess the kinds of input data about candidates that are available and that the algorithm is considering when determining things like rankings and recommendations. This is where we have to ask ourselves additional questions about correlation vs. causation, and whether or not one data point is a genuine and valid proxy for another outcome.

    In the candidate evaluation example, if the algorithm is assessing things like a candidate's educational achievement or past industry experience, are we sure that this data is indeed predictive of success in a specific job? Again, I am not contending that we can't know which data elements are predictive and valid, but that we should know them, (or at least have really strong evidence that they are likely to be valid).
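    To make that concrete, here is a minimal sketch, in Python, of the kind of basic sanity check a team might run before trusting a feature in a candidate-ranking algorithm: does each proposed input actually relate to the outcome we care about? The column names ('degree_level', 'years_in_industry', 'first_year_performance') and the data file are hypothetical, and correlation alone does not settle the causation question, but a weak relationship is a strong hint the data can't carry the decision.

```python
# Minimal sketch: check whether proposed candidate features actually relate to
# a real outcome before letting them drive a ranking algorithm.
# All column names and the data file are hypothetical examples.
import pandas as pd

# Historical data on past hires (hypothetical file and schema).
past_hires = pd.read_csv("past_hires.csv")

candidate_features = ["degree_level", "years_in_industry"]  # assumed numeric
outcome = "first_year_performance"

for feature in candidate_features:
    corr = past_hires[feature].corr(past_hires[outcome])
    # A weak correlation is a warning sign; a strong one still isn't proof of
    # causation, just evidence worth validating further.
    print(f"{feature}: correlation with {outcome} = {corr:.2f}")
```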

    3. Is the algorithm fair?

    At the most basic level, and the one with the most applicability to our AI in HR example, HR leaders deploying AI have to assess whether or not the AI is fair, and the simplest way is to review whether the algorithm is treating like groups similarly or disparately. Many organizations are turning to AI-powered candidate assessment and ranking processes precisely to remove human bias from the hiring process and to try to ensure fairness. HR leaders, along with their technology and provider partners, have the challenge and responsibility to validate that this is actually happening. 'Fairness' is a simple concept to grasp but can be extremely hard to prove, yet it is inherently necessary if AI and algorithms are going to drive organizational and even societal outcomes. There is a lot more we could do to break this down, but for now, let's be sure that we in HR ask this question early and often in the AI conversation.
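    By way of illustration, here is a minimal sketch of one simple fairness check: compare the algorithm's interview-selection rates across groups. The group labels and data are made up, and the 80% threshold borrows from the 'four-fifths rule' commonly referenced in US employment settings; treat it as a starting point for review, not proof of fairness or a description of any particular vendor's method.

```python
# Minimal sketch: compare interview-selection rates across groups and flag
# large gaps for review. Groups, data, and the 0.8 threshold are illustrative.
from collections import defaultdict

# (group, selected_for_interview) pairs from a hypothetical screening run
results = [("A", True), ("A", False), ("A", True),
           ("B", False), ("B", False), ("B", True)]

counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
for group, selected in results:
    counts[group][0] += int(selected)
    counts[group][1] += 1

rates = {group: selected / total for group, (selected, total) in counts.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"group {group}: selection rate {rate:.0%}, ratio vs. highest group {ratio:.2f} ({flag})")
```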

    4. How will the results (really) be used by humans?

    If you deploy AI and algorithms for the purposes of ranking candidates, how will you use the AI-generated rankings? Will they be the sole determinant of which candidates get called for interviews, advance in the hiring process, and ultimately have a chance at an offer? Or will the AI rankings be just a part of the consideration and evaluation criteria for candidates, to be supplemented by 'human' review and judgement?

    One of the ways the authors of the EFF piece suggest to ensure that human judgement remains part of the process is to engineer the algorithms so that they don't produce a single numerical value, like a candidate ranking score, but rather a narrative report and review of the candidate that a human HR person or recruiter would have to read. In that review, they would naturally apply some of their own judgement. Bottom line: if your AI is meant to supplement humans and not replace them, you have to take active steps to ensure that is indeed the case in your organization.
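    As a rough illustration of that idea, here is a small, hypothetical sketch of a screening step that returns a short written summary a recruiter has to read, rather than a single score. The field names and thresholds are invented for the example; the point is the shape of the output, not the specific rules.

```python
# Minimal sketch: return a narrative summary instead of a single ranking score,
# so a human reviewer has to read and weigh it. Fields and rules are invented.
def candidate_report(candidate: dict) -> str:
    lines = [f"Candidate: {candidate['name']}"]
    if candidate["years_in_industry"] >= 5:
        lines.append("- Meets the experience guideline for this role.")
    else:
        lines.append("- Below the experience guideline; consider adjacent experience.")
    if candidate["has_required_certification"]:
        lines.append("- Holds the listed certification.")
    else:
        lines.append("- Missing the listed certification; confirm whether it is truly required.")
    lines.append("Reviewer: apply your own judgement before advancing or rejecting.")
    return "\n".join(lines)

print(candidate_report({"name": "Sample Candidate",
                        "years_in_industry": 3,
                        "has_required_certification": True}))
```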

    5. Will people affected by these decisions have any influence over the system?

    This final question is perhaps the trickiest one to answer for our AI in HR example. Job candidates who are not selected for interviews as a result of a poor or lower relative AI-driven ranking will almost always have very little ability to influence the system or process. But rejected candidates often have valid questions about why they were not considered for interviews, and seek advice on how they could strengthen their skills and experience to improve their chances for future opportunities. In this case, it would be important for HR leaders to have enough trust in and visibility into the workings of the algorithm to understand precisely why any given candidate was ranked poorly. This ability to see the levers of the algorithm at work, and to share them in a clear and understandable manner, is what HR leaders have to push their technology partners on, and be able to provide when needed.
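    As a sketch of the kind of transparency that becomes possible when the ranking model is visible, here is a hypothetical example using a simple linear scoring model: for a given candidate, list how much each factor added to or subtracted from the overall score. Real vendor models are more complex and may need dedicated explanation tooling, but this is the level of 'why' a recruiter should be able to share.

```python
# Minimal sketch: break a candidate's score into per-factor contributions so a
# recruiter can explain which factors pulled the ranking down.
# The weights, feature names, and values are hypothetical.
weights = {"years_in_industry": 0.5, "skills_match": 2.0, "assessment_score": 1.0}
candidate = {"years_in_industry": 2, "skills_match": 0.4, "assessment_score": 0.7}

contributions = {feature: weights[feature] * candidate[feature] for feature in weights}
total = sum(contributions.values())

print(f"Total score: {total:.2f}")
for feature, value in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"  {feature}: contributed {value:.2f}")
# The smallest contributors are concrete, explainable reasons to offer a
# rejected candidate who asks how to improve for future opportunities.
```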

    As we continue to discuss and deploy AI in HR processes, we have to keep evaluating these systems and asking these and other important questions. HR decisions are big decisions. They impact people's lives in important and profound ways. They are not to be taken lightly. And if some level of these decisions is to be trusted to an algorithm, then HR leaders have to hold that algorithm (and themselves) accountable.

    Have a great week!

    Thursday
    May 10, 2018

    PODCAST: #HRHappyHour - Oracle Spotlight: Innovation in HCM Technology

    HR Happy Hour - Oracle Spotlight - Episode 2: Innovation in HCM Technology

    Hosts: Steve Boese, Trish McFarlane

    Guest: Gretchen Alarcon, Group Vice President, Product Strategy, Oracle

    Listen HERE

    This week on the HR Happy Hour Show, hosts Steve Boese and Trish McFarlane continue a special series of podcasts with our friends at Oracle HCM. On Episode 2, we are joined by Gretchen Alarcon from Oracle to talk about innovation in HCM technology, and how HR leaders can best position themselves and their organizations to take advantage of these innovations. On the show, we talk about the importance and impact of migrating HCM solutions to the cloud, the emerging influence of AI and machine learning in HCM technology and what that means for HR, and how user focus and user experience are driving many of the most exciting innovations in HCM technology.

    This was a really interesting conversation and one we will build on in upcoming episodes of the Oracle Spotlight series.

    You can listen to the show on the show page HERE, on your favorite podcast app, or by using the widget player below:

    Thanks to Gretchen for joining us and thanks to our friends at Oracle HCM for making this series happen.

    Subscribe to the HR Happy Hour Show on Apple Podcasts, Stitcher Radio, or wherever you get your podcasts - just search for 'HR Happy Hour'.

    Wednesday
    May 2, 2018

    #HRTechConf Update: Submissions Open for Awesome New Technology and Discovering the Next Great HR Tech Company

    NOTE: I had an important HR Technology Conference update that I posted yesterday over on the Conference's HR Tech Insiders blog, but I did want to cross-post here too, to make sure any and all interested HR technology companies and solution providers had the news. Thanks!

    From HR Tech Insiders...

    Attention HR Technology Solution Providers: Submissions to be considered for the annual HR Technology® Conference and Exhibition "Awesome New Technologies for HR" and "Discovering the Next Great HR Technology Company" sessions are being accepted and can be submitted on the HR Tech website HERE.

    In case you are new to these sessions, here is what they are, how they work, and who is eligible for consideration for each session.

    "Awesome New Technologies for HR" showcases larger, more established HR tech solution providers, (publicly traded, been in the market for several years, maybe running TV spots on CNBC, etc.), who are invited to submit their latest, most innovative solutions for consideration. These can be new modules for an existing platform, a reinvention of one or more of their solutions, or something totally new and unique in the HR tech market. During the summer, I will review, arrange demonstrations, and select 5 or 6 solution providers to present for 10 minutes on our main stage at HR Tech and be recognized as an "Awesome New Technology for HR" for 2018.

    "Discovering the Next Great HR Technology Company" is for the startups, less-established, or emerging HR tech solution providers in the space, and works a little differently than "Awesome New Technologies." From the submissions we receive on our website, HR Tech works with a group of industry experts -  George LaRocque, Principal Analyst and Founder of HRWins , Madeline Laurano, Founder and Principal Analyst of Aptitude Research Partners., Ben Eubanks, Principal Analyst of Lighthouse Research & Advisory, and Lance Haun, Practice Director for The Starr Conspiracy to select eight semi-finalist HR tech solution providers.

    Then, during the summer, our analyst coaches will work with the eight semi-finalists to hone their messaging and demonstrations, and we will be posting videos and additional information about the semi-finalist startups.

    In July and August we will be looking to you, the HR Tech Insiders audience, to vote online on the HR Tech Insiders site and help us select, from these eight semi-finalists, the four finalists that will present to the audience at the conference in Las Vegas in September. And in a new wrinkle for 2018, the four finalists will be joined by a fifth company - the winner of the 1st Annual HR Technology Conference Pitchfest, which will take place during the Conference. Finally, this will all culminate in live demonstrations from the five finalists on our main stage, after which Conference attendees will select the Next Great HR Technology Company for 2018, live in Vegas!

    We encourage all interested HR technology solution providers for either session to submit an entry for consideration here. The application deadline is Friday, June 29th, so don't wait too long to submit.

    I can't wait to review the submissions and see all the incredible HR technology innovation I know is out there!

    Tuesday
    May 1, 2018

    Emotional surveillance - coming to a workplace near you?

    I am going to submit today's dispatch from the HR Happy Hour Home Office without much commentary, because, like many tech-driven developments we hear about, this one is probably too extreme to have much of an effect in the US or in most of the other places where readers of this blog reside, (Hi Canada!).

    From one of my favorite sources on all things going on in business in China, the South China Morning Post, here is a little bit of a piece titled 'Forget the Facebook leak: China is mining data directly from workers' brains on an industrial scale':

    Workers outfitted in uniforms staff lines producing sophisticated equipment for telecommunication and other industrial sectors.

    But there’s one big difference – the workers wear caps to monitor their brainwaves, data that management then uses to adjust the pace of production and redesign workflows, according to the company.

    The company said it could increase the overall efficiency of the workers by manipulating the frequency and length of break times to reduce mental stress.

    Hangzhou Zhongheng Electric is just one example of the large-scale application of brain surveillance devices to monitor people’s emotions and other mental activities in the workplace, according to scientists and companies involved in the government-backed projects.

    Concealed in regular safety helmets or uniform hats, these lightweight, wireless sensors constantly monitor the wearer’s brainwaves and stream the data to computers that use artificial intelligence algorithms to detect emotional spikes such as depression, anxiety or rage.

    The technology is in widespread use around the world but China has applied it on an unprecedented scale in factories, public transport, state-owned companies and the military to increase the competitiveness of its manufacturing industry and to maintain social stability.

    Wow, pretty wild, fairly extreme - even by the looser standards for what is ok and not ok in the workplace that still prevail in most of China.

    But here's the interesting thing: we have all already come to accept certain kinds of monitoring in the workplace. We make hourly workers punch in and punch out every day, (and remind them to be sure to punch out before taking lunch). All kinds of call center representatives have their calls and interactions with customers reviewed, and even listened to in real time by supervisors. Warehouse workers are often subjected to really close and detailed monitoring - how fast they find items for an order, how many errors they make per shift, and how closely they achieve "goal" performance each week.

    Even white collar jobs are subject at times to really close monitoring and supervision. Most lawyers and consultants still bill by the hour, so they must keep, and have reviewed, detailed time and activity logs. Many organizations require receipts for every dollar spent on employee travel in order for the employee to get reimbursed. Are you sure you had that Dunkin' coffee for $2.65? Even the rise and increasing popularity of workplace chat apps like Slack has created more environments where your 'status', i.e. whether you are currently working, is visible to everyone and monitored by most.

    The point being that sure, this idea of monitoring employee brainwaves in real time, or as one Chinese official described it, conducting 'emotional surveillance', seems ludicrous, but it can also be seen as just the next, tech-enabled step on a path that lots of organizations are already walking. And the deployment of these kinds of technologies for workers in dangerous, important roles like airline pilot or high-speed train operator could offer another level of safety for the public - a pilot judged to be in a compromised emotional state prior to takeoff could be pulled from the flight as a precaution.

    I don't have a great, insightful conclusion to this story at the moment, other than to say that while it is inevitable that technologies will continue to advance, and to offer better, more plentiful, and more personal information about workers, it is (hopefully) going to be the role of smart HR people to help guide organizations toward the best, fairest, and 'right' use of these kinds of tools. The pilot on the above flight is not just a pattern of brainwaves, after all. He/she is an actual human.

    Have a great day!

    Wednesday
    Apr 18, 2018

    Job Titles of the Future: Technology Ambassador

    Traditionally, the institutions that wielded power and influence, amassed significant wealth, and made an impact on people's lives were governments, (and to some extent religions). When a country's government enacts a policy or issues some new set of rules and regulations, this tends to have an outsize impact and effect on the people of that country, and if the country is big and influential enough, it can even impact people all over the world. Just one recent example - the US/China trade and tariff disputes have sent global equity markets on a kind of wild roller coaster ride lately, affecting markets and the wealth of people all over the world.

    To a less direct, but by no means insignificant, degree, changes in direction or policy from important religious organizations can affect people all over the world. The major world religions are not confined within one country or even two or three - they have followers all across the globe. If tomorrow the Pope issued a decree that, say, women are not allowed to become priests, that news would stir congregations in two hundred countries.

    But increasingly there is another set of globe-spanning institutions that have perhaps even more worldwide influence and importance in people's lives and in commerce than most individual countries or single religions do - the world's largest technology companies. Think Facebook with its 2 billion users. Google with its incredible reach and dominance in web search and mobile phone operating systems. Amazon with its seemingly inexorable march to dominate e-commerce and cloud computing. Apple, with more cash on hand than many small countries. And we haven't even mentioned the Chinese tech giants like Alibaba and Tencent. In China, Tencent's WeChat touches everything - news, communication, shopping, banking, and more.

    The world's largest tech companies are in some ways like countries or religions themselves. Their users are like citizens, and their terms of service, methods of interaction, rules of engagement, codes of conduct, and unique cultures and sub-cultures resemble the frameworks of large religious organizations. Their influence on global economics and societies cannot be overstated. Just as global trade disputes have roiled financial markets in recent days, so have the Facebook data security drama and its fallout. At this stage, who would argue that Facebook founder Mark Zuckerberg isn't one of the world's most important people?

    So all that leads back to the title of this piece, the latest installment of the often imitated, but never surpassed, 'Job Titles of the Future' series. The job title is Technology Ambassador, and the details of this job come to us from Denmark, which, as far as I can tell, is the first and only nation to create and name an official 'Technology Ambassador'.

    What does a Technology Ambassador do? Some details from a piece on Wired UK:

    Ambassadors are traditionally staid public officials, holed up in grand embassies in the farthest-flung corners of the world. Their job? Schmoozing the powerful, smoothing over tricky arguments and promoting their country. "Diplomacy has always been about putting people in outposts where there have been new activities and events - be it in conflict areas, or where innovation, creativity and new technology is influencing our ways of life," explains Casper Klynge, who has just taken up the role as the first ever ambassador to Silicon Valley.

    The job came about when Denmark's Foreign Office decided to create the post of what was then called a "Google ambassador", who would interact with the tech giants. The role was officially created in February; Klynge was appointed a few months later. In late August, he moved to California and into his Palo Alto embassy, where he plans to build a team of more than a dozen staff, supported by a back-office secretary and a number of tech attachés around the world - the first of which will be based in Klynge's old stomping ground, Asia.

    The most important role he has as ambassador shows just how much the world has changed in recent years: he's there to meet Silicon Valley's biggest companies in exactly the same way he has previously met with prime ministers and presidents. "We need to build those relationships because of the key influence these companies have over our daily lives," he says, "and, at the end of the day, over foreign policy and international affairs."

    This appointment of a Technology Ambassador shows Denmark's really progressive and prescient approach to the nation's relationship with 'Big Tech', one that will probably soon be copied by other countries.

    These companies seem to only be growing in influence - signing on more and more of the world's population, developing ever more convenient capabilities and features to keep users engaged, and expanding into more areas of daily life. Think about it this way - what institution or entity is more influential on a macro basis, Facebook or a country like Denmark?

    These are certainly interesting times: tech companies have more users than most countries have citizens or religions have adherents. And unlike most countries and religions, their size and influence still seem to be trending higher. Denmark's decision to think about and treat Big Tech the way nations have traditionally treated other nations is incredibly interesting too. And it's one of the most interesting 'Job Titles of the Future' we have come across yet.

    Have a great day!