    Entries in Human Resources (86)

    Thursday
    Jun 21, 2018

    PODCAST: #HRHappyHour 325 - Connecting Technology, Data and the Employee Experience

    HR Happy Hour 325 - Connecting Technology, Data, and the Employee Experience

    Hosts: Steve Boese, Trish McFarlane

    Guest: Scot Marcotte, Conduent HR Services

    Sponsored by Virgin Pulse - www.virginpulse.com

    LISTEN HERE

    This week on the HR Happy Hour Show, Steve and Trish were joined by Scot Marcotte, Client Technology Leader at Conduent HR Services, a digital interactions company that serves HR organizations as an 'aggregator of experiences' - bringing together data and insights to help HR deliver better services and keep employees better informed about the health, wealth, and career programs available to them.

    On the show, we talked about some of the major technology and organizational changes that are driving and shaping how leading HR organizations provide services to their employees and help the organization achieve its desired business outcomes. Some of the topics we covered included the focus on employee wellbeing, how wellbeing can be a strategic talent advantage, employee financial wellness (and how that can impact talent management), how HR organizations can make sense of all the data now available to them, employee data security, and much more.

    We also had a quick update from the suburbs - pool installation, snakes on the porch, and rogue cats. And how the 'EAP' plan needs a re-branding.

    You can listen to the show on the show page HERE, on your favorite podcast app, or by using the widget player below:

    This was a fun and interesting show - thanks, Scot, for joining us.

    Remember to subscribe to the HR Happy Hour Show on Apple Podcasts, Stitcher Radio, Google Podcasts (new), or wherever you get your podcasts - just search for 'HR Happy Hour.'

    Tuesday
    Jun 12, 2018

    Balancing data and judgment in HR decision making

    A few weeks ago I did an HR Happy Hour Show with Joshua Gans, co-author of the excellent book Prediction Machines. On the show, we talked about one of the central ideas in the book - the continuing importance of human judgment in decision making, even in an environment where advances in AI technology make predictions (essentially options) more available, numerous, and inexpensive.

    I won't go back through all the reasoning behind this conclusion - I encourage you to listen to the podcast and/or read the book for that - but I did want to point out another excellent example of how this combination of AI-driven prediction and human judgment plays out in human capital management planning and decisions. A recent piece in HBR titled Research: When Retail Workers Have Stable Schedules, Sales and Productivity Go Up shares some really interesting findings from a study that set out to learn whether giving retail workers more schedule certainty and clarity would impact business results, and if so, how.

    Some back story on the idea behind the study first. As demand planning and workforce scheduling software has grown much more sophisticated over the years, many retailers now have the information and the ability to set and adjust worker schedules far more dynamically, almost in real time, than they could in the past. By combining sales and store traffic estimates with workforce planning and scheduling tools that match staffing levels to that demand, store managers can, for the most part, optimize staffing (and therefore control labor costs) much more precisely.
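
    At its simplest, the 'match staffing to demand' step these tools automate looks something like the toy sketch below - my own illustration with made-up numbers, not any vendor's actual model - where an hourly traffic forecast is converted into a recommended headcount per hour.

```python
# Toy illustration (my own, not any vendor's actual model): converting an
# hourly store-traffic forecast into a recommended headcount per hour.

HOURLY_TRAFFIC_FORECAST = [35, 60, 90, 120, 80, 45]  # expected customers per hour
CUSTOMERS_PER_ASSOCIATE_PER_HOUR = 20                # assumed service capacity
MIN_STAFF = 2                                        # coverage floor per hour

recommended = [
    max(MIN_STAFF, -(-traffic // CUSTOMERS_PER_ASSOCIATE_PER_HOUR))  # ceiling division
    for traffic in HOURLY_TRAFFIC_FORECAST
]
print(recommended)  # [2, 3, 5, 6, 4, 3]
```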

    But while optimizing staffing levels in a retail store sounds like sound business practice, and makes the owners of the store happy (typically via reduced labor costs), it also often makes the staff unhappy. In a software- and AI-driven staffing model, workers can find their schedules uncertain and changing from week to week, and they can even lose expected shifts on very short notice, sometimes less than two hours.

    The data and the AI might be 'right' when they recommend a set of staff schedules based on all the available information, but, as we will see in the research referenced in the HBR piece, the data and the AI usually fail to see and understand the impact this kind of scheduling has on the actual people who have to do the actual work.

    You really should read the whole piece on HBR, but I want to share the money quote here - what the researchers found to be the best way for a retailer to incorporate these kinds of advanced AI tools when setting retail store worker schedules:

    At the start of the study, we often heard HQ fault store managers for “emotional scheduling” — a script pushed by the purveyors of scheduling software. “In measuring customer experience and making decisions related to a labor model, retailers should rely solely on facts. Too often, changes are made because of an anecdotal or emotional response from the field,” notes a best practices guide from Kronos.

    However, our experiment shows that a hybrid approach of combining algorithms with manager intuition can lead to better staffing decisions. While our experiment provided guidelines for managers, it still allowed the managers to make the final decision on how much of the interventions to implement. The increase in sales and productivity witnessed at the Gap shows that retailers stand to benefit when they allow discretion to store managers.

    What were some of the benefits of giving managers at least some discretion over scheduling, even when the AI made different recommendations?

    When managers could give more workers more 'certain' or predictable schedules, those workers benefited from being able to predict commute times, to plan around things like education, child care, and other jobs, and to connect more deeply with customers and co-workers. In short, they were happier, and that tended to lead to better work performance, better customer service, and, in the stores studied, increased revenues and profits.

    In time, maybe the AI will learn to understand this nuanced, subtle, but important impact that work schedules have on workers, and how that impact flows through to business results. But until then, it seems best to let the AI make recommendations on optimal staffing, and to let the managers make the final call based on what they know about their staff, their customers, and, well, human nature in general.
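
    To make that division of labor concrete, here is a minimal sketch - my own illustration, not the study's method or any scheduling vendor's API - of a human-in-the-loop step: the algorithm proposes staffing levels, and the store manager can adjust them within agreed guardrails before the schedule is published.

```python
# Hypothetical sketch: the algorithm proposes staffing, the manager adjusts
# within guardrails. Names and thresholds are illustrative assumptions,
# not the study's method or any scheduling vendor's API.

ALGO_RECOMMENDATION = {"Mon": 6, "Tue": 5, "Wed": 5, "Thu": 7, "Fri": 9}  # staff per day
MAX_MANAGER_ADJUSTMENT = 2  # guardrail: at most +/- 2 staff per day


def apply_manager_overrides(recommended: dict, overrides: dict) -> dict:
    """Combine the algorithm's baseline with bounded manager discretion."""
    final = {}
    for day, baseline in recommended.items():
        delta = overrides.get(day, 0)
        # Clamp the manager's change to the agreed range.
        delta = max(-MAX_MANAGER_ADJUSTMENT, min(MAX_MANAGER_ADJUSTMENT, delta))
        final[day] = max(0, baseline + delta)
    return final


# Example: the manager knows Friday is a local festival and adds staff,
# and protects a long-tenured associate's usual Tuesday shift.
print(apply_manager_overrides(ALGO_RECOMMENDATION, {"Fri": 2, "Tue": 1}))
# {'Mon': 6, 'Tue': 6, 'Wed': 5, 'Thu': 7, 'Fri': 11}
```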

    Have a great day!

    Friday
    May 18, 2018

    PODCAST: HR Happy Hour - Oracle Spotlight - Bringing HR and Finance Together

    HR Happy Hour - Oracle Spotlight - Episode 3: Bringing HR and Finance Together

    Hosts: Steve Boese, Trish McFarlane

    Guest: Gretchen Alarcon, Group Vice President, Product Strategy, Oracle

    Listen HERE

    This week on the HR Happy Hour Show, hosts Steve Boese and Trish McFarlane conclude a special series of podcasts with our friends at Oracle HCM. On Episode 3, we are joined by Gretchen Alarcon from Oracle to talk about the increasing importance of the relationship between the CHRO and the CFO, how HR leaders and organizations can partner more effectively with Finance, and the benefits that organizations are realizing when they consolidate administrative systems for HR and Finance on a common cloud platform.

    Additionally, Steve shared his geeky enthusiasm for finance and accounting, Trish talked about how she worked with the CFO as an HR leader, and Gretchen talked about some of the key technical benefits that common cloud platforms drive for HR leaders.

    You can listen to the show on the show page HERE, on your favorite podcast app, or by using the widget player below:

    This was a really interesting conversation and if you enjoyed the show, make sure to check out our other episodes in the Oracle Spotlight series.

    Thanks to Gretchen for joining us and thanks to our friends at Oracle HCM for making this series happen. Learn more at www.oracle.com/hcm

    Subscribe to the HR Happy Hour Show on Apple Podcasts, Stitcher Radio, or wherever you get your podcasts - just search for 'HR Happy Hour'.

    Monday
    May 14, 2018

    Questions to ask before letting an algorithm make HR decisions

    We are nearing the halfway mark of 2018 and I am ready to call it right now - the topic/trend that has dominated, and will continue to dominate, the HR and HR technology discussion this year is Artificial Intelligence, or AI.

    I will accept my share of the responsibility and blame for this, no doubt. I have hit the topic numerous times on the blog, I have programmed at least seven sessions featuring AI topics for the upcoming HR Technology Conference, and the subject comes up on just about every HR Happy Hour Podcast at one point or another. In fact, one of my favorite HR Happy Hour Shows this year was the conversation I had with author and professor Joshua Gans about his new book, Prediction Machines: The Simple Economics of Artificial Intelligence.

    So if you are thinking that everyone in HR and HR tech is all in on AI, you'd probably be right. And yet, even with all the attention and hype, at some level I still wonder if we are talking about AI in HR enough. Or, more specifically, are we talking about the important issues in AI, and are we asking the right questions before we deploy AI for HR decision making?

    I thought about this again after reading an excellent piece on this very topic, titled 'Math Can't Solve Everything: Questions We Need to Be Asking Before Deciding an Algorithm is the Answer', on the Electronic Frontier Foundation site. In this piece (and you really should read it all), the authors lay out five questions that organizations should consider before turning to AI and algorithms for decision support purposes.

    Let's take a quick look at the five questions that HR leaders should be aware of and think about, and by way of example, examine how these questions might be assessed in the context of one common 'AI in HR' use case - applying an algorithm to rank job candidates and decide which candidates to interview and consider.

    1. Will this algorithm influence—or serve as the basis of—decisions with the potential to negatively impact people’s lives?

    In the EFF piece, the main example cited of AI-driven processes negatively impacting people's lives is the use of an algorithm called Compas, which has been used to predict convicted criminals' likelihood of becoming repeat offenders. The potential danger comes when the Compas score influences a judge to issue a longer prison sentence to someone the algorithm suggests is likely to re-offend. But what if Compas is wrong? Then the convicted offender ends up spending more time in prison than they should have. So this is a huge issue in the criminal justice system.

    In our HR example, the stakes are not quite so high, but they still matter. When algorithms or AI are used to rank job candidates and select candidates for interviews, the candidates who are not selected, or who are rated poorly, are certainly negatively impacted by the loss of the opportunity to be considered for employment. That does not necessarily mean the AI is 'wrong' or bad, but HR leaders need to be open and honest that this kind of AI will certainly impact some people in a negative way.

    With that established, we can look at the remaining questions to consider when deploying AI in HR.

    2. Can the available data actually lead to a good outcome?

    Any algorithm relies on input data - the 'right' input data - in order to produce accurate predictions and outcomes. In our AI in HR example, leaders deploying these technologies need to take time to assess the kinds of input data about candidates that are available, and that the algorithm is considering, when determining things like rankings and recommendations. This is where we have to ask additional questions about correlation vs. causation and whether or not one data point is a genuine and valid proxy for another outcome.

    In the candidate evaluation example, if the algorithm is assessing things like a candidate's educational achievement or past industry experience, are we sure that this data is indeed predictive of success in a specific job? Again, I am not contending that we can't know which data elements are predictive and valid, but that we should know them (or at least have really strong evidence that they are likely to be valid).
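
    A minimal way to pressure-test one of these assumptions - sketched below with made-up data and an arbitrary threshold, purely as an illustration - is to look at past hires and check whether the attribute the algorithm is weighting actually correlates with later performance. Even a strong correlation is not causation, but a weak one is a clear warning sign.

```python
# Hypothetical sketch: checking whether a candidate attribute actually predicts
# later job performance before letting an algorithm weight it heavily.
# Data and threshold are made up for illustration. Requires Python 3.10+.
from statistics import correlation

# Past hires: (years of industry experience at hire, performance rating after one year)
past_hires = [
    (1, 3.1), (2, 3.4), (3, 3.3), (4, 3.9), (5, 3.6),
    (6, 4.0), (2, 2.9), (7, 4.2), (3, 3.5), (8, 3.8),
]

experience = [x for x, _ in past_hires]
performance = [y for _, y in past_hires]

r = correlation(experience, performance)
print(f"Correlation between experience and performance: {r:.2f}")

if abs(r) < 0.3:  # arbitrary illustrative threshold
    print("Weak relationship - treat this attribute as a questionable proxy.")
```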

    3. Is the algorithm fair?

    At the most basic level - and the one most applicable to our AI in HR example - HR leaders deploying AI have to assess whether the AI is fair, and the simplest check is to review whether the algorithm treats like groups similarly or disparately. Many organizations are turning to AI-powered candidate assessment and ranking to try to remove human bias from the hiring process and ensure fairness. HR leaders, along with their technology and provider partners, have the challenge and responsibility to validate that this is actually happening. 'Fairness' is a simple concept to grasp but can be extremely hard to prove, yet it is inherently necessary if AI and algorithms are to drive organizational and even societal outcomes. There is a lot more we could do to break this down, but for now let's be sure we know that, in HR, we have to ask this question early and often in the AI conversation.
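
    One simple, commonly used starting point for that check is to compare selection rates across groups, in the spirit of the 'four-fifths rule' from US adverse impact analysis. The sketch below uses made-up counts purely for illustration; passing it does not prove fairness, but failing it is a clear signal to dig deeper into the model and the data.

```python
# Hypothetical sketch: first-pass adverse impact check comparing selection
# rates across groups (the 'four-fifths rule'). Counts are made up.

# group -> (candidates the algorithm advanced, total candidates in group)
outcomes = {
    "Group A": (45, 100),
    "Group B": (28, 100),
}

rates = {group: selected / total for group, (selected, total) in outcomes.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, ratio vs. highest group {ratio:.2f} -> {flag}")
```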

    4. How will the results (really) be used by humans?

    If you deploy AI and algorithms for the purposes of ranking candidates, how will you use the AI-generated rankings? Will they be the sole determinant of which candidates get called for interviews, advance in the hiring process, and ultimately have a chance at an offer? Or will the AI rankings be just a part of the consideration and evaluation criteria for candidates, to be supplemented by 'human' review and judgement?

    One way the authors of the EFF piece suggest ensuring that human judgement stays part of the process is to engineer the algorithms so they don't produce a single numerical value, like a candidate ranking score, but rather a narrative report on the candidate that a human HR person or recruiter has to review. In reviewing that report, they naturally apply some of their own judgement. Bottom line: if your AI is meant to supplement humans and not replace them, you have to take active steps to ensure that is indeed the case in the organization.
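
    Here is a minimal sketch of what that could look like in practice - entirely illustrative, with hypothetical field names - where the system's per-criterion assessments are rendered as a narrative report a recruiter must read before any decision is made, rather than a single rankable number.

```python
# Hypothetical sketch: render the system's per-criterion assessments as a
# narrative report a recruiter must review, rather than a single rankable
# score. All field names and content are illustrative.

def narrative_report(candidate: dict) -> str:
    """Turn per-criterion assessments into prose for human review."""
    lines = [f"Candidate: {candidate['name']}"]
    for criterion, note in candidate["assessments"].items():
        lines.append(f"- {criterion}: {note}")
    lines.append("Recruiter review required before any interview decision.")
    return "\n".join(lines)


candidate = {
    "name": "Candidate 1047",
    "assessments": {
        "Relevant experience": "4 years in a comparable role; 8-month gap in 2016.",
        "Skills match": "Strong match on 6 of 8 listed skills.",
        "Assessment exercise": "Completed; above median on the problem-solving section.",
    },
}

print(narrative_report(candidate))
```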

    5. Will people affected by these decisions have any influence over the system?

    This final question is perhaps the trickiest one to answer for our AI in HR example. Job candidates who are not selected for interviews because of a poor or lower relative AI-driven ranking will almost always have very little ability to influence the system or process. But rejected candidates often have valid questions about why they were not considered and want advice on how to strengthen their skills and experience to improve their chances for future opportunities. So it is important for HR leaders to have enough trust in, and visibility into, the workings of the algorithm to understand precisely why any given candidate was ranked poorly. This ability to see the levers of the algorithm at work, and to share them in a clear and understandable manner, is something HR leaders have to push their technology partners on, and be able to provide when needed.
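
    If the underlying model is a simple weighted score, that kind of visibility is straightforward to provide; the sketch below, with weights and features of my own invention, shows per-feature contributions that would let HR explain exactly which inputs dragged a candidate's ranking down. More complex models need dedicated explanation techniques, which is precisely where technology partners need to be pushed.

```python
# Hypothetical sketch: if the ranking model is a simple weighted score, each
# feature's contribution can be surfaced to explain a low ranking.
# Weights and features are my own illustrative assumptions.

WEIGHTS = {
    "skills_match": 0.5,      # fraction of required skills present (0-1)
    "experience_years": 0.3,  # normalized relevant experience (0-1)
    "assessment_score": 0.2,  # normalized assessment result (0-1)
}


def explain_score(features: dict) -> None:
    """Print the total score and each feature's contribution to it."""
    contributions = {name: WEIGHTS[name] * features[name] for name in WEIGHTS}
    total = sum(contributions.values())
    print(f"Total score: {total:.2f}")
    for feature, value in sorted(contributions.items(), key=lambda kv: kv[1]):
        print(f"  {feature}: contributed {value:.2f} (input was {features[feature]:.2f})")


# A candidate ranked low mainly because of the skills-match input:
explain_score({"skills_match": 0.25, "experience_years": 0.6, "assessment_score": 0.7})
```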

    As we continue to discuss and deploy AI in HR processes, we have to keep evaluating these systems and asking these and other important questions. HR decisions are big decisions. They impact people's lives in important and profound ways. They are not to be taken lightly. And if some level of these decisions is to be trusted to an algorithm, then HR leaders have to hold that algorithm (and themselves) accountable.

    Have a great week!

    Monday
    May 7, 2018

    ANNOUNCEMENT: The HR Happy Hour Show on Amazon Alexa - #HRHappyHour

    I have written quite a bit about Amazon, the Alexa platform, and how excited and optimistic I am about voice interfaces for all kinds of workplace applications. I have been so interested in how Alexa, and voice more generally, are going to impact and influence workplace tech that a few months ago I thought it would be fun and instructive to try to learn how organizations and developers can leverage voice in their applications.

    In order to give this investigation some purpose and structure, I set out to achieve a goal: to create and syndicate a short "Alexa" version of the HR Happy Hour Podcast that would be available to Alexa/Echo users as part of their device's "Flash Briefing," the daily news update that many Alexa users listen to once or even multiple times a day.

    Long story short - today I am happy to share that the HR Happy Hour Show is on Alexa, as an Alexa skill that users can add to their Flash Briefing. In the Alexa app on your smartphone, simply search the library of skills for 'HR Happy Hour' to add the Alexa version of the HR Happy Hour Podcast to your daily Flash Briefing. On the HR Happy Hour on Alexa, Trish McFarlane and I will share news, commentary, opinions, and excerpts from the full HR Happy Hour Podcasts. As always, these will cover topics and issues about work, workplace technology, management, leadership, and more - basically shorter, tighter versions of what has made the HR Happy Hour Podcast so successful since its debut in 2009.

    So for folks like me who are absolutely addicted to their Echo device, and talk with Alexa more than almost anyone else, please consider adding the HR Happy Hour on Alexa to your daily Flash Briefing.

    As always, we would love your comments, feedback, and suggestions for topics and potential guests for this new version of the HR Happy Hour.

    Thanks as always for your support!