    Entries in Big Data (22)

    Tuesday
    Feb 24, 2015

    On trusting algorithms, even when they make mistakes

    Some really interesting research from the University of Pennsylvania on our (people's) tendency to lose faith and trust in data forecasting algorithms (or, more generally, advanced forms of smart automation) more quickly than we lose faith in other humans' capabilities (and our own), after observing even small errors from the algorithm, and even when seeing evidence that, relative to human forecasters, the algorithms are still superior.

    From the abstract of Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err:

    Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet, when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

    Let's unpack that some. In the research conducted at Penn, the authors showed that even when given evidence of a statistical algorithm's overall superior performance at predicting a specific outcome (in the paper it was the likelihood of success of MBA program applicants that the humans and the algorithm attempted to predict), most people lost faith and trust in the algorithm, and reverted to their prior, inferior predictive abilities. And in the study, the participants were incentivized to pick the 'best' method of prediction: They were rewarded with a monetary bonus for making the right choice. 

    But still, and consistently, the human participants more quickly lost faith and trust in the algorithm, even when logic suggested they should have selected it over their own (and other people's) predictive abilities.

    Why is this a problem, this algorithm aversion?

    Because while algorithms are proving to be superior at prediction across a wide range of use cases and domains, people can be slow to adopt them. Essentially, whenever prediction errors are likely—as they are in virtually all forecasting tasks—people will be biased against algorithms, because people are more likely to abandon an algorithm than a human judge for making the same mistake.

    What might this mean for you in HR/Talent?

    As more HR and related processes, functions, and decisions become 'data-driven', it is likely that sometimes, the algorithms we adopt to help make decisions will make mistakes. 

    That 'pre-hire' assessment tool will tell you to hire someone who doesn't actually end up being a good employee.

    The 'flight risk' formula will fail to flag an important executive as a risk before they suddenly quit and head to a competitor.

    The statistical model will tell you to raise wages for a subset of workers, but after you do, you won't see a corresponding rise in output.

    That kind of thing. And once these 'errors' become known, you and your leaders will likely want to stop trusting the data and the algorithms.

    What the Penn researchers are saying is that we have much less tolerance for the algorithm's mistakes than we do for our own. And maintaining that attitude in a world where the algorithms are only getting better is, well, a mistake in itself.

    The study is here, and it is pretty interesting; I recommend it if you are interested in making your organization more data-driven.

    Happy Tuesday.

    Tuesday
    Jan 13, 2015

    What Will Happen if we Move the Company: The Limits of Data

    Some years back in a prior career (and life) I was running HR technology for a mid-size organization that at the time had maybe 5,000 employees scattered across the country, with the largest number located on site at the suburban HQ campus (where I was also located). The HQ was typical of thousands of similar corporate office parks - in an upscale area, close to plenty of shops and services, about one mile from the expressway, and near many desirable towns in which most of the employees lived. In short, it was a perfectly fine place to work, close to many perfectly fine places to live.

    But since in modern business things can never stay in place for very long, a new wrinkle was introduced to the organization and its people - the looming likelihood of a corporate relocation from the suburban, grassy office park to a new corporate HQ to be constructed downtown, in the center of the city. The proposed new HQ building would be about 15 miles from the existing HQ, consolidate several locations in the area into one, and come with some amount of state/local tax incentives making the investment seem attractive to company leaders. Additionally, the building would be owned vs. leased, allowing the company to purpose-design the facility according to our specific needs, which (in theory) would increase overall efficiency and improve productivity. So a win-win all around, right?

    Well, as could be expected, once news of the potential corporate HQ relocation made the rounds across the employee population, complaints, criticism, and even open 'time to start looking for a different job' conversations began. Many employees were not at all happy about the possible increase in their commuting time, the need to drive into the 'scary' center city location each day, the lack of easy shopping and other service options nearby, and overall, the change that was being foisted upon them.

    So while we in HR knew (or at least thought we knew) there would be some HR/talent repercussions if indeed the corporate HQ was relocated, we were kind of at a loss to quantify or predict what these repercussions would be. The best we were able to do, beyond conversations with some managers about what their teams were saying, was to generate some data about the net change in commuting distance for employees, using a simple, open-source Google Maps based tool.

    With that data we were able to show that (as expected), some employees would be adversely impacted in terms of commuting distance and some would actually benefit from the HQ move. But that was about as far as we got with our 'data'.
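    For the curious, here is a minimal sketch of what that kind of commute-delta calculation looks like. It uses straight-line (haversine) distance as a rough stand-in for the driving distances our Google Maps based tool actually produced, and every coordinate and employee record in it is invented for illustration.

```python
# Rough sketch of a commute-delta calculation (straight-line distance only;
# a real tool would use driving distance). All data below is made up.
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3956 * 2 * asin(sqrt(a))

OLD_HQ = (43.10, -77.55)   # hypothetical suburban campus
NEW_HQ = (43.16, -77.61)   # hypothetical downtown site

employees = [
    {"id": 1, "home": (43.21, -77.44)},
    {"id": 2, "home": (43.02, -77.68)},
]

for emp in employees:
    old = haversine_miles(*emp["home"], *OLD_HQ)
    new = haversine_miles(*emp["home"], *NEW_HQ)
    print(emp["id"], round(new - old, 1))  # positive = longer commute after the move
```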

    What we didn't really dive into (and we could have, even with our crude set of technology) was breaking down these impacts by organization, by function, by 'top performer' level, and by 'who is going to be impossible to replace if they leave' criteria.
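    That breakdown would only have been a few more lines of work once the commute deltas were joined to the core HR data. A hypothetical pandas sketch, with invented departments, ratings, and deltas:

```python
# Hypothetical sketch of the breakdown we never did: join commute deltas
# to core HR data and summarize by department and performance rating.
import pandas as pd

hr = pd.DataFrame({
    "employee_id": [1, 2, 3, 4],
    "department":  ["Finance", "IT", "IT", "Sales"],
    "rating":      ["Top", "Solid", "Top", "Solid"],
    "delta_miles": [12.4, -3.1, 8.7, 0.5],   # change in one-way commute
})

# Which groups are hit hardest, and how many people (and top performers) are in them?
impact = (hr.groupby(["department", "rating"])["delta_miles"]
            .agg(["count", "mean"])
            .sort_values("mean", ascending=False))
print(impact)
```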

    What we couldn't do with this data was estimate just how much attrition was actually likely to occur if the move was executed. We really needed to have an idea, beyond casual conversations and rumor, of who and from what areas we might find ourselves under real pressure due to possible resignations.

    And finally, we had no real idea what remedial actions we might consider to try and stave off the voluntary and regrettable separations (the level of which we didn't really know).

    We basically looked at our extremely limited data set and said, 'That's interesting. What do we do with it?'

    Why re-tell this old story? Because someone recently asked me what the difference was between data, analytics, and 2015's hot topic, predictive analytics. And while I was trying to come up with a clever answer (I never really did), I thought of this story of the corporate relocation.

    We had lots of data - the locations of the current campus and the proposed new HQ. We also had the addresses of all the employees. We had all of their 'HR' data - titles, tenure, salary, department, performance rating, etc.

    We kind of took a stab at some analytics - which groups would be impacted the most, what that might mean for certain important areas, etc. But we didn't really produce much insight from the data.

    But we had nothing in terms of predictive analytics - we really had no idea what was actually going to happen with attrition and performance if the HQ was moved, and we definitely had no ideas or insights as to what to do about any of that. And that was always going to be hard to get at - how could we truly predict individuals' decisions based on a set of data and an external influence that had never happened before in our company, when any 'predictions' we made could not have been vetted at all against experience or history?

    So that's my story about data, analytics, and predictive analytics - just one simple example from the field of why this stuff is going to be hard to implement, at least for a little while longer.

    Friday
    Jun 27, 2014

    TOP HR DATA PLAY: Kill the FTE

    I had a fun time riding shotgun to Kris Dunn yesterday on the Fistful of Talent Webinar titled HR Moneyball: The FOT Bootstrapper Guide To Getting Started With Big Data, in which KD and I took a look at some of the ways that HR/Talent pros can use Big Data and Business Intelligence approaches to raise their games and drive the adoption of so-called 'Data-driven HR' in their organizations.

    Of the five 'Big Data' plays in the FOT playbook, I think the one that I dig the most is #3, an idea called 'Salary Cap Utilization'. The basic idea is this - take a play from the world of sports leagues like the NBA and NFL, which force teams to operate under a set of rules that govern maximum total player compensation (the 'Cap'), and apply it inside your organization.

    I know what you are saying: we already do that, and it's called the Annual Salary Budget. We've been managing compensation that way forever. Each budget-holding group or manager is allotted 'X' amount of dollars he/she can 'spend' on total comp for the year, and they (probably subject to a dozen other HR rules around increase percentages, salary bands, etc.) have to sort out how that salary budget is allocated among their staffs.

    But chances are you are placing an additional, and probably unnecessary constraint on your managers as well - something called the full-time equivalent (FTE) budget.

    The FTE budget tells managers that in addition to the maximum amount of $$ you can spend on comp (the Salary Cap), there is some (kind of arbitrary) maximum headcount that you can spend your Salary Cap on, i.e., the FTE budget.

    When I first moved into an HR role, managing the HR systems at a mid-sized company, and first encountered the acronym FTE, I had to ask someone to explain it to me, as I had never seen it before. It seemed like a made-up kind of construct, especially when you have to spend time breaking down and trying to convert worker schedules into their 'full-time' equivalents. And what, really, is 'full-time' anyway? That, too, is an arbitrary measurement to some degree.

    But $$ are not arbitrary and are not subject to interpretation or manipulation. Everyone understands what a dollar-based budget means.
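    To make the contrast concrete, here is a toy sketch comparing the two measures for a hypothetical three-person team; the hours, salaries, and cap figure are all invented.

```python
# Toy comparison of the two constraints: FTE count vs. salary-cap utilization.
# Hours, salaries, and the cap are invented for illustration.
FULL_TIME_HOURS = 40
team = [
    {"name": "A", "weekly_hours": 40, "annual_comp": 95_000},
    {"name": "B", "weekly_hours": 20, "annual_comp": 41_000},
    {"name": "C", "weekly_hours": 32, "annual_comp": 67_000},
]

fte_count = sum(p["weekly_hours"] / FULL_TIME_HOURS for p in team)
total_comp = sum(p["annual_comp"] for p in team)

SALARY_CAP = 250_000
cap_utilization = total_comp / SALARY_CAP

print(f"FTEs: {fte_count:.2f}")                  # 2.30 -- the somewhat arbitrary measure
print(f"Cap utilization: {cap_utilization:.0%}") # 81% -- the one everyone understands
```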

    What are the advantages of dropping the FTE budget/constraint from your playbook?

    1. It gives leaders/managers more autonomy on how they allocate compensation across teams. Instead of operating under the dual constraints of 'heads' and $$, they simply have to make it work within the Cap. Need to make some big changes to reinvent their department? Make it work under the Cap. Want to expand into something new? What can you give up to stay within the Cap? Have 5 all-star, 'A' players that need to get paid or they will walk out the door? Then pay them, just be ready to make the cuts elsewhere to remain within the Cap.

    2. It forces the organization to be more flexible. The overwhelming tendency in an FTE-influenced budgeting scheme is for managers to guard 'their' FTEs like grim death. Have a position sit open or vacant for too long and managers will scramble to fill it with just about anyone, just so they don't 'lose' that precious FTE in the next budgeting cycle. Have a solid employee that wants to transfer out to a role in a different department? A role that might better suit their skills and enhance their career development? Better be willing to give up an FTE, buddy, to make that happen.

    3. It allows HR pros to be more consultative and progressive when talking about things like merit increases, equity increases, offers above salary band maximums, counter-offers, retention bonuses, and most everything related to comp. Remove that FTE constraint and now more of the comp game is open for discussion and adaptation. HR is working with the business around what is important to the business - the relative cost of performance and how to get the most production from available resources. HR can now be in the game of reporting/advising on Salary Cap Utilization instead of counting up heads, something that in most instances does not really matter.

    We had a few other Big Data plays that we shared in the Webinar that were pretty neat as well (Hiring Manager batting average, turnover prediction, Health Care claims per capita), but for me eliminating the FTE might be the simplest and easiest one to get started with. 
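    For the curious, the hiring manager 'batting average' play mentioned above is simple enough to sketch in a few lines; the data, rating scale, and definition of a 'hit' below are all hypothetical.

```python
# Hypothetical 'hiring manager batting average': share of each manager's recent
# hires who are still employed and rated at least 'solid'. Data is made up.
import pandas as pd

hires = pd.DataFrame({
    "hiring_manager": ["Kim", "Kim", "Kim", "Ray", "Ray"],
    "still_employed": [True, True, False, True, False],
    "rating":         [4, 3, 2, 5, 1],   # 1-5 scale, assumed
})

hires["hit"] = hires["still_employed"] & (hires["rating"] >= 3)
batting_avg = hires.groupby("hiring_manager")["hit"].mean().round(3)
print(batting_avg)   # Kim 0.667, Ray 0.500
```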

    Have a great weekend!

    Thursday
    Jun 19, 2014

    WEBINAR: HR Moneyball: How to get started with Big Data for HR

    You have heard the hype: Big Data is taking over the business world, and HR’s going to be expected to make decisions—not through feelings, relationships or gut instinct—but via numbers.  The problem is… your HRIS, ATS and Performance Solutions are all different systems and weren’t built with the big-data revolution in mind. In short, you feel less than ready for workforce analytics—you’re just trying to get the basic reports generated.

    We feel your pain, people. That's why I am glad to participate in the June installment of the Fistful of Talent FREE webinar series with a jam titled HR Moneyball: The FOT Bootstrapper Guide To Getting Started With Big Data. Join Kris Dunn and me for this webinar on Thursday, June 26 at 2pm EST (sponsored by ThoughtSpot, a cool business intelligence startup), and we'll share the following goodies with you:

    - A brief review of where HR stands with Business Intelligence (BI)/Big Data. We'll cover some of the trends, what the bleeding edge is doing, the 3 types of data sources available to HR shops and what the CEOs and business leaders you support are asking for related to data and BI out of the HR Function. We'll also talk about what your options are when HR is the last priority for an over-burdened IT function.

    - Why HR pros need to shift/lean forward. It's not what happened, it's what's going to happen. Getting your head around business intelligence and data means you have to shift your focus from reporting the past and move to predictive analytics. We'll give you examples of great reporting decks from the HR Hall of Fame and tell you how they have to change to meet the call for predictive analytics out of your HR shop.

    - The Five Best HR Plays for Business Intelligence (BI) and Big Data. Since we’re all about helping you win, we wouldn’t do this webinar without giving you some great ideas for where to start with a data play out of your shop. You’re going to stop reporting turnover and start predicting it. You’re going to stop reporting time to fill and start showing which hiring managers are great at—you guessed it—hiring.  We’ll give you five great ideas and show you how to get started piecing the story together.

    - A primer on what’s next once you start channeling Nostradamus. Since you specialize in people, you naturally understand the move to using Business Intelligence (BI)/Big Data that helps you predict the future is only half the battle—you have to have a plan once the predictions are made. We’ll help you understand the natural applications for using your business-intelligence data as both a hammer and a hug—to get people who need to change moving, and to embrace those that truly want your help as a partner.

    You're a quality HR pro who knows how to get things done. Join KD and me on Thursday, June 26 at 2pm EST for HR Moneyball: The FOT Bootstrapper Guide To Getting Started With Big Data and we'll help you understand how to deploy Moneyball principles in HR that allow you to use predictive Big Data to position yourself as the expert you are.

    Hope you can join us on June 26 at 2PM EST.

    Wednesday
    Mar 5, 2014

    Making sense of all that data

    Quick shot, or rather a question, for a snowy Wednesday, which is this:

    Just how are HR and talent leaders at organizations going to make sense of what is already the dramatic increase in workforce data from all the new and disparate sources that are now or will become available?

    If you think the answer is the deployment of more software tools for creating charts, dashboards, graphics, or better visualizations of that data, you might be right. Or at least partly right.

    But it could be that you have already spent time and resources on these kinds of analytics tools and still find that there is a gap between the raw data and the insights you need to derive from that data. Maybe more charts and graphs are not the answer after all. Maybe charts and graphs are not enough.

    But a new company called Narrative Science offers a hint about what the next step might be in data analysis technology with a solution they call Quill.

    Quill is designed to examine raw data, apply complex artificial intelligence algorithms to the data, extract and organize key facts and insights from the data, and finally present that analysis of the data in a narrative, natural language format to the end user.

    So instead of looking at another bar chart with a trend line or a scatter plot that leaves your mind sort of scattered, the Quill system presents a key set of interpretations, conclusions and even talking points for the users (and communicators) of the data.
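    Quill's actual engine is proprietary AI, but the basic data-to-narrative idea can be illustrated with a toy rule-and-template sketch; the metric, thresholds, and wording below are all invented and are not how Quill itself works.

```python
# Toy illustration of the data-to-narrative idea (not Narrative Science's method):
# apply a simple rule to a metric and emit a plain-English talking point.
def narrate_turnover(dept: str, rate: float, prior_rate: float) -> str:
    direction = "up" if rate > prior_rate else "down"
    severity = "sharply" if abs(rate - prior_rate) > 0.05 else "slightly"
    return (f"Turnover in {dept} is {rate:.0%}, {severity} {direction} "
            f"from {prior_rate:.0%} last quarter.")

print(narrate_turnover("Customer Support", 0.18, 0.11))
# Turnover in Customer Support is 18%, sharply up from 11% last quarter.
```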

    Take a look at the video below from Narrative Science to see Quill in action, in the context of an investor's portfolio analysis, and think about how a similar data analysis and narrative overlay could be done on all manner of HR, talent, and workforce data. (Email and RSS subscribers will need to click through.)

    Pretty cool, right? And likely not that terribly complex once some underlying assumptions are put down.

    The financial advisor gets the 'right' talking points and conclusions based on the data and the investor's profile and goals, then he/she can spend more time talking about their go-forward strategy and less time just trying to figure out what the data means. And the advisor can handle more clients too, which is certainly good for the investment firm's bottom line. Surely this has a parallel to the front-line supervisor in any field that has a dozen or more direct reports to keep on track on a daily, weekly, monthly basis.

    But this kind of narrative analysis cuts through one of the chief problems of trying to implement a more data-driven decision-making environment, which is answering, simply, the question of 'Just what is all this data actually telling us?'

    I am not sure whether or not Narrative Science has HR or HCM data analysis capability on the product roadmap for Quill, but I bet even if they don't, we will see this kind of capability in the HCM space sooner or later.

    Or maybe some enterprising HCM solution provider is already doing this, and if so, I hope they submit their solution to the Awesome New Technologies for HR process for HR Tech in October!