On trusting algorithms, even when they make mistakes
Tuesday, February 24, 2015 at 9:36AM
Steve in Big Data, Technology, algorithms, automation, data

Some really interesting research from the University of Pennsylvania on our (people's) tendency to lose faith and trust in data forecasting algorithms (or, more generally, advanced forms of smart automation) more quickly than we lose faith in other humans' capabilities, or our own, after observing even small errors from the algorithm, and even when seeing evidence that, relative to human forecasters, the algorithms are still superior.

From the abstract of Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err:

Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet, when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

Let's unpack that some. In the research conducted at Penn, the authors showed that even when given evidence of a statistical algorithm's overall superior performance at predicting a specific outcome (in the paper, both the humans and the algorithm attempted to predict the likelihood of success of MBA program applicants), most people lost faith and trust in the algorithm and reverted to relying on their own, inferior predictive abilities. And in the study, the participants were incentivized to pick the 'best' method of prediction: they were rewarded with a monetary bonus for making the right choice.

But still, and consistently, the human participants more quickly lost faith and trust in the algorithm, even when logic suggested they should have selected it over their own (and other people's) predictive abilities.

Why is this a problem, this algorithm aversion?

Because while algorithms are proving to be superior at prediction across a wide range of use cases and domains, people can be slow to adopt them. Essentially, whenever prediction errors are likely—as they are in virtually all forecasting tasks—people will be biased against algorithms, because people are more likely to abandon an algorithm than a human judge for making the same mistake.
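To make the arithmetic behind that bias concrete, here is a minimal sketch (my own illustration with made-up error rates, not numbers from the study): an algorithm that is wrong 20% of the time will still visibly err, but over enough predictions it beats a human judge who is wrong 30% of the time.

    import random

    random.seed(42)

    ALGO_ERROR_RATE = 0.20   # hypothetical: the algorithm is wrong 20% of the time
    HUMAN_ERROR_RATE = 0.30  # hypothetical: the human judge is wrong 30% of the time

    def count_errors(error_rate, n_predictions):
        # Count how many of n_predictions go wrong at a given error rate.
        return sum(random.random() < error_rate for _ in range(n_predictions))

    for n in (10, 100, 1000):
        algo_errors = count_errors(ALGO_ERROR_RATE, n)
        human_errors = count_errors(HUMAN_ERROR_RATE, n)
        print(n, "predictions: algorithm wrong", algo_errors, "- human wrong", human_errors)

The point of the sketch is simply that a forecaster can make plenty of visible mistakes and still be the better bet; abandoning it after the first few errors means going back to the option that errs more often.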

What might this mean for you in HR/Talent?

As more HR and related processes, functions, and decisions become 'data-driven', it is likely that sometimes, the algorithms we adopt to help make decisions will make mistakes. 

That 'pre-hire' assessment tool will tell you to hire someone who doesn't actually end up being a good employee.

The 'flight risk' formula will fail to flag an important executive as a risk before they suddenly quit and head to a competitor.

The statistical model will tell you to raise wages for a subset of workers, but after you do, you won't see a corresponding rise in output.

That kind of thing. And once these 'errors' become known, you and your leaders will likely want to stop trusting the data and the algorithms.

What the Penn researchers are saying is that we have much less tolerance for the algorithm's mistakes than we do for our own. And maintaining that attitude in a world where the algorithms are only getting better is, well, a mistake in itself.

The study is here, and it is pretty interesting; I recommend it if you are interested in making your organization more data-driven.

Happy Tuesday.
